# A $q$–analogue of a formula of Hernández obtained by inverting a result of Dilcher
## 1. The identities
Hernández proved the following identity:
$$\sum_{1\le k\le n}\binom{n}{k}(-1)^{k-1}\sum_{1\le i_1\le i_2\le\cdots\le i_m=k}\frac{1}{i_1i_2\cdots i_m}=\sum_{1\le k\le n}\frac{1}{k^m}.$$
(1)
However, this identity does not really require a proof, since, as we will show, it is just an inverted form of an identity of Dilcher:
$$\sum_{1\le k\le n}\binom{n}{k}(-1)^{k-1}\frac{1}{k^m}=\sum_{1\le i_1\le i_2\le\cdots\le i_m\le n}\frac{1}{i_1i_2\cdots i_m}.$$
(2)
Define for $k\ge1$
$$a_k:=\sum_{1\le i_1\le i_2\le\cdots\le i_m=k}\frac{1}{i_1i_2\cdots i_m}\qquad\text{and}\qquad b_k:=\frac{1}{k^m},$$
and $a_0=b_0=0$; then the identities are
$$\sum_{0\le k\le n}\binom{n}{k}(-1)^k a_k=-\sum_{0\le k\le n}b_k,$$
$$\sum_{0\le k\le n}\binom{n}{k}(-1)^k b_k=-\sum_{0\le k\le n}a_k.$$
They are inverse relations, as can be seen by introducing the ordinary generating functions $A(z)=\sum_{n\ge0}a_nz^n$ and $B(z)=\sum_{n\ge0}b_nz^n$. Then the identities read
$$\frac{1}{1-z}\,A\!\left(\frac{z}{z-1}\right)=-\frac{1}{1-z}\,B(z),$$
$$\frac{1}{1-z}\,B\!\left(\frac{z}{z-1}\right)=-\frac{1}{1-z}\,A(z).$$
However,
$$w=\frac{z}{z-1}\iff z=\frac{w}{w-1},$$
so the substitution is an involution and the two relations are equivalent; the proof is finished.
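Both identities, and the inversion between them, are easy to check numerically. Here is a minimal sketch in exact rational arithmetic; the helper name `nested` is ours:

```python
from fractions import Fraction
from math import comb
from itertools import combinations_with_replacement

def nested(m, n, fixed_max=None):
    """Sum of 1/(i_1...i_m) over 1 <= i_1 <= ... <= i_m <= n,
    optionally keeping only tuples whose maximum equals fixed_max."""
    total = Fraction(0)
    for tup in combinations_with_replacement(range(1, n + 1), m):
        if fixed_max is None or tup[-1] == fixed_max:
            prod = Fraction(1)
            for i in tup:
                prod *= i
            total += 1 / prod
    return total

m, n = 3, 6
# Hernandez, identity (1)
lhs = sum(comb(n, k) * (-1) ** (k - 1) * nested(m, n, fixed_max=k) for k in range(1, n + 1))
assert lhs == sum(Fraction(1, k ** m) for k in range(1, n + 1))
# Dilcher, identity (2)
lhs = sum(comb(n, k) * (-1) ** (k - 1) * Fraction(1, k ** m) for k in range(1, n + 1))
assert lhs == nested(m, n)
print("(1) and (2) verified for m = %d, n = %d" % (m, n))
```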
We note that Dilcher’s sum also appears, in disguised form, elsewhere in the literature.
## 2. A $q$–analogue
Dilcher’s formula (2) was only a corollary of his elegant $q$–version:
$$\sum_{1\le k\le n}\left[{n\atop k}\right]_q(-1)^{k-1}\frac{q^{\binom{k+1}{2}+(m-1)k}}{(1-q^k)^m}=\sum_{1\le i_1\le i_2\le\cdots\le i_m\le n}\frac{q^{i_1}}{1-q^{i_1}}\cdots\frac{q^{i_m}}{1-q^{i_m}}.$$
Here, $\left[{n\atop k}\right]_q$ denotes the Gaussian polynomial
$$\left[{n\atop k}\right]_q=\frac{(q;q)_n}{(q;q)_k(q;q)_{n-k}}$$
with
$$(x;q)_n:=(1-x)(1-xq)\cdots(1-xq^{n-1}).$$
Apart from Dilcher’s paper, a further article is also of some relevance in this context.
Therefore it is a natural question to find a $q$–analogue of Hernández’ formula, or, what amounts to the same, to find the appropriate inverse relations for the $q$–analogues.
We state them in the following lemma which is almost surely not new. However, I found it more appealing to derive it myself rather than hunting for it in the vast $`q`$–literature.
###### Lemma 1.
$$\sum_{k=0}^{n}b_k=\sum_{k=0}^{n}\left[{n\atop k}\right]_q(-1)^k\,q^{\binom{k}{2}}\,a_k,\qquad(3)$$
$$\sum_{k=0}^{n}q^{-k}a_k=\sum_{k=0}^{n}\left[{n\atop k}\right]_q(-1)^k\,q^{-kn+\binom{k}{2}}\,b_k.\qquad(4)$$
Proof. First note that it is much easier to prove this than to find it (it took me a few hours, but I am not too experienced).
We note that, as is usual in such cases, it is sufficient to prove it for a basis of the vector space of polynomials. Here, we choose $a_n=q^nx^n\big(1-\frac{1}{x}\big)$ for $n\ge1$ and $a_0=1$.
We need the following standard formulæ, which are consequences of the $q$–binomial theorem:
$$\sum_{k=0}^{n}\left[{n\atop k}\right]_q(-1)^k q^{\binom{k}{2}}x^k=(x;q)_n,$$
$$\sum_{k=0}^{n}\left[{n\atop k}\right]_q(x;q)_k\,x^{n-k}=1.$$
We plug the form of $a_n$ into the right-hand side of (3) and obtain
$$\Big(1-\frac{1}{x}\Big)\sum_{k=0}^{n}\left[{n\atop k}\right]_q(-1)^k q^{\binom{k}{2}}(qx)^k=\Big(1-\frac{1}{x}\Big)(qx;q)_n=-\frac{1}{x}(x;q)_{n+1}=\sum_{k=0}^{n}b_k.$$
Thus
$$b_n=-\frac{1}{x}(x;q)_{n+1}+\frac{1}{x}(x;q)_n=-\frac{1}{x}(x;q)_n\big((1-xq^n)-1\big)=q^n(x;q)_n.$$
We are done if these values of $`a_n`$ and $`b_n`$ also satisfy the relation (4). Note that (4) can be rewritten as
$$\sum_{k=0}^{n}q^{-k}a_k=\sum_{k=0}^{n}\left[{n\atop k}\right]_{1/q}(-1)^k q^{-\binom{k+1}{2}}b_k.$$
We plug $b_n$ into the right-hand side of (4) and obtain
$$\sum_{k=0}^{n}\left[{n\atop k}\right]_{1/q}(-1)^k q^{-\binom{k+1}{2}}q^k(x;q)_k=\sum_{k=0}^{n}\left[{n\atop k}\right]_{1/q}x^k\Big(\frac{1}{x};\frac{1}{q}\Big)_k=x^n=\sum_{k=0}^{n}q^{-k}a_k.$$
Thus
$$q^{-n}a_n=x^n-x^{n-1}=x^n\Big(1-\frac{1}{x}\Big).$$
We would like to remark that an alternative proof could be found by dealing with the matrices of connecting coefficients.
Define matrices
$$T:=\left[\left[{n\atop k}\right]_q(-1)^k q^{\binom{k}{2}}\right]_{n,k},\qquad S:=\left[\left[{n\atop k}\right]_q(-1)^k q^{-kn+\binom{k}{2}}\right]_{n,k},$$
$$U=\big[\mathbf{1}_{n\ge k}\big]_{n,k},\qquad\text{and}\qquad V=\big[q^{-k}\,\mathbf{1}_{n\ge k}\big]_{n,k}.$$
Then we have to prove that $S=VT^{-1}U$.
This is not too hard, since
$$T^{-1}=\left[\left[{n\atop k}\right]_q(-1)^k q^{-kn+\binom{k+1}{2}}\right]_{n,k},$$
and
$$T^{-1}U=\left[\left[{n-1\atop k-1}\right]_q(-1)^k q^{-n(k-1)+\binom{k}{2}}\right]_{n,k}.$$
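These matrix identities can also be checked directly. A minimal sketch in exact rational arithmetic; the helper names (`gauss`, `binom2`, `matmul`) and the generic choice $q=2$ are ours:

```python
from fractions import Fraction

q = Fraction(2)   # any value that is not a root of unity will do
N = 6             # check the top-left N x N blocks

def gauss(n, k):
    """Gaussian polynomial [n k]_q via the q-Pascal recurrence."""
    if k < 0 or k > n:
        return Fraction(0)
    if k == 0 or k == n:
        return Fraction(1)
    return gauss(n - 1, k - 1) + q ** k * gauss(n - 1, k)

def binom2(k):
    return k * (k - 1) // 2

T = [[gauss(n, k) * (-1) ** k * q ** binom2(k) for k in range(N)] for n in range(N)]
Tinv = [[gauss(n, k) * (-1) ** k * q ** (binom2(k + 1) - k * n) for k in range(N)]
        for n in range(N)]
U = [[Fraction(1) if n >= k else Fraction(0) for k in range(N)] for n in range(N)]
V = [[q ** (-k) if n >= k else Fraction(0) for k in range(N)] for n in range(N)]
S = [[gauss(n, k) * (-1) ** k * q ** (binom2(k) - k * n) for k in range(N)]
     for n in range(N)]

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(N)) for k in range(N)]
            for i in range(N)]

assert matmul(T, Tinv) == [[Fraction(i == j) for j in range(N)] for i in range(N)]
assert matmul(V, matmul(Tinv, U)) == S
print("T^(-1) and S = V T^(-1) U verified up to N =", N)
```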
###### Theorem 1.
[$q$–analogue of Hernández’ formula]
$$\sum_{1\le k\le n}\left[{n\atop k}\right]_q(-1)^{k-1}q^{-kn+\binom{k}{2}}\sum_{1\le i_1\le i_2\le\cdots\le i_m=k}\frac{q^{i_1}}{1-q^{i_1}}\cdots\frac{q^{i_m}}{1-q^{i_m}}=\sum_{1\le k\le n}\frac{q^{k(m-1)}}{(1-q^k)^m}.$$
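A numeric check of the theorem, and of Dilcher’s identity that it inverts, reusing `Fraction`, `gauss`, `binom2`, and `q` from the sketches above; the helper `q_product_sum` is ours:

```python
from itertools import combinations_with_replacement

def q_product_sum(m, n, fixed_max=None):
    """Sum over 1 <= i_1 <= ... <= i_m <= n of prod_j q^{i_j}/(1-q^{i_j}),
    optionally restricted to tuples whose maximum equals fixed_max."""
    tot = Fraction(0)
    for tup in combinations_with_replacement(range(1, n + 1), m):
        if fixed_max is None or tup[-1] == fixed_max:
            p = Fraction(1)
            for i in tup:
                p *= q ** i / (1 - q ** i)
            tot += p
    return tot

m, n = 2, 5
# Dilcher's q-identity
lhs = sum(gauss(n, k) * (-1) ** (k - 1)
          * q ** (binom2(k + 1) + (m - 1) * k) / (1 - q ** k) ** m
          for k in range(1, n + 1))
assert lhs == q_product_sum(m, n)
# Theorem 1
lhs = sum(gauss(n, k) * (-1) ** (k - 1) * q ** (binom2(k) - k * n)
          * q_product_sum(m, n, fixed_max=k) for k in range(1, n + 1))
assert lhs == sum(q ** (k * (m - 1)) / (1 - q ** k) ** m for k in range(1, n + 1))
print("Dilcher's identity and Theorem 1 verified for m = %d, n = %d" % (m, n))
```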
# YOUNG EXTRAGALACTIC RADIO SOURCES
## 1 Selection of Young Radio Sources: GPS versus CSO
The identification and investigation of the young counterparts of ‘old’ extended radio sources is a key element in the study of the evolution of radio-loud active galactic nuclei. Two classes of compact radio source, Gigahertz Peaked Spectrum (GPS) sources and Compact Symmetric Objects (CSO), are the most likely representatives of this early evolutionary phase.
GPS sources are characterised by a convex-shaped radio spectrum peaking at about 1 GHz in frequency, and are selected on the basis of interferometric or single-dish flux density measurements at several ($>4$) frequencies. CSOs are characterised by their small size ($<500$ pc) and two-sided radio structure, e.g. having jets/lobes on both sides of a central core. They are selected on their milli-arcsecond morphology, which requires high-resolution VLBI observations at at least two frequencies. Since CSO and GPS sources are selected in such different ways, studies of these objects have mostly been presented separately. However, a significant overlap between these two classes exists. GPS sources optically identified with galaxies are most likely to possess compact symmetric morphologies, and the large majority of CSOs exhibit a gigahertz-peaked spectrum. The large but not complete overlap between these two classes of sources is caused by the synchrotron self-absorbed mini-lobes, located at the extremities of most CSOs, being the main contributors to the overall radio spectrum, and producing the peak at about 1 GHz in frequency (Fig. 1). Since orientation effects can influence both the observed radio morphology and the radio spectrum, the selection of an object as a CSO or GPS source depends on its viewing angle. This causes the overlap to be less than 100%. Furthermore, GPS sources optically identified with quasars are preferentially found to have core-jet morphologies. The morphological dichotomy of GPS galaxies and quasars and their very different redshift distributions make it unlikely that GPS galaxies and quasars are related by orientation; they may just happen to have similar radio spectra.
The study of complete samples of young objects can constrain the evolution of radio sources, e.g. by using their number counts and source size distributions as functions of flux density. The selection effects in these samples have to be understood in order to compare these statistics with those for old, large size radio sources. Radio surveys at generally two frequencies are needed to select GPS candidates. In this process strong selection effects are evident in peak frequency and peak flux density. Furthermore, the boundary between a peaked and a non-peaked spectrum, e.g. its curvature, is drawn quite arbitrarily. Since the selection surveys and additional observations are most likely to have taken place at different epochs, variability may also play a role in the selection. The selection of a complete sample of CSOs needs VLBI surveys at at least two frequencies (e.g. the Caltech-Jodrell Bank I & II surveys). Depending on the resolution and dynamic range of the VLBI observations at all observing frequencies, strong selection effects are made on the overall angular size of the source, the contrast of the core to the approaching and receding sides of the source, and the spectral indices of the different components which are required to establish the two-sided nature of the radio morphology. For example, if observations with infinite dynamic range were possible, all compact core-jet sources could be classified as CSOs, since their counter-jets would be visible. In addition, VLBI surveys are mainly completed on samples of flat spectrum sources. In this way, a significant fraction of CSOs with steeper spectra may be missed.
It is relatively straightforward to select young radio sources on the basis of their gigahertz-peaked spectrum, while the selection on compact symmetric morphology is nontrivial, in particular for very compact and/or faint sources. If a complete sample of young radio sources is required, selection on a gigahertz-peaked spectrum is therefore preferable, especially at fainter flux density levels. However, it is probably preferable to omit sources optically identified with quasars, since their relation to young sources is doubtful. For the detailed analysis of individual objects, it is preferable to use confirmed CSOs, since the nature of their different components and possible orientation effects are better understood.
## 2 Evidence for GPS/CSO being young
Since the initial discovery of GPS sources, it has been speculated that these are young objects. However, a commonly discussed alternative to them being young was that they are small due to confinement by a particularly dense and clumpy interstellar medium that impedes the outward propagation of the jets. This latter hypothesis now looks less likely, since recent observations show that the surrounding media of peaked spectrum sources are not significantly different from those of large scale radio sources, and are insufficiently dense to confine these objects. More convincingly, the propagation velocities of the hot spots of several CSOs have now been measured to be $\sim0.2h^{-1}c$, giving an apparent age of $\sim10^3$ years and clearly showing that these are indeed young objects. Recent determinations of the radiative ages from the high-frequency breaks of GPS sources and the larger Compact Steep Spectrum (CSS) sources are found to be consistent with ages ranging from $10^3$ to $10^5$ years.
## 3 Current Views on Radio Source Evolution
Observational constraints on the luminosity evolution of radio sources mainly come from the source density in the power versus linear size ($P$–$D$) diagram. It was found that sources with large sizes ($D>1$ Mpc) and high radio luminosities ($P>10^{26}$ W/Hz at 178 MHz) are rare, suggesting that the luminosity of sources should decrease quickly as their linear sizes approach 1 Mpc. Several evolution scenarios have been proposed for young radio sources, in which GPS sources subsequently evolve to Compact Steep Spectrum (CSS) sources and large-scale doubles, and CSOs evolve into Medium Symmetric Objects (MSO) and Large Scale Objects (LSO). In these models, the age ratio of large scale to GPS sources, and of LSO to CSO, is typically $10^3$. The much larger fraction (say 10%) of GPS sources and CSOs in radio surveys therefore implies that young radio sources have to substantially decrease (a factor $\sim$10) in radio luminosity when evolving to large size radio sources. This can be explained by a decrease in radiation efficiency with source size. The transition from CSO to MSO does not have to occur at the same moment as the transition from GPS to CSS, and depends on the quite arbitrary definitions of the different classes of object; e.g., a CSO can have such a low spectral peak frequency that it is actually defined as a CSS and not as a GPS source.
Several young radio sources (e.g. 0108+388) exhibit low level, steep spectrum, extended emission on arcsecond scales, which seems to be a relic of much older radio activity. These objects are generally classified as being intermittent or recurrent, and not as young objects. However, the components related to their gigahertz-peaked spectra and CSO morphologies are certainly young, and we therefore believe it is correct to call them young objects. The presence of faint relic emission only indicates that the active nucleus has been active before, and may constrain the typical timescale and frequency of such events. Based on the current knowledge of the formation of massive black holes in the centers of galaxies, it is unlikely that the central engine itself is young; only the radio source is.
It is unclear whether all young sources actually evolve into large extended objects. Some, or even the majority, may be short-lived phenomena due to a lack of significant fuel. The possible existence of these objects can strongly influence the source statistics of young radio sources, and their luminosity evolution.
## 4 GPS sources at faint flux densities
In addition to the GB6 survey at 5 GHz, several new surveys have become available in recent years, like the WENSS at 325 MHz, and the NVSS and FIRST at 1.4 GHz. These surveys form a very powerful combination for selecting large and homogeneous samples of GPS candidates at faint flux density levels. The study of GPS samples at faint and bright flux density levels allows a disentanglement of redshift and radio luminosity effects. A small sample of 47 faint GPS sources, selected from the first available areas of the WENSS survey, has been investigated by our group. The sample has been studied extensively in the optical to determine the nature and redshifts of the optical identifications, resulting in an identification fraction of $87\%$. About 40% of the sample consists of high redshift quasars (which we will further ignore). Only a few of the redshifts of the GPS galaxies have been determined yet, due to their faint magnitudes and weak emission lines. Fortunately, their redshifts can be estimated from their well established Hubble diagram. Global VLBI observations at 5 GHz were obtained for all sources in the sample. In addition, observations at 1.6 and 15 GHz were taken with the global array and the VLBA, respectively. In this way, 94% of the sources in the sample were observed at at least two frequencies, above and at or below their spectral peak.
The combination of this faint GPS sample with bright GPS and CSS samples from the literature gave a unique opportunity to investigate the relation between the spectral peak and the size of young radio sources. Not surprisingly, the well-known correlation between peak frequency and angular size was confirmed. In addition, however, a correlation was found between the peak flux density and the angular size. Most remarkably, the strengths and signs of these two correlations are exactly as expected for synchrotron self-absorption (SSA). This strongly suggests that SSA is indeed the cause of the spectral turnovers in GPS and CSS sources, and not free-free absorption as recently proposed. The spectral peak originates in the dominant features of the radio source, the mini-lobes, and therefore reflects the sizes of the mini-lobes. The angular size from the VLBI observations is the overall size of the radio source, e.g. the distance between the two mini-lobes. The correlations between the spectral peak and size therefore imply a linear correlation between the mini-lobe and overall sizes, meaning that during the evolution of young radio sources the ratio of the size of the mini-lobes to the distance between the two mini-lobes is constant. This suggests they evolve in a self-similar way.
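This expectation can be made concrete. For a homogeneous synchrotron self-absorbed component the standard turnover relation is, to order of magnitude,
$$\nu_p\simeq 8\,B^{1/5}\,S_p^{2/5}\,\theta^{-4/5}\,(1+z)^{1/5}\ \mathrm{GHz},$$
with $S_p$ the peak flux density in Jy, $\theta$ the angular size in milliarcseconds and $B$ the magnetic field in gauss (the numerical prefactor depends on the assumed geometry and is not taken from the present paper). At fixed $B$, larger sources thus peak at lower frequencies, while at fixed peak frequency a higher peak flux density implies a larger angular size; these are precisely the signs of the two correlations described above.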
## 5 Luminosity Evolution and the Luminosity Function
In flux density limited samples, GPS galaxies are found at higher redshifts than large size radio sources. Since the lifetimes of radio sources are short compared to cosmological timescales, this can only mean that the slopes of their luminosity functions are different, if GPS sources are to evolve into large size radio sources. We argue that the slope of the luminosity function is strongly dependent on the evolution in radio power of the individual sources. To explain the difference in redshift distribution, we propose a luminosity evolution scenario in which GPS sources increase in luminosity and large extended objects decrease in luminosity with time (Fig. 2). Sources in a volume-based sample are biased towards low jet-powers and older ages, for populations of both GPS and extended objects. Low jet powers result in low luminosity sources. The higher the age of a large scale source, the lower its luminosity; but the higher the age of a GPS source, the higher its luminosity. This means that for a population of large scale sources the jet power and age biases strengthen each other, resulting in a steep luminosity function, while they counteract for GPS sources, resulting in a flatter luminosity function. The evolution scenario proposed is expected for a ram-pressure confined radio source in a surrounding medium with a King profile density. In the inner parts of the King profile, the density of the medium is constant and the radio source builds up its luminosity, but after it grows large enough the density of the surrounding medium declines and the luminosity of the radio source decreases.
Triggered by the ideas above, a new method has been developed to constrain the luminosity evolution of radio sources by comparing the local luminosity functions (LLF) of young and old objects. At present, an insufficient number of GPS sources is known at low redshift to construct an LLF directly. However, the cosmological number density evolution, as derived for steep-spectrum (e.g. large size) radio sources, can be used to derive an LLF for young radio sources from the GPS samples. The result, as shown in Fig. 2, is consistent with the luminosity evolution scenario proposed above. Note, however, that the faint and bright GPS samples were selected in different ways, resulting in large uncertainties.
## 6 The Future and the Square Kilometer Array
The new large area radio surveys and the current sensitivity and flexibility of VLBI networks allow the construction and investigation of large and homogeneous samples of young sources. This is fueling the rapid development of this research area, with the exciting prospect of this continuing for the next few years at least. The total number of GPS sources in the WENSS survey selectable on the basis of their inverted spectra between 325 MHz and 1.6 GHz (NVSS) is likely to be of the order of $2\times10^3$, of which about $100$–$200$ can be identified with low redshift galaxies. This will allow a direct determination of the local luminosity function for young radio sources, down to a 5 GHz radio power of $10^{24}$ W/Hz.
A strong impact on the research area of compact radio sources can be expected from the planned Square Kilometer Array (SKA), in particular if the configuration includes multi-$10^3$ km baselines, or if it is added to ground and space VLBI networks. The unrivaled image quality provided by its quasi-continuous uv-coverage and its high sensitivity combined with m.a.s. resolution promises to give new insights into the physics of (relativistic) jets (e.g. Kirchbaum et al., this volume). The largest contribution of SKA to the statistical properties of young radio sources, as discussed in this paper, can be expected from its ability to select and investigate weak ($P_{5\,\mathrm{GHz}}\sim10^{24}$ W/Hz) young radio sources out to much larger cosmological distances ($z\sim2$ instead of $z\sim0.1$), allowing a detailed comparison of the cosmological number density evolution of young sources to that of old sources over a wide range of luminosity. This will put much stronger constraints on the luminosity evolution of the individual objects and it will provide new insights into the strong cosmological evolution of radio sources from high redshift to the present.
## Acknowledgements
This research was in part funded by the European Commission under contract ERBFMRX-CT96-0034 (CERES).
# Novel structural features of the ripple phase of phospholipids
## Abstract
We have calculated the electron density maps of the ripple phase of dimyristoylphosphatidylcholine (DMPC) and palmitoyl-oleoyl phosphatidylcholine (POPC) multibilayers at different temperatures and fixed relative humidity. Our analysis establishes, for the first time, the existence of an average tilt of the hydrocarbon chains of the lipid molecules along the direction of the ripple wave vector, which we believe is responsible for the occurrence of asymmetric ripples in these systems.
Lipids self-assemble in water to form a variety of lamellar phases. The ripple or P$_{\beta'}$ phase, characterized by a one-dimensional height modulation of the bilayers, is seen in some phospholipids under high hydration. In the phase diagrams of these systems, it is sandwiched between the high-temperature L$_\alpha$ phase and the low-temperature L$_{\beta'}$ phase. In the L$_\alpha$ phase, the hydrocarbon chains of the lipid molecules are in a molten state and the in-plane ordering is liquid-like. On the other hand, in the L$_{\beta'}$ phase, the chains are predominantly in the all-trans conformation and are tilted with respect to the layer normal. The chains are also ordered in the plane of the bilayer, but the exact degree of this ordering is yet to be determined.
Experimental studies on the ripple phase have established many of its structural features. Almost all x-ray studies show asymmetric ripples corresponding to an oblique unit cell of the rippled bilayers, though there have been some reports of symmetric ripples corresponding to a rectangular unit cell. The latter have been shown to be metastable structures in some systems. It is well established that only lipids that have a L$_{\beta'}$ phase at lower temperatures exhibit the ripple phase, indicating the importance of the tilt of the chains in the formation of the ripples. Determination of the chain tilt in the ripple phase is of utmost relevance, as it is the key structural feature hitherto unknown. But it has not been possible to obtain it directly from x-ray diffraction patterns, as the chain reflections are rather diffuse, probably due to the presence of disordered chains, i.e., chains that are not entirely in the trans conformation. NMR and diffusion experiments also detect a population of disordered chains in this phase. Hence detailed information about chain tilt has to be deduced from the electron density map of the system, calculated from x-ray diffraction data.
The lack of knowledge of its structural features has hindered the formulation of a satisfactory theory of the ripple phase; none of the current theories is consistent with all the experimental observations. The most striking disagreement concerns the occurrence of asymmetric ripples. It has been proposed that molecular chirality is responsible for such ripples, but experiments indicate otherwise.
Recently, Sun et al. calculated the electron density map of the ripple phase of DMPC using the x-ray data of Wack and Webb. They find that the ripples have a saw-tooth shape, with the bilayer thickness in the minor arm being much smaller than that in the major arm. Further, the electron density in the headgroup region along the major arm is much higher than that along the minor arm. This led them to hypothesize that the chain organization in the major arm is like that in the L$_{\beta'}$ phase and that in the minor arm is like that in the L$_\alpha$ phase. However, this conjecture is not supported by other experiments. For example, self-diffusion in the ripple phase is found to be highly anisotropic, with a fast component that is 4-5 orders of magnitude faster than the slow component; but the fast component itself is about 2-3 orders of magnitude smaller than that in the L$_\alpha$ phase. Thus the authors of Ref. 10 conclude that although the intramolecular hydrocarbon chain disorder may be substantial in the fast bands, the intermolecular order in this region is not like that in the L$_\alpha$ phase.
In view of this discrepancy, we have calculated the electron density maps of the ripple phase of DMPC and POPC. In addition to the x-ray data of Ref. 5, we have used data from oriented samples at different temperatures and fixed relative humidity. We find that the ripples in both these systems have a saw-tooth shape, with the ratio of the lengths of the two arms essentially independent of temperature. If the molecules in the short arm were in the L$_\alpha$ phase, the length of this arm would be expected to increase as the L$_\alpha$ phase is approached from below. This is contrary to what we see. Further, the difference in the bilayer thickness and the electron density in the two arms can be largely accounted for in terms of an average tilt of the chains along the direction of rippling, which we believe is responsible for the occurrence of asymmetric ripples in these systems. These results are clearly important for a satisfactory theoretical description of this phase.
We have adopted the modeling and least squares fitting procedure developed by Sun et al. to calculate the electron density maps. The unit cell parameters of the two-dimensional oblique lattice are the two vectors $\mathbf{a}$ and $\mathbf{b}$, and the angle $\gamma$. In terms of the ripple wavelength $\lambda$ and the lamellar spacing $d$, the two lattice vectors can be expressed as $\mathbf{a}=d\cot\gamma\,\hat{x}+d\,\hat{z}$, and $\mathbf{b}=\lambda\,\hat{x}$. Here $\hat{x}$ is the direction of the ripple wave vector and $\hat{z}$ is the direction of the average layer normal (see Fig. 1). $\lambda$, $d$, and $\gamma$ are directly measured from the diffraction pattern. The electron density within the unit cell, $\rho(x,z)$, is described as the convolution of a ripple contour function $C(x,z)$ and the transbilayer electron density profile $T_\psi(x,z)$. $C(x,z)=\delta(z-u(x))$, where $u(x)$ describes the ripple profile and is taken to have the form of a saw-tooth with peak-to-peak amplitude $A$. $\lambda_1$ is the projection of the longer arm of the saw-tooth on the x-axis. $T_\psi(x,z)$ gives the electron density at any point $(x,z)$ along a straight line which makes an angle $\psi$ with the z-axis. The electron density in the methylene region of the bilayer is close to that of water and is taken as zero.
We have used three models for $T_\psi(x,z)$, two of which are equivalent to the SDF and M1G models of Ref. 13. In model I, it is taken as consisting of two delta functions with positive coefficients $\rho_H$, corresponding to the headgroup regions separated by a distance $L$, and a central delta function with negative coefficient of magnitude $\rho_M$, corresponding to the methyl region. The six adjustable parameters in model I are: $A$, $\lambda_1$, $\psi$, $\rho_H/\rho_M$, $L$ and a normalizing factor. In model II, the delta functions representing the head and methyl groups are replaced with Gaussians of width $\sigma_h$ and $\sigma_m$, respectively. The electron density in the minor arm is allowed to differ by a factor $f_1$ from that in the major arm. The region where the two arms meet is modeled as a wall with an electron density differing by a factor $f_2$ from the rest of the arm. The wall thickness in this model is fixed at a small value. Thus there are 10 adjustable parameters in this model. It is using these two models that Sun et al. find that the bilayer thickness along the local layer normal is different in the two arms of the ripple. But this result is built into these models, as the parameter $L$, which is the thickness of the bilayer along a direction that makes an angle $\psi$ with the z-axis, is taken to be the same in the two arms. Therefore, in model III, we remove this constraint and allow $L$ as well as $\sigma_h$, $\sigma_m$ and $\rho_H/\rho_M$ to be different in the two arms of the ripple. Further, the wall between the two arms is taken to have a variable width $w$. This model has 15 adjustable parameters. Minimization is done by iterative least squares fitting with respect to six variables at a time.
The structure factors at the observed (h,k) values are calculated using each of the above models. These are then compared with the observed structure factors and a chi-square value is obtained, which is subsequently minimized by varying the adjustable parameters in the model. The phase of each of the Bragg reflections is obtained from the structure factors calculated from the converged model. These calculated phases are combined with the observed magnitudes of the structure factors and inverse Fourier transformed to get the electron density map of the system. We have used the x-ray diffraction data of Wack and Webb from powder samples of DMPC as well as our data from oriented films of l-DMPC, dl-DMPC and POPC. Details of the experimental procedure are discussed elsewhere. Relevant geometric intensity corrections were applied to the data from oriented samples, but absorption corrections could not be applied, as the sample thicknesses are not accurately known. We have confirmed, by assuming reasonable values of the sample thickness, that these corrections do not significantly change the calculated electron density profiles. However, in the absence of these corrections, we are unable to analyze these data using models II and III, due to the lack of convergence resulting from the large number of adjustable parameters in these models.
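As an illustration of this pipeline, a minimal numerical sketch of model I is given below. All parameter values are placeholders of roughly the right magnitude for DMPC, not the fitted values of Table 1, and the helper names are ours:

```python
import numpy as np

# Illustrative (not fitted) values; angstroms and radians
lam, d, gamma = 141.7, 57.9, np.radians(98.4)   # ripple wavelength, lamellar spacing, cell angle
A_amp, lam1 = 19.0, 103.0                       # peak-to-peak amplitude, major-arm projection
psi = np.radians(10.0)                          # mean chain tilt from the z-axis
L, rho_ratio = 39.0, 1.0                        # headgroup separation, rho_M / rho_H

def u(x):
    """Saw-tooth ripple profile, periodic with wavelength lam."""
    x = (x + lam1 / 2.0) % lam - lam1 / 2.0
    if x <= lam1 / 2.0:                          # major arm, rising
        return A_amp * x / lam1
    return A_amp / 2.0 - A_amp * (x - lam1 / 2.0) / (lam - lam1)   # minor arm, falling

def model_F(h, k, nx=2000):
    """Model-I structure factor F(h,k) = F_C(q) * F_T(q)."""
    qx = 2.0 * np.pi * k / lam
    qz = 2.0 * np.pi * h / d - 2.0 * np.pi * k / (lam * np.tan(gamma))
    xs = np.linspace(0.0, lam, nx, endpoint=False)
    Fc = np.mean(np.exp(1j * (qx * xs + qz * np.array([u(x) for x in xs]))))
    # two headgroup deltas at +-L/2 along the tilt direction, methyl trough at the centre
    Ft = 2.0 * np.cos(0.5 * L * (qx * np.sin(psi) + qz * np.cos(psi))) - rho_ratio
    return (Fc * Ft).real                        # the profile is centrosymmetric, so F is real

for h in range(1, 4):
    for k in range(-2, 3):
        print("(%d,%2d)  sign %d" % (h, k, int(np.sign(model_F(h, k)))))
```

The signs of the model structure factors play the role of the phases; attaching them to the observed $|F(h,k)|$ and inverse Fourier transforming on an $(x,z)$ grid yields the electron density map.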
The converged values of the different parameters in the three models are given in Table 1, along with the crystallographic R factor. The precision of these parameters is about 0.1%. The data from Ref. 5 were used in these analyses. In accordance with the results of Ref. 13, we find that all the phases, except those of the three relatively weak (0,k) reflections, obtained using model II are the same as those obtained from model I. These phases are also the same as those obtained from the SDF and M1G models in Ref. 13. Model III gives only a marginally better fit to the data, the chi-square being only about 2% lower. The phases of all the reflections other than the (0,k) reflections are the same as those found from models I and II. For the (0,k) reflections this model gives a combination of phases that is different from the ones obtained from models I and II. The values of $L$ in the two arms are almost the same and equal to that obtained from model II. This is also true of the other parameters which are allowed to be different in the two arms. However, model III gives a slightly higher value of 0.7 for $f_1$. As discussed below, this factor can be accounted for in terms of the chain tilt, without resorting to the assumption of a L$_\alpha$-like organization in the minor arm. The low $\chi^2$ and R values for models II and III, and the absence of any physically unacceptable features in the electron density map (see Fig. 2), indicate that these models closely represent the true structure of the system.
The electron density map of the ripple phase of DMPC, calculated with the data of Ref. 5, is shown in Fig. 2. The ripples clearly have a saw-tooth shape, with an offset between the two leaves of the bilayer. The simplest explanation for this offset is an average tilt of the chains along the rippling direction; such an offset cannot be expected if the tilt were in a plane normal to the rippling direction. The tilt angle $\psi$ is found to be approximately equal to $(\gamma-\frac{\pi}{2})$. Further confirmation of the existence of an average tilt along this direction comes from the fact that the value of $L$ is almost equal in the two arms and is comparable to twice the length of a fully stretched DMPC molecule. If it is assumed that the chains are tilted at an angle $\psi$ with the z-axis, their tilt with respect to the local layer normal can be calculated from the shape of the ripple. Using the values of the structural parameters given in Table 1, the tilt angle with respect to the local layer normal turns out to be $1.6^\circ$ and $34.5^\circ$ in the longer and shorter arms, respectively. The tilt in the short arm is comparable to that found in the L$_{\beta'}$ phase. Since the area per molecule is inversely proportional to the cosine of this angle, a value of 0.82 is obtained for $f_1$. This is in very good agreement with the value of 0.77 obtained from the map for the ratio of the average electron densities in the headgroup region of the shorter and longer arms. Thus an average tilt of the chains along the rippling direction provides a consistent explanation for many features of the electron density map. This means that to a good approximation the height modulation of the bilayers along the x-axis can be described as arising from a relative sliding movement of neighboring chains, with all the chains lying in the x-z plane and tilted by a constant angle $\psi$ with respect to the z-axis. The existence of an average chain tilt along the rippling direction breaks the reflection symmetry of the bilayer in the plane normal to it and hence can be expected to be responsible for the asymmetric ripples seen in this system.
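For concreteness, since the headgroup electron density scales inversely with the area per molecule, the quoted tilt angles give
$$f_1\simeq\frac{\cos 34.5^\circ}{\cos 1.6^\circ}\approx\frac{0.824}{1.000}\approx 0.82,$$
to be compared with the value 0.77 read off the map.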
We have also calculated the electron density maps of l-DMPC, dl-DMPC and POPC at different temperatures in the ripple phase, using data from oriented films. The structural features of the ripples are found to be similar to those obtained from the data of Ref. 5. The maps of the chiral and racemic DMPC samples were identical, indicating the lack of influence of molecular chirality on the ripple structure. The temperature dependence of the structural parameters of the ripples in DMPC is found to be very weak, as in the case of dipalmitoyl phosphatidylcholine (DPPC). Contrary to what is observed in freeze fracture experiments, we find that the ripple shape has Fourier components higher than the second. Further, we do not find a significant temperature dependence of the amplitude of the ripples, in contrast to what is reported in Ref. 17.
The electron density map of the ripple phase of POPC is shown in Fig. 3. The ripple shape is very similar to that of DMPC. It also has a saw-tooth shape and an offset between the monolayers, indicating an average tilt in the direction of rippling. In POPC, the angle $\gamma$ is much larger than in DMPC, whereas the wavelength and layer spacing are comparable. Unlike those of DMPC, the structural features of the ripple phase of POPC vary significantly with temperature, as shown in Table 2. In the absence of absorption corrections, the fits are not as good as in the case of the DMPC data of Ref. 5. As mentioned earlier, we find that the electron density profiles are insensitive to these corrections. Hence the values of the last two parameters quoted in the table are the ones estimated from the electron density maps. The layer spacing decreases slowly and $\gamma$ increases steadily as temperature is increased. The ripple wavelength first decreases and then suddenly increases to a large value just below the transition. These trends are very similar to those seen in DPPC, but in POPC the temperature dependence is much more pronounced. The amplitude, except near the L$_\alpha$ transition, is about half that of DMPC. In both DMPC and POPC the ratio of the projected lengths of the major and minor arms is about 2 and is essentially insensitive to temperature. This observation further supports the view that the chain organization in the minor arm is not like that in the L$_\alpha$ phase.
All the freeze fracture studies of the ripple phase show ripples oriented over micrometer-sized regions. Since the chain tilt is locked to the rippling direction, this implies long range order of the tilt direction. These experiments also show ripples oriented only along three directions, each at an angle of approximately $120^\circ$ from the other two, indicating a six-fold symmetry in the underlying bilayer structure. Thus the in-plane ordering of the molecules in the bilayer is at least hexatic.
In conclusion, we have calculated the electron density maps of the ripple phase of DMPC and POPC. The shapes of the ripples in these two systems are very similar, with both exhibiting asymmetric ripples. We have been able to establish the existence of an average chain tilt in the direction of rippling, which is probably responsible for the asymmetric ripples seen in these systems.
We thank Y. Hatwalne, K. Usha, and J. F. Nagle for discussions and HKL Research Inc. for the use of their software.
# Pair creation: back-reactions and damping
## I Introduction
The nonperturbative Schwinger mechanism, which describes the spontaneous formation of fermion-antifermion pairs, has been used to model the formation of a quark-gluon plasma in heavy ion collisions. In this approach nucleon-nucleon collisions lead to the creation of flux-tubes, in which quark-antiquark pairs are connected by a strong colour-electric field. The energy density (string tension) acts like a strong background field and particle-antiparticle pairs are created via the Schwinger mechanism. These charged particles polarise the vacuum and are accelerated in the external field. Their motion generates a field that in turn modifies the initial background field and, in the absence of further interactions, that back-reaction induces plasma oscillations.
The back-reaction phenomenon has become a focus of attention in recent years, both in general and as it can arise in the pre-equilibrium stage of a heavy ion collision. Theoretical approaches as diverse as field theory and transport equations have been applied. The link between treatments based on the field equations and the formulation of a Boltzmann equation was recently investigated. These studies show that the resulting kinetic equation has a non-Markovian source term. For weak fields there is no overlap between the time-scales characterising vacuum tunnelling and the period between pair production events, $\tau_{qu}\ll\tau_{prod}$, and the Markovian approximation to the quantum Vlasov equation is valid. However, for strong background fields there is an overlap between these time-scales and this makes the non-Markovian nature of the source term very important. Back-reactions and collisions introduce at least two more time-scales: the plasma oscillation period, $\tau_{pl}$, and the collision period, $\tau_r$, and their impact is an integral focus of this article. Furthermore, in contrast to other recent studies, we induce particle production by a time-dependent external field.
In Sect. II we review the main equations and results for particle creation using a non-Markovian source term. In Sect. III we derive the renormalised Maxwell equation determined by the external and internal fields and, for special choices of the external field, present numerical results obtained by solving the coupled system of kinetic and Maxwell equations for bosons and fermions, with and without a simple collision term. We summarise our results in Sect. IV.
## II Pair creation with a non-Markovian source term
We consider an external, spatially-homogeneous, time-dependent vector potential $A_\mu$ in Coulomb gauge: $A_0=0$, and write $\vec{A}=(0,0,A(t))$. The corresponding electric field is
$$E(t)=-\dot{A}(t):=-dA(t)/dt.$$
(1)
The kinetic equation satisfied by the single-particle distribution function $f_\pm$ (“$+$” for bosons, “$-$” for fermions) is
$$\frac{df_\pm(\vec{p},t)}{dt}=S_\pm(\vec{p},t),$$
(2)
where the source term is momentum- and time-dependent:
$$S_\pm(\vec{p},t)=\frac{1}{2}\mathcal{W}_\pm(t)\int_{-\infty}^{t}dt'\,\mathcal{W}_\pm(t')F_\pm(\vec{p},t')\cos[x(t',t)],$$
(3)
with $x(t',t):=2[\Theta(t)-\Theta(t')]$ describing the difference between the dynamical phases
$$\Theta(t)=\int_{-\infty}^{t}dt'\,\omega(t').$$
(4)
Here the total energy is
$$\omega(t)=\sqrt{\epsilon_\perp^2+P_\parallel^2(t)}$$
(5)
where $\epsilon_\perp=\sqrt{m^2+\vec{p}_\perp^{\,2}}$ is the transverse energy and we have introduced the kinetic momentum: $\vec{P}=(\vec{p}_\perp,P_\parallel(t))$, with $\vec{p}_\perp=(p_1,p_2)$, $P_\parallel(t)=p_\parallel-eA(t)$.
Equation (2) was recently derived from the underlying quantum field theory and exhibits a number of interesting new features. For example, in a strong background field its solutions describe an enhancement in the boson production rate and a suppression of fermion production. There are two aspects of Eq. (2) that generate such differences between the solutions for fermions and bosons: the different transition coefficients
$$\mathcal{W}_\pm(t)=\frac{eE(t)P_\parallel(t)}{\omega^2(t)}\left(\frac{\epsilon_\perp}{P_\parallel(t)}\right)^{g_\pm-1},$$
(6)
where the degeneracy factor is $g_+=1$ for bosons and $g_-=2$ for fermions; and the statistical factor is $F_\pm(\vec{p},t)=[1\pm2f_\pm(\vec{p},t)]$.
The kinetic equation, Eq. (2), is non-Markovian for two reasons: (i) the source term on the right-hand-side (r.h.s.) requires knowledge of the entire history of the evolution of the distribution function, from $t\to-\infty$ up to $t$; and (ii), even in the low density limit ($F(t)=1$), the integrand is a nonlocal function of time, as is apparent in the coherent phase oscillation term: $\cos[x(t',t)]$. Mean field approaches also incorporate non-Markovian effects in particle production. However, the merit of a kinetic formulation lies in the ability to make a simple and direct connection with widely used approximations.
In the low density limit the source term is independent of the distribution function
$$S^0_\pm(\vec{p},t)=\frac{1}{2}\mathcal{W}_\pm(t)\int_{-\infty}^{t}dt'\,\mathcal{W}_\pm(t')\cos[x(t',t)]$$
(7)
and Eq. (2) becomes
$$\frac{df^0_\pm(\vec{p},t)}{dt}=S^0_\pm(\vec{p},t).$$
(8)
(The low-density limit can only be self-consistent for weak fields.) Even in this case there are differences between the solutions for fermions and bosons because of the different coefficients $`𝒲_\pm (t)`$, and the equation remains nonlocal in time. Equation (8) has the general solution
$$f^0_\pm(\vec{p},t)=\int_{-\infty}^{t}dt'\,S^0_\pm(\vec{p},t'),$$
(9)
which provides an excellent approximation to the solution of the complete equation when the background field strength is small compared to the transverse energy.
The ideal Markov limit was obtained in earlier work, where a further asymptotic expansion was employed and a local source term for weak electric fields was derived. In this case $\tau_{qu}<\tau_{prod}$. However, for very strong fields a clear separation of these time-scales is not possible and the kinetic equation must be solved in its non-Markovian form, where memory effects are important.
Equation (2) is an integro-differential equation. It can be re-expressed by introducing
$$v_\pm(\vec{p},t)=\int_{t_0}^{t}dt'\,\mathcal{W}_\pm(\vec{p},t')F_\pm(\vec{p},t')\cos[x(\vec{p},t,t')],\qquad(10)$$
$$z_\pm(\vec{p},t)=\int_{t_0}^{t}dt'\,\mathcal{W}_\pm(\vec{p},t')F_\pm(\vec{p},t')\sin[x(\vec{p},t,t')],\qquad(11)$$
in which case we have
$$\frac{\partial f_\pm(\vec{P},t)}{\partial t}+eE(t)\frac{\partial f_\pm(\vec{P},t)}{\partial P_\parallel}=\frac{1}{2}\mathcal{W}_\pm(\vec{P},t)\,v_\pm(\vec{P},t),\qquad(12)$$
$$\frac{\partial v_\pm(\vec{P},t)}{\partial t}+eE(t)\frac{\partial v_\pm(\vec{P},t)}{\partial P_\parallel}=\mathcal{W}_\pm(\vec{P},t)F_\pm(\vec{P},t)-2\omega(\vec{P})\,z_\pm(\vec{P},t),\qquad(13)$$
$$\frac{\partial z_\pm(\vec{P},t)}{\partial t}+eE(t)\frac{\partial z_\pm(\vec{P},t)}{\partial P_\parallel}=2\omega(\vec{P})\,v_\pm(\vec{P},t),\qquad(14)$$
with the initial conditions $f_\pm(t_0)=v_\pm(t_0)=z_\pm(t_0)=0$, where $t_0\to-\infty$. This coupled system of linear differential equations is much simpler to solve numerically.
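At fixed canonical momentum $\vec{p}$, Eqs. (12)-(14) reduce to three coupled ODEs in $t$. As a minimal illustration (not the authors' production code), the following sketch integrates them for fermions at a single momentum point in a prescribed impulse field, cf. Eq. (34) below; all parameter values are arbitrary:

```python
import numpy as np

e, A0, b = 2.0, 0.5, 0.5                 # arbitrary illustration values, units of m
p_para, eps_perp = 0.0, 1.0              # canonical longitudinal momentum; transverse energy
g, sgn = 2, -1.0                         # fermions: g = 2, F = 1 - 2f (bosons: g = 1, sgn = +1)

def A_ext(t):                            # impulse configuration, cf. Eq. (34)
    return -A0 * (np.tanh(t / b) + 1.0)

def E_ext(t):
    return A0 / (b * np.cosh(t / b) ** 2)

def rhs(t, y):
    f, v, z = y
    P = p_para - e * A_ext(t)            # kinetic momentum
    w = np.sqrt(eps_perp ** 2 + P ** 2)  # total energy, Eq. (5)
    W = e * E_ext(t) * eps_perp ** (g - 1) * P ** (2 - g) / w ** 2   # Eq. (6), division-safe form
    F = 1.0 + sgn * 2.0 * f              # statistical factor
    return np.array([0.5 * W * v, W * F - 2.0 * w * z, 2.0 * w * v])

t, dt, y = -10.0, 0.002, np.zeros(3)     # f = v = z = 0 well before the pulse
while t < 10.0:                          # fixed-step RK4
    k1 = rhs(t, y)
    k2 = rhs(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = rhs(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y, t = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4), t + dt
print("f(p = 0, t = 10) =", y[0])
```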
## III Back-reactions
### A The Maxwell equation
In recent years the effect of back-reactions in inflationary cosmology and also in the evolution of a quark-gluon plasma has been studied extensively. In both cases the particles produced by the strong background field modify that field: in cosmology it is the time-dependent gravitational field, which couples via the masses, and in a quark-gluon plasma, it is the chromoelectric field affected by the partons’ colour charge.
In our previous studies we have considered constant and simply-constructed time-dependent Abelian electric fields but ignored the effect of back-reactions; i.e., that the particles produced by the background field are accelerated by that field, generating a current that opposes and weakens it, and can also lead to plasma oscillations. The effect of this back-reaction on the induced field is accounted for by solving Maxwell's equation: $\dot{E}(t)=-j(t)$. Herein we assume that the plasma is initially produced by an external field, $E_{ex}(t)$, excited by an external current, $j_{ex}(t)$, such as might represent a heavy ion collision; this is our model-dependent input. The total field is the sum of that external field and an internal field, $E_{in}(t)$, generated by the internal current, $j_{in}(t)$, that characterises the behaviour of the particles produced. Hence the total field and the total current are given by
$$E(t)=E_{in}(t)+E_{ex}(t),\qquad(15)$$
$$j(t)=j_{in}(t)+j_{ex}(t).\qquad(16)$$
Continued spontaneous production of charged particle pairs creates a polarisation current, $j_{pol}(t)$, that depends on the particle production rate, $S(\vec{p},t)$. Meanwhile the motion of the existing particles in the plasma generates a conduction current, $j_{cond}(t)$, that depends on their momentum distribution, $f(\vec{p},t)$. The internal current is the sum of these two contributions
$$\dot{E}_{in}(t)=-j_{in}(t)=-j_{cond}(t)-j_{pol}(t).$$
(17)
At mean field level the currents can be obtained directly from the constraint of local energy density conservation: $\dot{\epsilon}=0$, where
$$\epsilon(t)=\frac{1}{2}E^2(t)+2\int\frac{d^3\vec{p}}{(2\pi)^3}\,\omega(\vec{p},t)f(\vec{p},t).$$
(18)
For bosons this constraint yields
$$\dot{E}(t)=-2e\int\frac{d^3\vec{p}}{(2\pi)^3}\frac{p_\parallel-eA(t)}{\omega(\vec{p},t)}\left[f(\vec{p},t)+\frac{\omega(\vec{p},t)}{\dot{\omega}(\vec{p},t)}\frac{df(\vec{p},t)}{dt}\right],$$
(19)
and we can identify the conduction current
$$j_{cond}(t)=2e\int\frac{d^3\vec{p}}{(2\pi)^3}\frac{p_\parallel-eA(t)}{\omega(\vec{p},t)}\,f(\vec{p},t),$$
(20)
and, using Eq. (2), the polarisation current
$$j_{pol}(t)=\frac{2}{E(t)}\int\frac{d^3\vec{p}}{(2\pi)^3}\,\omega(\vec{p},t)\,S(\vec{p},t).$$
(21)
Thus, using Eqs. (10) and (11), Maxwell’s equation is
$$\dot{E}_{in}(t)=-\ddot{A}_{in}(t)=-2e\int\frac{d^3\vec{P}}{(2\pi)^3}\frac{P_\parallel(t)}{\omega(\vec{P})}\left[f(\vec{P},t)+\frac{1}{2}v(\vec{P},t)\right].\qquad(22)$$
It is important to observe that this form for the internal field has been employed extensively in the study of back-reactions. However, our contribution is to employ it in conjunction with a time-dependent external field, which allows for the exploration of a richer variety of phenomena.
### B Renormalisation
The boson and fermion currents are
$$j_{in}(t)=e\,g_\pm\int\frac{d^3\vec{P}}{(2\pi)^3}\frac{P_\parallel(t)}{\omega(\vec{P})}\left[f(\vec{P},t)+\frac{v(\vec{P},t)}{2}\left(\frac{\epsilon_\perp}{P_\parallel(t)}\right)^{g_\pm-1}\right],$$
(23)
where the integrand depends on the solution of the kinetic equation, Eqs. (12)-(14), which must be such as to ensure the integral is finite. Simple power counting indicates that admissible solutions must satisfy
$$v(\vec{P},t),\;f(\vec{P},t)\ \underset{|\vec{P}|\to\infty}{<}\ \frac{1}{|\vec{P}|^4}.$$
(24)
To fully characterise the asymptotic behaviour we employ a separable Ansatz
$$f(\vec{P},t)=\sum_{k=0}^{\infty}\frac{f_k(t)}{|\vec{P}|^k},\qquad v(\vec{P},t)=\sum_{k=0}^{\infty}\frac{v_k(t)}{|\vec{P}|^k},\qquad z(\vec{P},t)=\sum_{k=0}^{\infty}\frac{z_k(t)}{|\vec{P}|^k}.$$
(25)
Substituting these in Eqs. (12)-(14) and comparing coefficients, using $P_\parallel\simeq\omega(\vec{P})\gg\epsilon_\perp$, which is valid at large $|\vec{P}|$, we find the leading terms
$$f_4=\frac{1}{16}e^2E^2(t),\qquad v_3=\frac{1}{4}e\dot{E}(t),\qquad z_2=\frac{1}{2}eE(t),$$
(26)
with all the lower-order coefficients being zero. Substituting these results in Eq. (20) it is clear that the conduction current is convergent. However, there is a logarithmic divergence in the polarisation current, Eqs. (19) and (21), which is apparent in Eq. (23), but that is just the usual short-distance divergence associated with charge renormalisation. We regularise the polarisation current by writing $v=(v-v_3P_\parallel/\omega^4)+v_3P_\parallel/\omega^4$, so that
$$\dot{E}^\pm(t)=-j_{ex}(t)-g_\pm e\int\frac{d^3\vec{P}}{(2\pi)^3}\frac{P_\parallel(t)}{\omega(\vec{P})}\left[f(\vec{P},t)+\frac{1}{2}\left\{v(\vec{P},t)-\frac{e\dot{E}(t)P_\parallel(t)}{4\omega^4(\vec{P})}\right\}\left(\frac{\epsilon_\perp}{P_\parallel(t)}\right)^{g_\pm-1}\right]-e^2\dot{E}^\pm(t)I^\pm(\Lambda),\qquad(28)$$
where
$$I^\pm(\Lambda)=\frac{g_\pm}{4}\int\frac{d^3\vec{P}}{(2\pi)^3}\frac{P_\parallel^2(t)}{\omega^5(\vec{P})}\left(\frac{\epsilon_\perp}{P_\parallel(t)}\right)^{g_\pm-1}\ \underset{\Lambda\to\infty}{=}\ \frac{g_\pm}{8\pi^2}\ln\!\left[\Lambda^2/m^2\right],$$
(29)
with $\Lambda$ a cutoff on $|\vec{P}|$, which effects a regularisation equivalent to Pauli-Villars. Introducing the renormalised charge, fields and current:
$$e_R^2=Ze^2,\qquad\mathcal{E}^\pm(t)=E^\pm(t)/\sqrt{Z},\qquad\mathcal{A}^\pm(t)=A^\pm(t)/\sqrt{Z},\qquad\mathcal{J}_{ex}(t)=\sqrt{Z}\,j_{ex}(t),$$
(30)
with $Z=1/(1+e^2I^\pm(\Lambda))$, and noting that $eE^\pm(t)=e_R\mathcal{E}^\pm(t)$ and $eA^\pm(t)=e_R\mathcal{A}^\pm(t)$, Eq. (28) becomes
$$\ddot{\mathcal{A}}^\pm(t)=-\dot{\mathcal{E}}^\pm(t)=\mathcal{J}_{ex}(t)+g_\pm e_R\int\frac{d^3\vec{P}}{(2\pi)^3}\frac{P_\parallel(t)}{\omega(\vec{P})}\left[f_\pm(\vec{P},t)+\frac{1}{2}\left\{v_\pm(\vec{P},t)-\frac{e_R\dot{\mathcal{E}}^\pm(t)P_\parallel(t)}{4\omega^4(\vec{P})}\right\}\left(\frac{\epsilon_\perp}{P_\parallel(t)}\right)^{g_\pm-1}\right].\qquad(32)$$
This defines a properly renormalised equation for the fields. Our procedure is technically different from that employed elsewhere but yields an equivalent result. Subsequently all fields and charges are to be understood as renormalised.
### C Numerical results
Equations (12)-(14) together with Maxwell’s equation, Eq. (28), form a coupled system of differential equations. To solve it we first evaluate the internal current from Eq. (28) at the primary time-slice using the initial conditions for the distribution function. That, via Eqs. (15) and (16), provides an electric field, which we use to calculate the momentum distribution from Eqs. (12)-(14). This procedure is repeated as we advance over our time-grid. We use a momentum grid with $`200`$ transverse- and $`400`$ longitudinal-points, a time-step $`dt=0.005`$, and $`\mathrm{\Lambda }=50`$ in Eq. (29). All dimensioned quantities are expressed in units of $`m`$, the parton mass.
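For orientation, a compact sketch of such a solver is given below. It treats configuration (i) (defined below) for fermions on a coarse momentum grid, uses bare integrals with a modest cutoff (the subtraction of Eq. (32) is dropped for brevity, so the cutoff mildly renormalises the charge), and steps the coupled system with RK4; all numerical values are illustrative:

```python
import numpy as np

e, g, sgn = 2.0, 2, -1.0                       # fermions (g = 2, F = 1 - 2f); bosons: g = 1, sgn = +1
E0, Lam, dt, nsteps = 10.0, 10.0, 0.002, 7500  # initial field, cutoff, step size (units of m)
pperp = np.linspace(0.05, Lam, 24)             # transverse momenta (p_perp > 0)
ppara = np.linspace(-Lam, Lam, 96)             # canonical longitudinal momenta
PT, PL = np.meshgrid(pperp, ppara, indexing="ij")
eps = np.sqrt(1.0 + PT ** 2)                   # transverse energy, m = 1
meas = PT * (pperp[1] - pperp[0]) * (ppara[1] - ppara[0]) / (4 * np.pi ** 2)

def deriv(state):
    f, v, z, E, A = state
    P = PL - e * A                             # kinetic momentum on the fixed canonical grid
    w = np.sqrt(eps ** 2 + P ** 2)
    W = e * E * eps ** (g - 1) * P ** (2 - g) / w ** 2       # Eq. (6), division-safe form
    F = 1.0 + sgn * 2.0 * f
    # internal current as in Eq. (23), written so that no division by P occurs
    j = g * e * np.sum(meas * ((P / w) * f + 0.5 * eps ** (g - 1) * P ** (2 - g) * v / w))
    return (0.5 * W * v, W * F - 2.0 * w * z, 2.0 * w * v, -j, -E)   # E_dot = -j, A_dot = -E

def step(s, h):                                # one RK4 step of the coupled system
    k1 = deriv(s)
    k2 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + h / 6.0 * (a + 2 * p + 2 * c + d)
                 for x, a, p, c, d in zip(s, k1, k2, k3, k4))

state = (np.zeros_like(PT), np.zeros_like(PT), np.zeros_like(PT), E0, 0.0)
for n in range(nsteps):
    state = step(state, dt)
print("E(t = %.1f) = %.4f" % (nsteps * dt, state[3]))
```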
Spontaneous particle creation occurs in the presence of a strong field, under whose influence the vacuum becomes unstable and decays. Herein we induce this by a time-dependent external field and compare three different field configurations. The fields vanish at $t\to-\infty$, and at $t=t_0$ the magnitude of the field increases and eventually leads to particle creation. Configuration (i): For comparison with earlier studies, we solve the set of equations as an initial value problem without an external field, using an initial value of the electric field that is large enough to cause pair production. Configuration (ii): We employ
$$A_{ex}(t)=-A_0b^2\left[t/b+\ln(2\cosh(t/b))\right],\qquad E_{ex}(t)=A_0b\left[\tanh(t/b)+1\right],$$
(33)
which is an electric field that “switches on” at $t\simeq-b$ and evolves to a constant value, $2A_0b$, over an interval $\Delta t\simeq 2b$. Configuration (iii) is an impulse field configuration:
$$A_{ex}(t)=-A_0\left[\tanh(t/b)+1\right],\qquad E_{ex}(t)=A_0\left[b\cosh^2(t/b)\right]^{-1},$$
(34)
which is an electric field that “switches on” at $t\simeq-2b$ and off at $t\simeq2b$, with a maximum magnitude of $A_0/b$ at $t=0$. Once this field has vanished only the induced internal field remains to create particles and affect their motion.
For configuration (i) we fix an initial value of $E(t=0)=10$, in units of $m$, with $e^2=4$, and for bosons obtain the electric field and current depicted in Fig. 1, where plasma oscillations are evident. The frequency of these oscillations increases with the magnitude of the field. The current exhibits a plateau for small $t$, when the particles reach their maximum velocity. This current opposes the field, and leads to a suppression of particle production and a deceleration of the existing particles. The effect of this is to overwhelm the field and change its sign, with a consequent change in the direction of the particles' collective motion. The process repeats itself, yielding the subsequent oscillations that persist in the absence of additional interactions, such as collisions or radiation. The structure visible at the peaks and troughs of the current is not a numerical artefact. It is related to the field-strength/mass ratio, being more pronounced for large values, and occurs on a time-scale $\tau_{qu}$, the vacuum tunnelling time, and hence can be characterised as Zitterbewegung. It disappears if an ideal-Markovian approximation to the source term is used, because that cannot follow oscillations on such small time-scales. The $t$-dependence of the $\vec{p}=0$ distribution function is depicted in Fig. 2, where the beat-like pattern is the result of back-reactions and the rapid fluctuations coincide with the Zitterbewegung identified in the current.
Configurations (ii) and (iii) are alike in that the field “switches on” at a given time. However, for (ii) the external field remains constant as $t$ increases, whereas in (iii) it “switches off” after $t\simeq2b$. The electric field and current obtained for bosons in these cases are depicted in Figs. 3 and 4: plasma oscillations are again evident. We plot the total electric field and thus it is evident in Fig. 3 that the internal electric field evolves to completely compensate for the persistent external field, which alone would appear as a straight line at $E(t)=7$. Unsurprisingly, as we see from Fig. 4, a stable state is reached more quickly in the absence of a persistent electric field. Outside the temporal domain on which the vector potential acts, the initial value and impulse solutions are equivalent.
We illustrate the results for fermions in Fig. 5 using the impulse configuration. The amplitude and frequency of the plasma oscillations are significantly larger than for bosons in a configuration of equal strength. Further, the stable state is reached more quickly because Pauli blocking inhibits particle production; i.e., no particles can be produced once all available momentum states are occupied. Pauli blocking also guarantees $f_-(\vec{p},t)<1$, for all $t$.
We have also calculated the $`P_{\parallel }`$\- and $`p_{\perp }`$-dependence of $`f`$ for both bosons and fermions. We find $`f_+(\stackrel{}{P}=0,t)=0`$; i.e., bosons cannot be produced with zero kinetic momentum, an effect readily anticipated from Eq. (6). For small $`t`$, $`f_\pm (\stackrel{}{p},t)`$ is a slowly varying function of $`\stackrel{}{p}`$ on its domain of support. However, with increasing $`t`$, $`f_\pm (\stackrel{}{p},t)`$ develops large-magnitude fluctuations without increasing that domain. The momentum-space position of the midpoint of the domain of support oscillates with a $`t`$-dependence given by the kinetic momentum: $`P_{\parallel }=p_{\parallel }-eA(t)`$.
One additional observation is important here. The magnitude, $`A_0`$, of the electric fields we have considered is large and hence the time between pair production events, $`\tau _{prod}`$, is small, being inversely proportional to the time-average of the source term, $`S`$. The period of the plasma oscillations, $`\tau _{pl}`$, also decreases with increasing $`A_0`$ but nevertheless we always have $`\tau _{prod}\ll \tau _{pl}`$. Thus, in contrast to the effect it has on the production process, the temporal nonlocality of the non-Markovian source term is unimportant to the collective plasma oscillation.
### D Collisions
In the previous subsection we ignored the effect of collisions when treating the spontaneous production of charged particles and subsequent evolution of the plasma. Now we consider the effect of a simple collision term
$$C_\pm (\stackrel{}{p},t)=\frac{f_\pm ^{eq}(\stackrel{}{p},t)-f_\pm (\stackrel{}{p},t)}{\tau _r},$$
(35)
where $`\tau _r`$ is the “relaxation time” and $`f_\pm ^{eq}`$ are the thermal equilibrium distribution functions for bosons and fermions:
$$f_\pm ^{eq}(\stackrel{}{p},t)=\frac{1}{\mathrm{exp}[\omega (\stackrel{}{p},t)/T(t)]\mp 1}.$$
(36)
Here $`T(t)`$ is the “instantaneous temperature”, which is a model-dependent concept, and since our results are not particularly sensitive to details of its form we employ a simple parametrisation
$$T(t)=T_{eq}+(T_m-T_{eq})\mathrm{e}^{-t^2/t_0^2},$$
(37)
with an equilibrium temperature $`T_{eq}=1.0`$, a maximum temperature $`T_m=2.0`$, and a profile-width $`t_0^2=10\tau _{pl}`$. The collision term is added to the r.h.s. of Eq.(2), which becomes
$$\frac{df_\pm (\stackrel{}{p},t)}{dt}=S_\pm (\stackrel{}{p},t)+C_\pm (\stackrel{}{p},t).$$
(38)
This “relaxation time” approximation assumes that the system evolves rapidly towards thermal equilibrium after the particles are produced. It has been used before, both in the absence of back-reactions and including them, but with source terms that neglect fluctuations on short time-scales.
We note that in the low density limit: $`f(\stackrel{}{p},t)\ll 1`$, one can neglect the distribution function in the source term, Eq. (3), and Eq. (38) has the simple solution
$$f_\pm ^0(\stackrel{}{p},t)=\int _{-\infty }^t𝑑t^{\prime }\mathrm{exp}\left[\frac{t^{\prime }-t}{\tau _r}\right]\left(S_\pm ^0(\stackrel{}{p},t^{\prime })+\frac{f_\pm ^{eq}(\stackrel{}{p},t^{\prime })}{\tau _r}\right).$$
(39)
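The equivalence of this quadrature with a direct integration of Eq. (38) is easy to verify numerically; the toy sketch below is ours, with an assumed Gaussian source burst, a constant mode energy, and illustrative parameter values.

```python
# Toy check (ours) of the low-density limit: integrate Eq. (38),
#   df/dt = S0(t) + (f_eq(t) - f)/tau_r,
# directly and compare with the quadrature of Eq. (39).  The Gaussian
# source S0 and all parameter values are illustrative assumptions.
import numpy as np

tau_r, omega = 2.0, 1.5                    # relaxation time, fixed mode energy
T_eq, T_m, t0sq = 1.0, 2.0, 10.0           # temperature profile of Eq. (37)

def T(t):
    return T_eq + (T_m - T_eq) * np.exp(-t**2 / t0sq)

def f_equil(t):                            # Bose case of Eq. (36)
    return 1.0 / (np.exp(omega / T(t)) - 1.0)

def S0(t):                                 # toy particle-production burst
    return 0.1 * np.exp(-(t - 1.0) ** 2)

t = np.linspace(-15.0, 25.0, 8001)
dt = t[1] - t[0]

f = np.empty_like(t)                       # forward-Euler solution of Eq. (38)
f[0] = f_equil(t[0])
for i in range(len(t) - 1):
    f[i + 1] = f[i] + dt * (S0(t[i]) + (f_equil(t[i]) - f[i]) / tau_r)

g = S0(t) + f_equil(t) / tau_r             # integrand of Eq. (39) ...
w = np.exp(t / tau_r)                      # ... written with an integrating factor
cum = np.concatenate(([0.0], np.cumsum(0.5 * dt * (w[1:] * g[1:] + w[:-1] * g[:-1]))))
f0 = cum / w

assert np.allclose(f[2500:], f0[2500:], atol=5e-3)   # agree once the cut at t[0] decays
```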
In our numerical studies we treat $`\tau _r`$ as a parameter and study the effect of $`C`$ on the plasma oscillations. Our results for bosons using this crude approximation are depicted in Fig. 6. For $`\tau _r\gg \tau _{pl}`$ the oscillations are unaffected, as anticipated if $`1/\tau _r`$ is interpreted as a collision frequency. For $`\tau _r\sim \tau _{pl}`$ the collision term has a significant impact, with both the amplitude and frequency of the plasma oscillations being damped. There is a $`\tau _r`$ below which no oscillations arise and the system evolves quickly and directly to thermal equilibrium.
## IV Summary and Conclusion
We have studied spontaneous particle creation in the presence of back-reactions and collisions, both of which dramatically affect the solution of the kinetic equation. The back-reactions lead to plasma oscillations that are damped by the thermalising collisions if the collision frequency is comparable to the plasma frequency. In electric fields where the period of the plasma oscillations is large compared to the time-scales characterising particle production, the non-Markovian features of the source term play little role in the back-reaction process.
Plasma oscillations are a necessary feature of all studies such as ours but are they relevant to the creation of a quark-gluon plasma? If we set the scale in our calculations by assuming that fermions are created in the impulse configuration with $`ϵ_{\perp }\sim \mathrm{\Lambda }_{\mathrm{QCD}}\simeq 0.5\sqrt{\sigma }`$, where $`\sigma `$ is the QCD string tension, then $`A_0=10`$ with $`e^2=5`$ corresponds to an initial field strength $`eE\simeq 15\sigma `$ and energy density $`\frac{1}{2}E^2\simeq 20\sigma ^2`$. These are very large values but even so the plasma oscillation period is still large: $`\tau _{pl}=5`$fm/c, and collisions can only act to increase that. We therefore expect that a quark-gluon plasma will have formed and decayed well before plasma oscillations can arise. On these short time-scales non-Markovian effects will be important.
Our estimate shows back-reactions to be unimportant on small time-scales but that is not true of collisions. However, it is clear that in QCD applications they must be described by something more sophisticated than the “relaxation-time” approximation.
Finally, with $`1/\mathrm{\Lambda }_{\mathrm{QCD}}`$ setting the natural scale, the finite interaction volume will clearly be important and hence the assumption of a spatially homogeneous background field must also be improved before calculations such as ours are relevant to a quark-gluon plasma. Ref. is a step in that direction.
## Acknowledgments
V.A.M and D.V.V. are grateful for the support and hospitality of the Rostock University where part of this work was conducted. This work was supported by the US Department of Energy, Nuclear Physics Division, under contract no. W-31-109-ENG-38; the US National Science Foundation under grant no. INT-9603385, the State Committee of Russian Federation for Higher Education under grant N 29.15.15; BMBF under the program of scientific-technological collaboration (WTZ project RUS-656-96); the Hochschulsonderprogramm (HSP III) under the project No. 0037-6003; and benefited from the resources of the National Energy Research Scientific Computing Center. S.M.S. is a F.-Lynen Fellow of the A.v. Humboldt foundation.
# A gluon condensate term in a heavy quark mass
## Abstract
We investigate a connection between the renormalon ambiguity of the heavy quark mass and the gluon condensate contribution to the quark dispersion law, related to the virtuality that defines the displacement of the heavy quark from the perturbative mass-shell inside a hadron.
An Operator Product Expansion (OPE) is among the most powerful tools in heavy quark physics. In this respect it is usually applied in the form of a series in the inverse heavy quark mass, which determines the characteristic energy scale, say, in sum rules or for decays etc. 1 . It is well recognized that the Wilson coefficients standing in front of the quark-gluon operators can contain an uncertainty caused by the factorization of the perturbative contribution and the nonperturbative matrix elements of composite operators. In this case a restriction on internal virtualities in Feynman diagrams has to be introduced to control the dependence on an “infrared” energy scale $`\lambda `$. Usually, the gluon propagator is modified by the replacement $`1/k^2\to 1/(k^2-\lambda _g^2)`$, or a cut-off of the gluon momenta is performed as $`k^2>\lambda ^2`$ 2 . The calculation results depend on these parameters. Say, a peculiar behaviour at $`\lambda _g^2\to 0`$ appears in physical quantities. For example, a perturbative correlator of two heavy quark currents acquires a power correction like $`\lambda ^4/m^4`$, where $`m`$ is the heavy quark mass 3 . Physically, it means that the OPE can be valid if we sum the perturbative and nonperturbative parts with the vacuum expectation of the gluon operator, which has the same low energy scale dependence: the gluon condensate $`\sim \lambda ^4`$. Then the $`\lambda `$-dependent term can be absorbed by an appropriate definition of the OPE with the condensates. Another case takes place for the uncertainty in the heavy quark mass, where the perturbative calculation of the self-energy with the gluon virtuality cut off leads to a term linear in $`\lambda `$. However, there is no appropriate operator whose vacuum expectation is proportional to the first power of the low energy scale 1 . It was shown that the mentioned uncertainty, proportional to powers of the factorization scale $`\lambda `$, can be related with the perturbative summation of higher order diagrams, which in the limit of an infinitely large number of flavors gives a divergent series in $`\beta _0\alpha _s`$, where $`\beta _0`$ denotes the first coefficient of the Gell-Mann–Low function in QCD. The Borel transform of such series has some peculiar points, which provide the uncertainty in the inverse transformation. This uncertainty, related with the divergence of the perturbative series, is called the renormalon 4 , since the physical content of this fact is clarified by the representation in which the series are combined in the running of the QCD coupling constant dependent on the gluon virtuality. The coupling has a singularity, which is an indication of confinement. In this way, the uncertainty in powers of $`\mathrm{\Lambda }_{QCD}`$ appears again. Modern studies on renormalon applications can be found in Kata . These facts imply that the OPE for fixed values of physical quantities (say, partial widths or coupling constants in the sum rules) in terms of the perturbative heavy quark mass results in a heavy quark mass whose value, extracted from the data, strongly depends on the order of calculation in the $`\alpha _s`$-series 1 : the mass value is significantly changed from order to order.
Thus, the heavy quark quantities have renormalon uncertainties connected to the infrared confinement in QCD. Some of them can be absorbed by an appropriate definition of the OPE with condensates. The heavy quark mass is of special interest, since its infrared uncertainty cannot be straightforwardly absorbed by the vacuum expectation of an operator with dimension 1 in the energy scale.
In the present paper we evaluate the gluon condensate contribution to the dispersion law of the heavy quark. We find that the corresponding operator is divided by the third power of the quark virtuality, which results in the appropriate dimension of the term in the heavy quark mass. We discuss how this fact can be used to cancel the infrared uncertainty of the mass.
We perform the calculation of the diagram shown in Fig.1 in the technique of the fixed-point gauge 5 with the NRQCD propagators of heavy quarks 6 .
The covariant form of the two-point heavy quark effective action $`\overline{h}_v\mathrm{\Gamma }h_v`$ can be represented as
$`\mathrm{\Gamma }`$ $`=`$ $`pv-{\displaystyle \frac{(pv)^2-p^2}{2m}}+`$ (1)
$`{\displaystyle \frac{\pi ^2}{24}}\left\langle {\displaystyle \frac{\alpha _s}{\pi }}G_{\mu \nu }^2\right\rangle \left[{\displaystyle \frac{(pv)^2-p^2}{m^2}}{\displaystyle \frac{1}{\left(pv-\frac{(pv)^2-p^2}{2m}\right)^3}}+{\displaystyle \frac{1}{m}}{\displaystyle \frac{1}{\left(pv-\frac{(pv)^2-p^2}{2m}\right)^2}}\right],`$
where $`v`$ denotes the four-velocity of the hadron containing the heavy quark. The validity of (1) holds under a certain condition on the region of kinematical variables: the gluon condensate term in the dispersion law of the quark must be less than the leading contribution.
In the rest frame of the hadron $`v=(1,\mathrm{𝟎})`$ we have
$$pv-\frac{(pv)^2-p^2}{2m}=p_0-\frac{𝐩^2}{2m}=\mathrm{\Delta }E,$$
where $`\mathrm{\Delta }E`$ denotes a heavy quark virtuality inside the hadron. The perturbative mass-shell is defined by the following expression:
$$\mathrm{\Delta }E=0.$$
It is quite clear that the confined quark cannot reach the mass-shell and there is a minimal displacement from the surface of free quark motion, which is a nonperturbative quantity. So, we suppose that
$$\mathrm{\Delta }E\sim \mathrm{\Lambda }_{QCD}.$$
In what follows we apply the model with the quark dispersion law of the form dictated by the account of the gluon condensate in (1):
$$p_0=\omega _0+\frac{𝐩^2}{2\stackrel{~}{m}},$$
(2)
where again $`\omega _0\sim \mathrm{\Lambda }_{QCD}`$ and $`\stackrel{~}{m}`$ denotes the effective heavy quark mass, which differs from the perturbative pole mass due to the contribution of the gluon condensate. In the nonrelativistic rest frame we have<sup>1</sup><sup>1</sup>1In NRQCD, where $`|𝐩|/m<1`$, the gluon condensate correction to the heavy quark action $`\mathrm{\Gamma }`$ tends to zero at large virtualities $`Q=\mathrm{\Delta }E`$ as $`O(1/Q^2)`$ and $`O(1/Q^3)`$ for the static and dynamic terms, respectively. However, the correction remains small even at lower scales.
$$\mathrm{\Gamma }=p_0-\frac{𝐩^2}{2m}+\frac{\pi ^2}{24}\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle \left[\frac{𝐩^2}{m^2\mathrm{\Delta }E^3}+\frac{1}{m\mathrm{\Delta }E^2}\right].$$
(3)
Then, we can derive that
$$\stackrel{~}{m}=m+\frac{\pi ^2}{12}\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle \frac{1}{\mathrm{\Delta }E^3}.$$
(4)
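For the reader’s convenience we make the intermediate step explicit: matching the coefficient of $`𝐩^2`$ in (3) to the form $`-𝐩^2/(2\stackrel{~}{m})`$ gives

$$\frac{1}{2\stackrel{~}{m}}=\frac{1}{2m}-\frac{\pi ^2}{24}\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle \frac{1}{m^2\mathrm{\Delta }E^3},$$

and expanding to first order in the condensate then reproduces (4).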
Eq.(4) shows that at $`\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle \sim \mathrm{\Lambda }_{QCD}^4`$ the contribution of the gluon condensate to the heavy quark mass is about $`\mathrm{\Lambda }_{QCD}`$, i.e. it is linear in the infrared scale of energy, whereas the operator determining this term is of the fourth power in the scale.
Note that the second term in the gluon condensate contribution shown in (3), which is independent of $`𝐩^2`$, results in a correction to the static energy of the heavy quark, so that
$$\delta \omega _0=\frac{\pi ^2}{24}\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle \frac{1}{m\mathrm{\Delta }E^2}.$$
(5)
Furthermore, the gluon condensate contributes to $`\omega _0`$ in two ways: the first one is explicitly given by (5), the second is related to the redefinition of the heavy quark mass ($`m\to \stackrel{~}{m}`$). Indeed, in this case we have to redefine the “large” momentum of the heavy quark by substituting $`\stackrel{~}{m}v`$ for $`mv`$ and so on, which means that the resulting change of the static energy is given by
$$\mathrm{\Delta }\omega _0=\stackrel{~}{m}-m+\delta \omega _0\sim \mathrm{\Lambda }_{QCD}\left(1+\kappa \frac{\mathrm{\Lambda }_{QCD}}{2m}\right),\kappa \sim 1.$$
Then, we can see that after accounting for the gluon condensate, the displacement of the static energy can essentially be absorbed in the mass $`\stackrel{~}{m}`$.
Furthermore, we can write down the following relations for the perturbative dependence of heavy quark quantities on the scale $`\lambda `$:
$$\frac{dm^{\mathrm{pert}}}{d\lambda }=\frac{d\omega _0}{d\lambda }=\frac{d\mathrm{\Delta }E}{d\lambda },$$
(6)
where in the second equality we neglect the dynamical term and retain the static energy. Then the linear dependence on $`\lambda `$ in $`m`$ appears in two ways: the first is the direct calculation of the self-energy diagram for the heavy quark, which results in
$$\frac{dm^{(1)}}{d\lambda }=C_m\alpha _s(\lambda ),$$
and the second comes from the gluon condensate term due to its $`\mathrm{\Delta }E`$ dependence according to (4) and (6) (the vacuum condensate of the gluon operator has the higher power: $`\lambda ^4`$), so that
$$\frac{dm^{(2)}}{d\lambda }=-\frac{\pi ^2}{4}\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle \frac{1}{\mathrm{\Delta }E^4}C_m\alpha _s(\lambda ).$$
Then, we see that at $`\mathrm{\Delta }E\simeq \omega _0`$ the heavy quark mass can be physically independent of the introduction of the factorization scale $`\lambda `$, i.e. $`\frac{dm}{d\lambda }=\frac{dm^{(1)}}{d\lambda }+\frac{dm^{(2)}}{d\lambda }=0`$, if
$$\omega _0^4=\frac{\pi ^2}{4}\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle .$$
At $`\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle \simeq (0.37\mathrm{GeV})^4`$ 7 the evaluation gives
$$\omega _0\simeq 0.46\mathrm{GeV}.$$
Neglecting the dynamical term in the heavy quark virtuality we obtain the following estimate of displacement for the heavy quark mass due to the gluon condensate:
$$\mathrm{\Delta }m\simeq \frac{1}{3}\omega _0\simeq 0.15\mathrm{GeV},$$
(7)
which can serve as a constraint on the maximal value, since we have used the minimal virtuality.
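The arithmetic behind these two estimates is easily checked; the snippet below is ours, purely for verification of the quoted numbers.

```python
# Verification (ours) of the numerical estimates: with the condensate
# <(alpha_s/pi) G^2> = (0.37 GeV)^4, the cancellation condition fixes omega_0,
# and Eq. (4) at Delta_E = omega_0 then gives Delta_m = omega_0/3.
import math

G2 = 0.37 ** 4                                  # gluon condensate, GeV^4
omega0 = (math.pi ** 2 / 4.0 * G2) ** 0.25      # omega_0^4 = (pi^2/4) <G^2>
dm = math.pi ** 2 / 12.0 * G2 / omega0 ** 3     # Delta_m from Eq. (4)

print(f"omega_0 = {omega0:.3f} GeV")            # 0.464 GeV, i.e. ~0.46 GeV
print(f"Delta_m = {dm:.3f} GeV")                # 0.155 GeV, i.e. omega_0/3
```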
Thus, the main statement on the nonperturbative displacement of the heavy quark masses remains the following: it is of the order of the confinement scale. However, we can now get some definite estimates for these values.
The validity of the above consideration is determined by the condition for the formation of the hadron containing the heavy quark. Indeed, the time for the binding of the heavy quark, i.e. for the formation of its wavefunction, in general depends on the hadron contents. So, in the heavy-light hadron $`H_Q`$ with a single heavy quark, the static energy for light degrees of freedom is of the order of $`\mathrm{\Lambda }_{QCD}`$, and we get the estimate
$$\tau [H_Q]\sim \frac{1}{\mathrm{\Lambda }_{QCD}},$$
which is comparable with the characteristic time for the heavy quark interaction with the gluon condensate. In the doubly heavy hadron $`H_{QQ}`$, the formation of the wavefunction is determined by the average size of the doubly heavy system divided by the heavy quark velocity
$$\tau [H_{QQ}]\sim \frac{r_{QQ}}{v_Q}\sim \frac{1}{m_Qv_Q^2},$$
and it depends on the inverse kinetic energy in the doubly heavy subsystem. So, the calculated contribution of the gluon condensate to the heavy quark mass would be inapplicable for quarks heavier than 20 GeV. However, in systems composed of charmed and beauty quarks the kinetic energy is about $`\mathrm{\Lambda }_{QCD}`$, and in reality we deal with the situation when the effects connected with the formation of the wavefunction of the hadron containing the heavy quark and the gluon condensate term are competitive. So, for instance, the energy shift due to the interaction of the Coulomb doubly heavy system with the gluon condensate MV is determined by the following expression:
$$\mathrm{\Delta }E_{Q\overline{Q}}=\frac{\pi ^2}{18}\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle \frac{n^2m}{(mE_n)^2}ϵ_{nl},E_n=\frac{1}{4n^2}\left(\frac{2}{3}\alpha _s\right)^2m,$$
(8)
where $`ϵ_{nl}`$ is a rational factor depending on the principal and radial quantum numbers $`n`$ and $`l`$. Comparing (8) with (4), we see that despite the different approaches these two equations can be in agreement with each other if we substitute $`\mathrm{\Delta }E\to E_n`$, i.e. if the virtuality is determined by the binding energy of the heavy quark in the heavy quarkonium system. This fact implies that the heavy quarkonium represents a specific case of the general consideration applied to the Coulomb system, wherein the virtuality is prescribed a concrete value. As for the numerical estimates, we have to take into account that one should use (4) instead of (8) if the virtuality of the heavy quark in the quarkonium is less than the value following from the general form of the dispersion law for the quark, i.e. if it is less than $`0.46`$ GeV. Otherwise, the quark is heavy enough to use the Coulomb approximation of (8).
To conclude, we have shown that the Operator Product Expansion including the gluon condensate results in the following dispersion law for the heavy quark:
$$p_0(𝐩)=\omega _0+\frac{𝐩^2}{2m},$$
where the correction to the heavy quark mass is given by
$$\mathrm{\Delta }m=\frac{\pi ^2}{12}\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle \frac{1}{\omega _0^3},$$
and the infrared ambiguity in the mass, caused by the corresponding renormalon, can be cancelled at
$$\omega _0^4=\frac{\pi ^2}{4}\left\langle \frac{\alpha _s}{\pi }G_{\mu \nu }^2\right\rangle .$$
Of course, the conclusion is drawn to the given, linear order in $`\alpha _s`$, and the well known divergence of the heavy quark pole mass with increasing order in $`\alpha _s`$ can probably be removed if the higher order corrections to the Wilson coefficient of the gluon condensate, as well as the higher condensates, are included in the consideration in the same manner.
The author expresses the gratitude to A.L.Kataev for fruitful discussions and valuable remarks.
This work is in part supported by the Russian Foundation for Basic Research, grants 99-02-16558 and 96-15-96575.
# Evidence for charge localization in the ferromagnetic phase of La1-xCaxMnO3 from High real-space-resolution x-ray diffraction
## I Introduction
The importance of the lattice to the colossal magnetoresistance (CMR) phenomenon is now fairly well established. There is a strong electron-lattice coupling due to the Jahn-Teller effect which affects Mn<sup>3+</sup> ions and the doped carriers tend to localize as small polarons at high temperature and low doping. However, exact agreement about the detailed nature of local Jahn-Teller (JT) and polaronic distortions is lacking. This information is important for separating competing models describing the CMR phenomenon.
Early diffraction, atomic pair distribution function (PDF) and extended x-ray absorption fine structure (XAFS) studies demonstrated that atomic disorder, measured as the Mn-O bond-length distribution, increases as samples pass through the metal-insulator (MI) transition with temperature. This is qualitatively what is expected if polarons are forming as the sample enters the insulating phase. These techniques also agree that the onset of polaron formation is gradual with temperature, taking place over a temperature range of 50–100 K below the MI transition temperature $`T_m`$. In general, PDF and XAFS results suggest that in CMR materials the local structure is significantly different from that observed crystallographically. In particular, MnO<sub>6</sub> octahedra can have a significant JT distortion locally even when globally the average JT distortion is zero or negligible. Although the local structural studies agree on this point there is disagreement on the amplitude of the distortions, in particular the length of the long JT bond. For instance, Louca et al. propose, based on the observation of a persistent negative fluctuation in the neutron PDFs of La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> at around 2.2–2.3 Å, that this is the length of the JT long-bond. This seems surprising given that the JT long-bond in the undoped material is shorter at 2.18 Å. On the other hand, XAFS measurements of the Ca doped system suggest that the JT long-bond is between 2.1–2.2 Å and a difference modeling of the neutron PDF from the La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> system supports these findings, as we discuss below.
Another question which is not resolved is the nature of the charge ground-state of the ferromagnetic metallic (FM) phase. Local density approximation calculations suggest that a delocalized charge state would not have any JT distortion even when the $`e_g`$ band is not completely empty. Thus, the ferromagnetic metallic state, which we refer to as the Zener state (following Radaelli), would have regular undistorted octahedra. The observation of essentially undistorted MnO<sub>6</sub> octahedra at low temperature in the FM phase is supported by XAFS and PDF results at high enough doping (away from the low-temperature MI transition). However, there has been a prediction based on XAFS data that small octahedral distortions persist at low temperature in the FM phase suggesting that the ground-state is a large polaron state. PDF data have also been interpreted in terms of a three-site polaron model at low temperature persisting at least up to a doping level of $`x=0.3`$. It is important to determine the ground state of the FM phase.
Other interesting phenomena also take place in the FM phase when the MI transition is approached as a function of temperature or doping. Upon increasing temperature, structural distortions start to appear in the local structure below $`T_c`$. They also appear when, at low temperature, doping is decreased towards $`x=0.17`$–0.18. In the vicinity of the MI transition the FM phase does not seem to be in a pure Zener state. The exact nature of this inhomogeneous state is, however, not fully characterized.
We have undertaken a high real-space resolution x-ray PDF study of the La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> system to try and resolve some of the issues discussed above. Specifically, by applying the PDF technique we would like to study the distribution of Mn-O distances in a series of manganites to elucidate the nature of the charge ground state of the FM phase: i.e., is it fully delocalized or not. We would like to investigate what is the nature of the local Jahn-Teller and polaronic distortions in this material and how they evolve as a function of doping and temperature. Finally, we would like to differentiate between the competing models for the evolution of the charge state away from the ground-state as the MI transition is approached as a function of temperature and doping.
By definition, the atomic pair distribution function PDF is the instantaneous atomic density-density correlation function which describes the atomic arrangement in materials. It is the sine Fourier transform of the experimentally observable structure factor obtained in a powder diffraction experiment. Since the total structure function includes both the Bragg intensities and diffuse scattering, its Fourier associate, the PDF, yields both the local and average atomic structure of materials. By contrast, an analysis of the Bragg scattering intensities alone yields only the average crystal structure. Determining the PDF has been the approach of choice for characterizing glasses, liquids and amorphous solids for a long time. However, its widespread application to crystalline materials, such as manganites, where some local deviation from the average structure is expected to take place, has been relatively recent.
We chose to use high-energy x-rays to measure the PDFs because it is possible to get high-quality data at high-$`Q`$ values ($`Q`$ is the magnitude of the wavevector) allowing accurate high real-space resolution PDFs to be determined. It was previously thought that neutrons were superior for high-$`Q`$ measurements because, as a result of the $`Q`$-dependence of the x-ray atomic form factor the x-ray coherent intensity gets rather weak at high-$`Q`$; however, the high-flux of x-rays from modern synchrotron sources more than compensates for this and we have shown that high quality high-resolution PDFs can be obtained using x-rays.
## II Experimental
The La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> samples were synthesized by standard solid-state reaction. Stoichiometric amounts of La<sub>2</sub>O<sub>3</sub>, CaCO<sub>3</sub> and MnO<sub>2</sub> were mixed with a mortar and pestle and placed in an alumina crucible. The material was fired at 1050 °C, 1300 °C and 1350 °C for one day each with intermediate grindings. After the final grinding, the material was fired at 1400 °C for an additional day and then slow-cooled over 20 hours to room temperature. Samples were characterized by conventional powder x-ray diffraction, temperature-dependent magnetization, and electrical resistivity.
Synchrotron powder diffraction experiments were carried out at the A2 56 pole wiggler beam line at Cornell High Energy Synchrotron Source (CHESS). This beam line is capable of delivering an intense beam of high energy x-rays required for high resolution PDF measurements. Data were collected in symmetric transmission geometry. The polychromatic incident beam was dispersed using a Si (111) monochromator and x-rays of energy 61 keV ($`\lambda =0.203`$Å) were used. An intrinsic Ge detector coupled to a multi-channel analyzer was used to detect the scattered radiation, allowing us to extract the coherent component of the scattered x-ray intensities by setting appropriate energy windows. The diffraction spectra were collected by scanning at constant $`Q`$ steps of $`\mathrm{\Delta }Q=0.025`$Å<sup>-1</sup>. Multiple scans up to $`Q_{max}=40`$Å<sup>-1</sup> were conducted and the resulting spectra averaged to improve the statistical accuracy and reduce any systematic error due to instability in the experimental set-up. The data were normalized for flux, corrected for background scattering and experimental effects such as detector deadtime and absorption. The part of the Compton scattering at low values of Q not eliminated by the preset energy window was removed analytically applying a procedure suggested by Ruland. The resulting intensities were divided by the average atomic form factor for the sample to obtain the total structure factor S(Q),
$$S(Q)=1+\frac{I_c(Q)-\sum _ic_if_i^2(Q)}{\left[\sum _ic_if_i\right]^2}$$
(1)
where $`I_c`$ is the measured coherent part of the spectrum, $`c_i`$ and $`f_i(Q)`$ are the atomic concentration and scattering factor of the atomic species of type $`i`$ ($`i=`$ La, Ca, Mn and O), respectively. All data processing procedures were carried out using the program RAD. The measured reduced structure factors, $`F(Q)=Q[S(Q)-1]`$, for $`x=0.12`$, 0.25 and 0.33 at $`T=20`$ K are shown in Figure 1.
The data are terminated at $`Q_{\text{max}}=35`$Å<sup>-1</sup> beyond which the signal to noise ratio became unfavorable. Note that this is a very high wavevector for x-ray diffraction measurements; for example, a conventional Cu K$`\alpha `$ x-ray source has a $`Q_{max}`$ of less than 8 Å<sup>-1</sup>. The corresponding reduced atomic distribution functions, $`G(r)`$, obtained via Fourier transform
$$G(r)=\frac{2}{\pi }\int _0^{\infty }Q[S(Q)-1]\mathrm{sin}(Qr)𝑑Q,$$
(2)
are shown as open circles in Figure 2.
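For completeness, the discretized form of Eq. (2) is a single matrix operation; the sketch below is our own illustration with a synthetic $`S(Q)`$ mimicking one pair distance, not the measured data, evaluated on the experimental grid $`\mathrm{\Delta }Q=0.025`$Å<sup>-1</sup> up to $`Q_{max}=35`$Å<sup>-1</sup>.

```python
# Sketch (ours, synthetic input): the discretized sine transform of Eq. (2)
# on the measured grid.  FQ below mimics Q[S(Q)-1] for a single pair
# distance r0; it is an illustration, not the measured structure factor.
import numpy as np

dQ, Qmax = 0.025, 35.0                         # experimental grid (1/Angstrom)
Q = np.arange(dQ, Qmax + dQ, dQ)

r0, sigma = 1.96, 0.08                         # toy Mn-O distance and width (A)
FQ = np.sin(Q * r0) / r0 * np.exp(-0.5 * (sigma * Q) ** 2)

r = np.linspace(0.5, 10.0, 1901)
# G(r) = (2/pi) * integral_0^Qmax Q[S(Q)-1] sin(Qr) dQ, as a discrete sum
G = (2.0 / np.pi) * (np.sin(np.outer(r, Q)) @ FQ) * dQ

print(f"G(r) peaks at r = {r[np.argmax(G)]:.2f} A (input r0 = {r0} A)")
```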
## III Results
### A Polarons versus JT distortion
For the sake of clarity we would like to define the terminology we will use in the following discussion. There are two types of octahedral distortions which are observed in manganites. The first is a quadrupolar symmetry elongation of the MnO<sub>6</sub> octahedron: i.e., it has two long Mn-O bonds and four shorter Mn-O bonds. This is associated with the presence of a Mn<sup>3+</sup> ion and is referred to as a Jahn-Teller distortion. Another possible distortion is an isotropic breathing-mode collapse of the MnO<sub>6</sub> octahedron where a regular octahedron stays regular (6 equal bond lengths) but the octahedron shrinks. This type of distortion can be associated with the presence of a Mn<sup>4+</sup> ion. We refer to this as a polaronic distortion since the Mn<sup>4+</sup> ions appear only when doped holes become localized.
We note that in the literature the Jahn-Teller distorted octahedra are often referred to as “Jahn-Teller polarons”. We avoid this terminology because the presence of Jahn-Teller distorted octahedra need not imply the presence of polarons in the sense of localized doped holes; for example, the undoped LaMnO<sub>3</sub> compound is fully Jahn-Teller distorted but contains no doped holes. Whilst it can be argued that these are polarons because the Jahn-Teller distortion splits the $`e_g`$ band, making this compound insulating, it confuses the discussion of the state, localized or delocalized, of the doped holes. In our discussion we confine the use of “polaron” to describe a doped hole localized with an associated lattice distortion.
These doped-hole polarons have also been described in the literature as “anti-Jahn-Teller polarons”. This terminology comes about when one considers what happens when a doped hole localizes in a background of Jahn-Teller distorted Mn<sup>3+</sup> octahedra, for example, in the lightly doped region of the phase diagram. On the site where the hole localizes the Jahn-Teller distortion is locally destroyed; thus the name. Again, we avoid this terminology because it is not appropriate when the polaronic state is approached from the delocalized ferromagnetic metallic state (the Zener state). In this case, as we discuss below, there are initially no Jahn-Teller distorted octahedra. The octahedra are regular and conform to those seen in the average crystal structure. As the metal-insulator transition is approached the doped holes begin to localize. When they localize both breathing mode collapsed doped-hole polarons (Mn<sup>4+</sup>) and Jahn-Teller distorted sites (Mn<sup>3+</sup>) are created. From this perspective it seems confusing to think of the polarons as “anti-Jahn-Teller” polarons. This also raises the point that, whilst we are not calling the Jahn-Teller distorted octahedra polarons, in the heavily doped material the presence of fully Jahn-Teller distorted octahedra implies the presence of localized Mn<sup>4+</sup> polarons and vice versa.
### B Comparison to the crystal structure
First we compare the present experimental PDFs to the average crystal structure determined by other independent studies. Experimental PDFs were fit with the crystallographic model. The refinement was done using the program PDFFIT. Lattice parameters, isotropic thermal parameters and atomic positions were refined conserving the symmetry of the space group ($`Pbnm`$). The calculated PDFs corresponding to the best fit are shown in Figure 2 as solid lines. Inspection of the figure shows a satisfactory agreement between the calculated and measured PDFs for all three compositions, which shows that the present experimental PDFs are, in general, consistent with the average crystal structure of doped manganites. Furthermore, the refined values reproduce the Rietveld-determined values very well. The rather large difference observed for the PDF peak at $`r\sim 4.0`$Å is believed to be related to dopant ion effects on the La/Ca site. Attempts to model these differences are currently under way.
Local structural deviations from the average structure will show up as deficiencies in the agreement since the fits were constrained so the model has the average structure. We are particularly interested in the size and shape of the local MnO<sub>6</sub> octahedron; we therefore concentrate on the low-$`r`$ region of the PDF. An enlarged view of the region around the nearest neighbor Mn-O distance is shown in Figure 3.
The experimental data are shown as open circles. Two model PDFs are shown: The solid line represents the PDF of the refined average structural model for doped manganites. Although the average structure is orthorhombic, the difference in the three distinct Mn-O bond-lengths is very small making the MnO<sub>6</sub> octahedra virtually regular. The dotted line is the PDF calculated from the average structure of undoped LaMnO<sub>3</sub> where all Mn-O octahedra have a large JT distortion, i.e. short and long Mn-O bonds are present. These are clearly resolved in the calculation. All the model curves are convoluted with the experimental real-space resolution function of the data which comes from the finite $`Q`$-range of the data.
It is apparent from Fig. 3 that the model based on the average structure fits the $`x=0.33`$ and 0.25 data quite well in this low-$`r`$ region but less well in the $`x=0.12`$ data set. In fact, in the $`x=0.12`$ data the dashed line representing the JT distorted octahedra does a qualitatively better job of reproducing the shape of the Mn-O bonds in the region 2.0–2.5 Å and the shape of the second neighbor multiplet around 2.4–2.8 Å. This supports the idea that, locally, large JT distortions persist in the insulating phase although these do not show up in the average crystal structure. In the ferromagnetic metallic phase ($`x=0.25`$ and 0.33) the local structure is much closer to the average crystal structure.
### C Low temperature structure of the MnO<sub>6</sub> octahedra
We now focus on the region of the PDF from $`1.7r2.3`$ Å containing the peaks from the MnO<sub>6</sub> octahedra. This region is shown on an expanded scale in Fig. 4 for doping levels $`x=0.12`$, 0.25 and 0.33 at 20 K.
We are interested to know how the MnO<sub>6</sub> octahedron evolves as a function of doping. At $`x=0.33`$ the first PDF peak is fit with a single Gaussian. There is a suggestion of peak asymmetry, but there is negligible intensity in the region above 2.1 Å. This single Gaussian fit means that all six Mn-O bonds have almost the same length of $`r=1.96`$ Å at $`x=0.33`$ and $`T=20`$ K. This is what would be expected for a fully delocalized charge state.
The PDF for the $`x=0.25`$ sample clearly has intensity on the high-$`r`$ side of its first peak at 1.95 Å, which has been fit with a second Gaussian component. The presence of intensity at this $`r=2.15`$ Å position remains invariant as $`Q_{max}`$ is varied (although the resolution of the feature changes). This suggests that it is real and not artificial, since noise artifacts and termination ripples change position and intensity as $`Q_{max}`$ is varied. The suggestion is, therefore, that even at $`T=20`$ K and $`x=0.25`$ long Mn-O bonds and, therefore, residual Jahn-Teller distorted sites persist in the material. There is no direct evidence for intensity on the low-$`r`$ side of the main 1.95 Å peak although it does not decrease as sharply as in the $`x=0.33`$ sample.
The $`x=0.12`$ sample is in the insulating state and is expected to be fully localized and polaronic. In this case we see three components to the peak and have fit it with three Gaussians. At this composition there exist nominally Mn<sup>3+</sup> octahedra which are Jahn-Teller distorted. Based on the structure of undoped LaMnO<sub>3</sub>, we expect these to have four short bonds at 1.92–1.96 Å and two long bonds at 2.18 Å. The two higher-$`r`$ components of the peak seem consistent with this allocation. The third component at low-$`r`$ might then be expected to originate from the Mn<sup>4+</sup> polaronic sites. This is consistent with the prediction of the breathing mode model which suggests short polaronic bonds of $`<1.9`$ Å, and is also consistent with the crystal chemistry of Mn<sup>4+</sup>. Based on Shannon’s ionic radii the expected Mn<sup>4+</sup>(VI)-O<sup>2-</sup>(II) ionic radius is 1.88 Å. This short Mn-O bond length is also found in the material CaMnO<sub>3</sub> where all Mn sites are nominally Mn<sup>4+</sup>. It appears clear that Mn<sup>4+</sup> polarons exist with bonds $`\sim 1.9`$ Å together with Mn<sup>3+</sup> sites which have a JT distortion similar to that in the undoped material.
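The Gaussian decompositions quoted above can be reproduced with a standard least-squares procedure; the following sketch is our own illustration on synthetic data, with starting centres at the 1.88, 1.96 and 2.18 Å bond lengths discussed in the text.

```python
# Sketch (ours, synthetic data): three-Gaussian decomposition of a first
# neighbour PDF peak, with starting centres at the 1.88, 1.96 and 2.18 A
# bond lengths discussed in the text.
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(r, *p):
    # p = (A1, mu1, s1, A2, mu2, s2, A3, mu3, s3)
    return sum(p[i] * np.exp(-0.5 * ((r - p[i + 1]) / p[i + 2]) ** 2)
               for i in (0, 3, 6))

r = np.linspace(1.6, 2.5, 300)
rng = np.random.default_rng(0)
truth = (1.0, 1.88, 0.05, 4.0, 1.96, 0.06, 1.0, 2.18, 0.06)
data = three_gaussians(r, *truth) + 0.05 * rng.standard_normal(r.size)

p0 = (1.0, 1.88, 0.05, 4.0, 1.96, 0.05, 1.0, 2.18, 0.05)
popt, _ = curve_fit(three_gaussians, r, data, p0=p0)
print("fitted centres (A):", popt[1], popt[4], popt[7])
```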
### D Temperature dependence of the MnO<sub>6</sub> octahedra
We now concentrate on the temperature evolution of MnO<sub>6</sub> octahedra. In Fig. 5 we show the evolution of the PDF peaks around $`r=2.0`$ Å as a function of temperature between $`T=20`$ K and $`T=300`$ K for the $`x=0.25`$ sample. The metal-insulator transition for this sample is at $`T=235`$ K.
The data-sets are offset for clarity. As we discussed in the previous section, at low temperature a large central peak centered around 1.97 Å is evident with a small high-$`r`$ component at 2.15 Å. As temperature is raised, the intensity in the high-$`r`$ component increases. It is dangerous to infer a bond length directly from the position of a maximum in the data because of the influence of noise on these data. The small intensity of these peaks is evident in Fig. 2 and noise contamination can cause a peak intensity to be shifted somewhat. However, the presence or absence of intensity at some position is a robust result. It is clear that the intensity of the peak grows in the region $`r=2.1`$–2.15 Å and below 1.9 Å. It is also apparent that there is no intensity above 2.2 Å, suggesting that the JT long-bond is 2.15–2.18 Å.
It is also interesting to see how the PDF of the MnO<sub>6</sub> octahedron of the $`x=0.33`$ sample evolves with temperature. This is shown in Fig. 6.
At low temperature this sample exhibited a single peak centered at $`1.96`$ Å. There is no evidence of any JT long-bond. At 100 K clear evidence of a component of intensity at $`r=2.18`$ Å appears. The central peak also comes down less steeply on the low-$`r`$ side and the peak centroid is shifted somewhat to lower-$`r`$ which suggest that some intensity is appearing on the low-$`r`$ side of the peak.
It is interesting to note the similarity between the $`x=0.33`$ sample at 100 K with the $`x=0.25`$ sample at 20 K which is shown in the top panel of Fig. 6.
## IV Discussion
The ability to collect high-quality data at high values of the wavevector, $`Q`$, using high-energy synchrotron radiation has allowed us to decompose the bond length distribution of the MnO<sub>6</sub> octahedra into its components more reliably than was previously possible with neutrons. This is well demonstrated by the fact that the present PDF data are consistent with the structure models derived by independent Rietveld studies and, furthermore, produce physically reasonable values for the short Mn<sup>4+</sup> and the short and long Mn<sup>3+</sup> bonds.
We draw the following conclusions from our results described above. If we assume that the $`x=0.12`$ sample is fully localized in the polaronic state at 10 K, as is suggested by its exponential resistivity , we can interpret the three components of the first peak in the PDF as being due to JT distorted Mn<sup>3+</sup> octahedra and regular but contracted Mn<sup>4+</sup> octahedra. If we assume the number of doped holes, $`p`$, to be the nominal Ca concentration, $`x`$, then we expect the number of 1.88 Å bonds to be $`6p=6x=0.72`$. The number of JT distorted sites will be $`(1p)=(1x)`$. Then, if we assume that the JT distorted Mn<sup>3+</sup> octahedra have essentially 4 short and 2 long bonds with average lengths 1.95 and 2.18 Å, as observed in the undoped material, we expect a peak with intensity $`4(1x)=3.52`$ at 1.95 Å and $`2(1x)=1.76`$ at 2.18 Å. Fitting the first peak in the experimental PDF with Gaussians (Fig. 4(a)) yields subcomponents with intensity ratios of 1.0(5):4.0(5):1.0(5) centered at 1.84, 1.96 and 2.18 Å (the corresponding values for the model are 0.72:3.52:1.76 at 1.88, 1.95 and 2.18 Å). Given the noise level of the data, the agreement is satisfactory, providing some confidence to this interpretation.
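The bookkeeping behind these expected intensities is elementary; for transparency, the short snippet below (ours) evaluates it at $`x=0.12`$.

```python
# Bookkeeping (ours) for a fully localized state at doping x: 6x short
# Mn(4+) bonds near 1.88 A; each of the (1-x) Jahn-Teller Mn(3+) sites
# contributes 4 bonds near 1.95 A and 2 long bonds near 2.18 A.
x = 0.12
counts = {1.88: 6 * x, 1.95: 4 * (1 - x), 2.18: 2 * (1 - x)}
print(counts)                                      # {1.88: 0.72, 1.95: 3.52, 2.18: 1.76}
print("bonds per Mn site:", sum(counts.values()))  # 6.0, as required
```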
We would now like to expand on the interplay between the polaron formation and the existence of JT distortions in the manganites studied. A simple picture could be constructed as follows: There are no distorted Mn-O octahedral units in the delocalized Zener phase and all Mn-O bond-lengths in the MnO<sub>6</sub> octahedron are $`\sim 1.97`$ Å, as found in the average crystal structure. This is the case observed at $`x=0.33`$ and $`T=20`$ K, so that the sample may be considered to be in a fully delocalized charge state. In the insulating phase there coexist small, regular Mn<sup>4+</sup> octahedra with six Mn-O bonds of 1.85–1.9 Å and Jahn-Teller distorted Mn<sup>3+</sup> octahedra with four bonds of $`\sim 1.97`$ Å and two bonds of $`\sim 2.18`$ Å length. This is the picture which we see at $`x=0.12`$, $`T=20`$ K and at $`x=0.25`$, $`T=300`$ K (see Figs. 4(a) and 5). These two samples are in the insulating phase and the charge carriers, as the measured Mn-O bond length distributions suggest, are essentially fully localized. Within the FM phase but at intermediate temperatures, and compositions approaching the MI transition, we see evidence for JT-long bonds appearing. This suggests that there is a coexistence of localized Jahn-Teller phase and delocalized Zener phase material. The sample is still conducting because the regions of Zener phase percolate. This is similar to the picture emerging for the MI transition in La<sub>0.625-y</sub>Pr<sub>y</sub>Ca<sub>0.375</sub>MnO<sub>3</sub> which occurs as a function of $`x`$, although the length-scale of the inhomogeneities is much smaller in this case.
Our picture is consistent with the earlier observation of a breathing mode distortion on one-in-four manganese sites which set in below the MI transition in La<sub>0.79</sub>Ca<sub>0.21</sub>MnO<sub>3</sub>. This was found to reproduce the changes in the local structure which occur at the MI transition in this sample when the amplitude of the collapse was $`\delta =0.12`$ Å. Since the starting value of the Mn-O bond-length at low temperature before the distortion set in was 1.97 Å this results in short Mn<sup>4+</sup> bonds of 1.85 Å shorter than, but similar to, what we observe here. Furthermore, because the model was evaluated at the special composition of $`x=0.25`$, this breathing mode collapse coincidentally resulted in Jahn-Teller-like distortions on all the remaining Mn sites with Mn<sup>3+</sup>-long bonds of $`1.97+0.12=2.09`$ Å. This is illustrated schematically in Fig. 7.
This model gives a very satisfactory agreement with the current results given its simplicity. We do not wish to imply here that the breathing mode collapse causes the Jahn-Teller distortion; merely that they coexist in the localized phase and that there is good consistency between the earlier neutron data and the current x-ray data.
It is interesting to note from Fig. 7 that $`x=0.25`$ is a special composition where small polarons can form an ordered lattice separated by JT distorted Mn<sup>3+</sup> sites which are unstrained. Each Mn<sup>4+</sup> site has 6 neighboring Mn<sup>3+</sup> sites whose long-bonds point towards it and these complexes fit together into a space filling 3d network. There is no experimental evidence that polarons order in this way in this system; rather charge stripes are observed. However, this model does show how orbitals can order locally around an Mn<sup>4+</sup> defect site to minimize strain.
So far, we have shown that by studying the size and shape of the MnO<sub>6</sub> octahedra we can determine whether the charge is localized as small polarons (observation of 1.88 Å and 2.18 Å Mn-O bonds in the PDF) or delocalized (observation of a single Mn-O bond length $`\sim 1.97`$ Å). These two states are exemplified by the $`x=0.12`$ sample at 20 K (Fig. 4(a)) and the $`x=0.33`$ sample at 20 K (Fig. 6(c)) respectively. As the temperature is increased below T<sub>c</sub> in the $`x=0.33`$ and $`x=0.25`$ samples, significant components of the long and short bonds become evident. This suggests that carriers are becoming localized in parts of the sample. The high-resolution PDF data therefore support the idea of an inhomogeneous sample with charge delocalized metallic regions of Zener phase coexisting with regions of charge localized JT phase. As T<sub>c</sub> is approached from below the amount of charge localized phase increases at the expense of the charge delocalized phase as evidenced by the growth of intensity at approximately 1.88 and 2.1 Å on increasing temperature in the $`x=0.25`$ sample (Fig. 5). This view is consistent with Booth et al.’s interpretation of their XAFS data and the interpretation of our earlier neutron PDF data. The MI transition and the onset of long-range ferromagnetic order presumably coincide with the percolation of the Zener phase. This is similar to the original proposition that the MI transition was a percolation transition by Louca et al., though our data support the idea that the Zener phase percolates rather than a network of connected 3-site polarons. A number of theories predict charge phase segregation or two-fluid behavior of the charge system. We note that our data are entirely consistent with the coexistence of delocalized Zener phase and localized JT phase below T<sub>c</sub> but do not directly imply the existence of charge segregation between these phases; rather it is just the state of localization of the charges which differs in the different regions of the sample, as is proposed for La<sub>0.625-y</sub>Pr<sub>y</sub>Ca<sub>0.375</sub>MnO<sub>3</sub>.
Finally, we address the issue of how the charge state evolves as the MI transition is approached as a function of doping $`x`$. As the experimental data suggest, for $`x=0.33`$ and T=20 K no polaronic or JT distortions are present, i.e. the true ground-state of the FM metallic phase is a completely delocalized Zener state. Although we have a sparse data-set, one can notice the similarity of the Mn-O bond length distributions for $`x=0.33`$ at 100 K and $`x=0.25`$ at 20 K. Thus, as one moves away from the ground-state a localized state appears that coexists with the delocalized one. The volume of the localized state increases as the MI transition is approached, whether as a function of temperature (see Fig. 5) or doping (see Fig. 4). The MI transition itself then occurs when the proportion of delocalized phase is too small to percolate. This view is summarized in Fig. 8. In this figure the first peaks in the experimental PDFs, reflecting the MnO<sub>6</sub> octahedral bond length distribution, are plotted at the positions on the phase diagram where they were measured. The dark shading signifies essentially fully charge localized material; the light shaded areas indicate a coexistence of charge localized and delocalized phases and the white area is the fully charge delocalized region. The positions of the MI transitions are taken from standard phase diagrams of this system.
## V Conclusions
Using the PDF analysis of high energy x-ray diffraction we can distinguish the charge localized and charge delocalized states of La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> ($`x=0.12`$, 0.25, 0.33). We characterize the nature of the polaronic distortion around Mn<sup>4+</sup> as being an isotropic octahedron with a Mn-O bond length of 1.88–1.9 Å. The FM phase is only homogeneous at low temperature and high doping above $`x=0.3`$. As temperature is raised or doping lowered towards the MI transition the FM state becomes inhomogeneous with a coexistence of localized JT phase and delocalized Zener phase.
###### Acknowledgements.
We would like to thank J. D. Thompson and M. F. Hundley for help in characterizing the samples, Matthias Gutmann for help with data collection and E. Božin for a critical reading of the manuscript. We would like to acknowledge stimulating discussions with P. G. Radaelli. This work was supported by the NSF through grant DMR-9700966 and by the Alfred P. Sloan Foundation. CHESS is funded by NSF through grant DMR97-13424.
# Transitional Lu and Spherical Ta Ground-State Proton Emitters in the Relativistic Hartree-Bogoliubov model
## Abstract
Properties of transitional Lu and spherical Ta ground-state proton emitters are calculated with the Relativistic Hartree Bogoliubov (RHB) model. The NL3 effective interaction is used in the mean-field Lagrangian, and pairing correlations are described by the pairing part of the finite range Gogny interaction D1S. Proton separation energies, ground-state quadrupole deformations, single-particle orbitals occupied by the odd valence proton, and the corresponding spectroscopic factors are compared with recent experimental data, and with results of the macroscopic-microscopic mass model.
PACS numbers: 21.60.Jz, 21.10.Dr, 23.50.+z, 27.60+j
The structure and decay modes of nuclei beyond the proton drip-line represent one of the most active areas of experimental and theoretical studies of exotic nuclei with extreme isospin values. In the last few years many new data on ground-state and isomeric proton radioactivity have been reported. In particular, detailed studies of odd-Z ground-state proton emitters in the regions $`51\le Z\le 55`$ and $`69\le Z\le 83`$ have shown that the systematics of spectroscopic factors is consistent with half-lives calculated in the spherical WKB or distorted-wave Born (DWBA) approximations. More recent data indicate that the missing region of light rare-earth nuclei contains strongly deformed systems at the drip-lines.
In the theoretical description of ground-state and isomeric proton radioactivity, two essentially complementary approaches have been reported. One possibility is to start from a spherical or deformed phenomenological single-particle potential, a Woods-Saxon potential for instance, and to adjust the parameters of the potential well in order to reproduce the experimental one-proton separation energy. The width of the single-particle resonance is then determined by the probability of tunneling through the Coulomb and centrifugal barriers. Since the probability strongly depends on the valence proton energy and on its angular momentum, the calculated half-lives provide direct information about the spherical or deformed orbital occupied by the odd proton. For a spherical proton emitter it is relatively simple to calculate half-lives in the WKB or DWBA approximations . On the other hand, it is much more difficult to quantitatively describe the process of three-dimensional quantum mechanical tunneling for deformed proton emitters. Modern reliable models for calculating proton emission rates from deformed nuclei have been developed only recently . A shortcoming of this approach is that it does not predict proton separation energies, i.e. the models do not predict which nuclei are likely to be proton emitters. In fact, if they are used to calculate decay rates for proton emission from excited states, the depth of the central potential has to be adjusted for each proton orbital separately. In addition, the models of Refs. do not provide any information about the spectroscopic factors of the proton orbitals. Instead, experimental spectroscopic factors are defined as ratios of calculated and measured half-lives, and the deviation from unity is attributed to nuclear structure effects.
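To make this strong energy and angular-momentum dependence concrete, the toy sketch below (our own illustration, not the method of the models discussed above) evaluates a one-dimensional WKB estimate of the proton width through a schematic Coulomb-plus-centrifugal barrier; the daughter charge, the radius parametrisation and the assault-frequency prefactor are all illustrative assumptions.

```python
# Toy WKB sketch (ours, not the method of the models cited above): Gamow
# estimate of a proton width through the Coulomb + centrifugal barrier of a
# schematic rare-earth daughter (Z = 70, A = 150).  All numbers illustrative.
import numpy as np

hbarc = 197.327                  # MeV fm
hbar = 6.582e-22                 # MeV s
alpha = 1.0 / 137.036
mu = 938.272 * 150.0 / 151.0     # proton-daughter reduced mass, MeV
Z = 70.0                         # daughter charge
R = 1.21 * 151.0 ** (1.0 / 3.0)  # schematic barrier entry radius, fm

def half_life(Qp, l):
    """Crude WKB half-life (s) for proton energy Qp (MeV), orbital momentum l."""
    r = np.linspace(R, 2000.0, 200000)
    V = Z * alpha * hbarc / r + l * (l + 1) * hbarc**2 / (2.0 * mu * r**2)
    k = np.sqrt(2.0 * mu * np.clip(V - Qp, 0.0, None)) / hbarc     # fm^-1
    action = np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(r))           # dimensionless
    width = hbarc * np.sqrt(2.0 * Qp / mu) / (2.0 * R) * np.exp(-2.0 * action)
    return np.log(2.0) * hbar / width

for Qp in (0.8, 1.2, 1.7):       # roughly the observability window noted later
    print(f"Q_p = {Qp} MeV, l = 5: T_1/2 ~ {half_life(Qp, 5):.1e} s")
```

The half-lives spread over many orders of magnitude across this small energy window, which is why proton emission is observable only in a narrow band of separation energies.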
In Refs. we have used the relativistic Hartree Bogoliubov (RHB) theory to calculate properties of proton-rich spherical even-even nuclei with 14$``$Z$``$28, and to describe odd-Z deformed ground-state proton emitters in the region $`53\le Z\le 69`$. RHB presents a relativistic extension of the Hartree-Fock-Bogoliubov theory, and it provides a unified framework for the description of relativistic mean-field and pairing correlations. Such a unified and self-consistent formulation is especially important in applications to drip-line nuclei. The RHB framework has been used to study the location of the proton drip-line, the ground-state quadrupole deformations and one-proton separation energies at and beyond the drip line, the deformed single particle orbitals occupied by the odd valence proton, and the corresponding spectroscopic factors. The results of fully self-consistent calculations have been compared with experimental data on ground-state proton emitters. However, since it is very difficult to use the self-consistent ground-state wave functions in the calculation of proton emission rates, one could say that the RHB model provides informations which are complementary to those obtained with the models of Refs. . It should be noted that in the relativistic framework the strength and the shape of the spin-orbit term are determined selfconsistently. This is essential for a correct description of spin-orbit splittings in regions of nuclei far from stability, where the extrapolation of effective strength parameters becomes questionable. The motivation for the present work are the very recent data on proton emission from the closed neutron shell nucleus <sup>155</sup>Ta , and the proposed experiment to search for direct proton emission from <sup>149</sup>Lu . The analysis of ground-state proton radioactivity in the Lu and Ta isotopes completes our study of deformed and transitional proton emitters in the region $`53\le Z\le 73`$.
A very detailed description of the relativistic Hartree-Bogoliubov theory can be found, for instance, in Ref. . In the following we only outline the essential features of the model that will be used to describe nuclei at the proton drip-line. The ground state of a nucleus is represented by the Slater determinant of independent single-quasiparticle states, which are obtained as solutions of the relativistic Hartree-Bogoliubov equations
$`\left(\begin{array}{cc}\widehat{h}_D-m-\lambda & \widehat{\mathrm{\Delta }}\\ -\widehat{\mathrm{\Delta }}^{*}& -\widehat{h}_D+m+\lambda \end{array}\right)\left(\begin{array}{c}U_k(𝐫)\\ V_k(𝐫)\end{array}\right)=E_k\left(\begin{array}{c}U_k(𝐫)\\ V_k(𝐫)\end{array}\right).`$ (1)
The column vectors denote the quasi-particle spinors and $`E_k`$ are the quasi-particle energies. In the Hartree approximation for the self-consistent mean field, the single-nucleon Dirac Hamiltonian reads
$$\widehat{h}_D=-i\mathbf{\alpha }\mathbf{\nabla }+\beta (m+g_\sigma \sigma (𝐫))+g_\omega \omega ^0(𝐫)+g_\rho \tau _3\rho ^0(𝐫)+e\frac{(1-\tau _3)}{2}A^0(𝐫).$$
(2)
It describes the motion of independent Dirac nucleons in the mean-field potentials: the isoscalar scalar $`\sigma `$-meson potential, the isoscalar vector $`\omega `$-meson, and the isovector vector $`\rho `$-meson potential. The photon field $`A`$ accounts for the electromagnetic interaction. The meson potentials are determined self-consistently by the solutions of the corresponding Klein-Gordon equations. The source terms for these equations are calculated in the no-sea approximation. Because of charge conservation only the third component of the isovector $`\rho `$-meson contributes. For an even-even system, due to time reversal invariance the spatial vector components $`𝝎\mathbf{,}𝝆_\mathrm{𝟑}`$ and $`𝐀`$ of the vector meson fields vanish. In nuclei with odd numbers of protons or neutrons time reversal symmetry is broken. The odd particle induces polarization currents and the time-odd components in the meson fields. These components play an essential role in the description of magnetic moments and of moments of inertia in rotating nuclei. However, their effect on deformations and binding energies is very small and can be neglected to a good approximation. As in our previous studies of nuclei at the proton drip-lines, we choose the NL3 set of meson masses and meson-nucleon coupling constants for the effective interaction in the particle-hole channel: $`m=939`$ MeV, $`m_\sigma =508.194`$ MeV, $`m_\omega =782.501`$ MeV, $`m_\rho =763.0`$ MeV, $`g_\sigma =10.217`$, $`g_2=-10.431`$ fm<sup>-1</sup>, $`g_3=-28.885`$, $`g_\omega =12.868`$ and $`g_\rho =4.474`$.
The pairing field $`\widehat{\mathrm{\Delta }}`$ is defined as
$$\mathrm{\Delta }_{ab}(𝐫,𝐫^{\prime })=\frac{1}{2}\sum _{c,d}V_{abcd}(𝐫,𝐫^{\prime })\kappa _{cd}(𝐫,𝐫^{\prime }),$$
(3)
where $`V_{abcd}(𝐫,𝐫^{\prime })`$ are the matrix elements of a two-body pairing interaction, and $`\kappa _{cd}(𝐫,𝐫^{\prime })`$ is the pairing tensor. The pairing part of the phenomenological Gogny force
$$V^{pp}(1,2)=\sum _{i=1,2}e^{-((𝐫_1-𝐫_2)/\mu _i)^2}(W_i+B_iP^\sigma -H_iP^\tau -M_iP^\sigma P^\tau ),$$
(4)
with the set D1S for the parameters $`\mu _i`$, $`W_i`$, $`B_i`$, $`H_i`$ and $`M_i`$ $`(i=1,2)`$, is used to describe pairing correlations.
The RHB single-quasiparticle equations (1) are solved self-consistently. The iteration procedure is performed in the quasi-particle basis. The chemical potential $`\lambda `$ has to be determined by the particle number subsidiary condition in order that the expectation value of the particle number operator in the ground state equals the number of nucleons. A simple blocking prescription is used in the calculation of odd-proton and/or odd-neutron systems. The blocking calculations are performed without breaking the time-reversal symmetry. The resulting eigenspectrum is transformed into the canonical basis of single-particle states, in which the RHB ground-state takes the BCS form. The transformation determines the energies and occupation probabilities of the canonical states.
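The logic of this iteration can be sketched in a few lines of code. The following is a schematic fixed-point loop with linear mixing of the field between iterations, a standard device to stabilize mean-field convergence; the update function is a toy stand-in, not the actual RHB equations.

```python
import numpy as np

def update_field(phi):
    # Toy stand-in for one self-consistency cycle: in the real problem this
    # would solve Eq. (1) in the current fields, rebuild the densities, and
    # recompute the meson potentials from the Klein-Gordon equations.
    return np.tanh(1.5 * phi) + 0.1

def solve_scf(phi0=0.5, mixing=0.3, tol=1e-10, max_iter=200):
    """Fixed-point iteration with linear mixing: phi <- (1-a)*phi + a*F(phi)."""
    phi = phi0
    for it in range(1, max_iter + 1):
        phi_new = update_field(phi)
        if abs(phi_new - phi) < tol:
            return phi_new, it
        phi = (1.0 - mixing) * phi + mixing * phi_new  # damped update
    raise RuntimeError("self-consistent loop did not converge")

field, n_iter = solve_scf()
print(f"converged field = {field:.8f} after {n_iter} iterations")
```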
The one-proton separation energies
$$S_p(Z,N)=B(Z,N)-B(Z-1,N)$$
(5)
for the Lu and Ta isotopes are displayed in Fig. 1, as a function of the number of neutrons. The predicted drip-line nuclei are <sup>154</sup>Lu and <sup>159</sup>Ta. In the process of proton emission the valence particle tunnels through the Coulomb and centrifugal barriers, and the decay probability depends strongly on the energy of the proton and on its angular momentum. In rare-earth nuclei the decay of the ground-state by direct proton emission competes with $`\beta ^+`$ decay; for heavy nuclei fission or $`\alpha `$ decay can also be favored. In general, ground-state proton emission is not observed immediately beyond the drip-line. For small values of the proton separation energies, the width is dominated by the $`\beta ^+`$ decay. On the other hand, large separation energies result in extremely short proton-emission half-lives, which are difficult to observe experimentally. For a typical rare-earth nucleus the separation energy window in which ground-state proton decay can be directly observed is about 0.8 – 1.7 MeV . In Fig. 1 we compare the calculated separation energies with experimental transition energies for ground-state proton emission in <sup>150</sup>Lu, <sup>151</sup>Lu , <sup>155</sup>Ta , <sup>156</sup>Ta , and <sup>157</sup>Ta . In all five cases excellent agreement is observed between model predictions and experimental data. In addition to <sup>151</sup>Lu, which was the first ground-state proton emitter to be discovered , and <sup>150</sup>Lu, the self-consistent RHB calculation predicts ground-state proton decay in <sup>149</sup>Lu. The calculated one-proton separation energy of $`-1.77`$ MeV corresponds to a half-life of a few $`\mu `$s, if one assumes that the nucleus is spherical. Direct proton emission with a half-life of the order of a few $`\mu `$s is just above the lower limit of observation of current experimental facilities. An experiment to search for direct proton emission from <sup>149</sup>Lu has been proposed recently . For the Lu ground-state proton emitters, in Table I the results of the RHB model calculation are compared with the predictions of the finite-range droplet model (FRDM): the projection of the odd-proton angular momentum on the symmetry axis and the parity of the odd-proton state $`\mathrm{\Omega }_p^\pi `$ , the one-proton separation energy , and the ground-state quadrupole deformation . We have also included the RHB spectroscopic factors, and compared the separation energies with the experimental transition energies in <sup>150</sup>Lu and <sup>151</sup>Lu. Both theoretical models predict oblate shapes for the Lu proton emitters, and similar values for the ground-state quadrupole deformations. On the other hand, while the FRDM assigns spin and parity $`5/2^-`$ to the deformed single-particle orbitals occupied by the odd valence proton in all three proton emitters, the RHB model predicts the $`7/2^-[523]`$ Nilsson orbital to be occupied by the odd proton. We also notice that the RHB separation energies are much closer to the experimental values. The spectroscopic factors of the $`7/2^-[523]`$ orbital are displayed in the sixth column of Table I. The spectroscopic factor of the deformed odd-proton orbital $`k`$ is defined as the probability that this state is found empty in the daughter nucleus with an even number of protons.
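As a trivial numerical illustration of Eq. (5), one can tabulate binding energies and form the differences; the values below are invented placeholders, not the RHB/NL3 results.

```python
# Sketch of Eq. (5): S_p(Z,N) = B(Z,N) - B(Z-1,N).
# Binding energies are hypothetical placeholders for illustration only.
binding_energy_mev = {
    (71, 78): 1181.3,  # hypothetical B(Z, N) of a Lu isotope
    (70, 78): 1183.1,  # hypothetical B(Z-1, N) of the Yb daughter
}

def one_proton_separation(z, n, table):
    """Negative S_p means the last proton is unbound (proton emitter)."""
    return table[(z, n)] - table[(z - 1, n)]

sp = one_proton_separation(71, 78, binding_energy_mev)
print(f"S_p = {sp:+.2f} MeV -> {'proton unbound' if sp < 0 else 'bound'}")
```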
In the detailed analysis of odd-Z proton emitters $`(53\le Z\le 69)`$ it has been shown that, while the proton-rich isotopes of La, Pr, Pm, Eu and Tb are all strongly prolate deformed ($`\beta _2\approx 0.30`$–$`0.35`$), Ho and Tm isotopes at the proton drip-line display a transition from prolate to oblate shapes. Spherical shapes are expected as the nuclei with unbound protons approach the $`N=82`$ neutron shell. The Lu proton emitters are found in the transitional region between oblate and spherical shapes. This is illustrated in Fig. 2, where we plot the binding energy curve for <sup>151</sup>Lu as a function of the quadrupole deformation parameter. The binding energies result from self-consistent RHB/NL3 calculations performed by imposing a quadratic constraint on the quadrupole moment. A very shallow minimum is found at $`\beta \approx -0.15`$, but otherwise the potential is rather flat, with a shoulder at $`\beta =0`$. In Fig. 3 we compare the ground-state quadrupole deformations of the proton-rich Lu isotopes with those of Ho and Tm . For $`N\le 80`$ all three chains of isotopes display oblate deformations; starting with $`N=81`$ a sharp transition to the spherical shape is observed.
The proton-rich Ta isotopes are spherical. In Fig. 1 we compare the calculated one-proton separation energies with experimental transition energies for <sup>155</sup>Ta , <sup>156</sup>Ta , and <sup>157</sup>Ta . The predictions for the spherical orbitals occupied by the odd proton and the corresponding spectroscopic factors are displayed in Table II. Results of the FRDM calculation have also been included in the comparison. As in the case of the Lu ground-state proton emitters, excellent agreement between the RHB separation energies and experimental data on transition energies for proton emission is observed. In particular, our calculation reproduces the very recent data on proton emission from the closed neutron shell nucleus <sup>155</sup>Ta . The significant decrease in proton binding for <sup>155</sup>Ta, as compared to <sup>157,156</sup>Ta, has been associated with the $`N=82`$ shell closure. In comparison, the FRDM results are found to be in rather poor agreement with the experimental data. Except for <sup>157</sup>Ta, the spherical orbitals predicted to be occupied by the odd proton agree with the experimental assignments, and the theoretical spectroscopic factor of the $`h_{11/2}`$ orbital in <sup>155</sup>Ta is very close to the experimental value. For <sup>157</sup>Ta the RHB model predicts ground-state proton emission from the $`h_{11/2}`$ orbital. The experimental assignment for the ground-state configuration is $`s_{1/2}`$, but an $`\alpha `$-decaying state has been identified in <sup>157</sup>Ta at an excitation energy of only 22(5) keV and assigned to an $`h_{11/2}`$ isomer . We have also calculated the one-proton separation energy for <sup>156</sup>Ta<sup>m</sup>: $`S_p=-1.250`$ MeV, the orbital is $`h_{11/2}`$ and the spectroscopic factor is 0.79. This is to be compared with the experimental transition energy $`E_p=1.103(12)`$ MeV , assigned to the $`h_{11/2}`$ orbital with the experimental spectroscopic factor 0.92(4) .
In conclusion, the relativistic Hartree-Bogoliubov model has been applied to the description of ground-state properties of transitional Lu and spherical Ta proton emitters. The NL3 effective interaction has been used for the mean-field Lagrangian, and pairing correlations have been described by the pairing part of the finite range Gogny interaction D1S. We would like to emphasize that this particular combination of effective forces has been used in most of our recent applications of the RHB model, not only for spherical and deformed $`\beta `$-stable nuclei, but also for nuclear systems with large isospin values on both sides of the valley of $`\beta `$-stability. The model parameters therefore have not been adjusted to the specific properties of the nuclei studied in this work, or to the properties of the deformed proton emitters discussed in Refs. . The self-consistent calculation reproduces in detail the observed transition energies for ground-state proton emission in <sup>150</sup>Lu, <sup>151</sup>Lu , <sup>155</sup>Ta , <sup>156</sup>Ta , and <sup>157</sup>Ta , as well as the assignments for the orbitals occupied by the valence odd proton and the corresponding spectroscopic factors. The model also predicts a one-proton separation energy of $`-1.77`$ MeV for the possible proton emitter <sup>149</sup>Lu. Oblate ground-state deformations are predicted for all Lu proton emitters, while spherical shapes are calculated for the Ta isotopes at and beyond the proton drip-line. With the excellent agreement observed between the RHB results and the very recent data on proton emission from the closed neutron shell nucleus <sup>155</sup>Ta , we complete our study of ground-state properties of deformed and transitional proton emitters .
# A Global Study of Photon-Induced Jet Production
## 1 INTRODUCTION
A systematic examination of the manner in which data is described by Monte Carlo models can give valuable insight into the underlying physics involved. However, many such comparisons are to a single result, limiting the extent to which physics conclusions can be drawn and raising the possibility that agreement is achieved in one cross section at the expense of another. The aim of this study is to tune, for the first time, general purpose Monte Carlo models to the existing and expanding set of jet photoproduction data. To that end, we have compared model predictions to published inclusive jet, di-jet and 3-jet data from the ZEUS and H1 experiments at HERA, the OPAL experiment at LEP, and the TOPAZ and AMY experiments at TRISTAN. The inclusion of both $`\gamma p`$ and $`\gamma \gamma `$ data (see figure 1) is particularly significant for constraining the tuning.
Our tuning currently focuses on the role of the so-called “underlying event” in hadronic jet production and the extent to which this can be described by perturbative QCD inspired models. We constrain the multiparton interaction (minijet) models contained within the Monte Carlos described below, and use these tunings to estimate backgrounds at future colliders.
Such a tuning can also lead to a number of practical benefits. The method employed allows us to check the consistency of the data contained in the various publications, which ranges across different colliders, experiments, years, energies, kinematic regions etc. A better description of the data by the models can be expected to lead to a reduction in the systematic errors due to detector corrections in future measurements.
## 2 THE MODELS
The models tuned were HERWIG 5.9 , interfaced to the JIMMY eikonal model for multiparton interactions, and PYTHIA 6.125 , which contains a new (since version 5.7) mode aiming to simulate the virtuality of the bremsstrahlung photons . For HERWIG, a previous tuning to DIS data (CLMAX=5.5, PSPLT=0.65) was used. In addition to the default mode, preliminary investigations were carried out on a modification to HERWIG, whereby the intrinsic transverse momentum distribution of the partons in the photon takes the form of a power law rather than a gaussian. This modified version, referred to as HERWIG+$`k_t`$, was motivated by studies at HERA and at LEP which indicate that it can lead to a better description of the data.
The main free parameters investigated in the tuning are the description of the “underlying event” and the choice of photon structure. Multiple hard scatters, illustrated in figure 2, can be enabled or not, and it is expected that some of the distributions studied will exhibit great sensitivity to this phenomenon. The other facet of the underlying event is the minimum transverse momentum of the hard scatter(s), hereafter referred to as $`\widehat{p}_T^{\mathrm{min}}`$. Sensitivity to this parameter is greatly increased when multiple interactions are turned on. As one goes to lower $`\widehat{p}_T`$, higher parton densities are being probed, leading to an enhanced probability that more than one hard scatter will occur. It should also be noted at this point that, whilst HERWIG employs a cluster fragmentation scheme, PYTHIA uses string hadronization.
A number of different parameterisations of the photon structure function were investigated. Only leading order sets were used since the matrix elements in the Monte Carlo models are leading order. The sets used were GRV 92 , WHIT2 , SaS1d and SaS2d , and LAC1 . Throughout this work the GRV 94 LO proton pdf was used in the simulation of HERA data.
Finally, the overall normalisation of the Monte Carlo was treated as a tunable parameter. This is justified within a range of around a factor of two or less because of the uncertainty in the scale of $`\alpha _s`$.
## 3 FITTING METHOD
This work was carried out within the framework of a package developed by the HERA community to permit easy comparison of data to Monte Carlo . Extensions were made to facilitate the inclusion of the $`\gamma \gamma `$ data.
The procedure for finding the best fit, for a given set of parameters, of the Monte Carlo to the data was as follows. An overall $`\chi ^2`$ per degree of freedom across all the distributions (some 50 in all) was defined as:
$$\chi ^2=\frac{1}{n-1}\sum _{i=1}^{n}\frac{(\text{MC}(i)-\text{Data}(i))^2}{\sigma _{\text{MC}(i)}^2+\sigma _{\text{Data}(i)}^2}$$
(1)
where Data($`i`$) and MC($`i`$) are the values of the distribution in a given bin $`i`$ for the data and Monte Carlo respectively. The sum runs over the total number of bins $`n`$ in all the distributions.
The aim was to minimise this $`\chi ^2`$; a good fit is one where it is approximately one or less. The normalisation of the Monte Carlo was varied to find the best fit (the sum in equation (1) is divided by $`n-1`$ rather than $`n`$ to take into account the resulting loss of a degree of freedom). For the HERA distributions, each data plot was allowed to float within the quoted correlated error (typically 15–20%) and the number of plots was then subtracted from $`n`$ as well. Otherwise, all systematic and statistical errors were treated as uncorrelated and added in quadrature.
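A minimal numerical transcription of this fitting procedure might look as follows; the “data” and “MC” histograms are invented for illustration, and the overall normalisation is found by a simple scan.

```python
import numpy as np

def chi2_per_dof(mc, data, sig_mc, sig_data):
    """Eq. (1): chi^2 over all bins, divided by n-1 for the floated normalisation."""
    n = len(data)
    chi2 = np.sum((mc - data) ** 2 / (sig_mc ** 2 + sig_data ** 2))
    return chi2 / (n - 1)

# Toy histograms (invented numbers, for illustration only).
data = np.array([10.0, 8.0, 5.5, 3.0, 1.5])
sig_data = 0.10 * data
mc = np.array([8.0, 6.6, 4.5, 2.6, 1.2])
sig_mc = 0.05 * mc

# Scan the overall Monte Carlo normalisation, as done in the tuning.
norms = np.linspace(0.8, 1.6, 81)
chi2s = [chi2_per_dof(k * mc, data, k * sig_mc, sig_data) for k in norms]
best = norms[int(np.argmin(chi2s))]
print(f"best normalisation ~ {best:.2f}, chi2/dof = {min(chi2s):.2f}")
```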
The distributions from TRISTAN were not included in the fitting procedure. There appears to be a discrepancy between the TOPAZ and AMY results. Indeed, this has been observed in a previous study .
## 4 RESULTS AND DISCUSSION
The results of the fits using HERWIG (not the $`k_t`$ version) are shown in table 1, and those for PYTHIA in table 2. Full details of all the fits, including the individual plots, can be found at http://www.hep.ucl.ac.uk/~jmb/HZTOOL/. Figure 3 shows a summary of the results in graphical form. In most cases there is a clear, favoured value of $`\widehat{p}_T^{\mathrm{min}}`$, and overall - at least where reasonable fits are found - there appears to be a favoured range of 1.6–2.2 GeV. Note that although we do not constrain the $`\widehat{p}_T^{\mathrm{min}}`$ to have the same value for both LEP and HERA fits, when we combine to give an overall $`\chi ^2`$ we have taken those runs with the same $`\widehat{p}_T^{\mathrm{min}}`$. This is motivated by the idea that, as one moves toward low transverse momentum, two effects can render perturbative QCD inapplicable. Not only does $`\alpha _s`$ become large, but $`x`$ becomes small, potentially leading to large $`\mathrm{ln}(x)`$ corrections. Our conjecture is that if $`\alpha _s`$ effects are independent of the centre of mass energy, then $`\widehat{p}_T^{\mathrm{min}}`$ can be considered to be a universal parameter. The low $`x`$ effects are modelled by multiple interactions, and so the minijet model is “universal”, but different effects are seen depending on the cms energy and beam particle type. All the results shown have multiple interactions enabled. The agreement without multiple interactions is very poor. A particularly striking example of this is shown in figure 4, and it is low $`E_T^{Jet}`$ measurements such as this which are especially valuable for constraining multiple interaction models.
The scaling of the overall normalisation generally agrees between LEP and HERA for the good fits, and is in the range 1.2–1.3. In addition, there is general consistency between different datasets for both LEP and HERA. With regard to pdf sets, WHIT and GRV lead to better fits for HERWIG, whereas for PYTHIA there are no good fits to LEP data for sets other than the SaS sets. In no circumstances does it appear possible to obtain anything close to a good description of the data with LAC1.
The DIS tuned parameters for HERWIG do help, if only marginally, but there is no firm evidence yet that the HERWIG+$`k_t`$ modification improves matters.
## 5 LINEAR COLLIDER
The results of the tuning can be used to estimate photon-induced minijet backgrounds at future colliders. The best fits found using HERWIG have been extrapolated to linear collider energies ($`\sqrt{s}=500`$ GeV), with the TESLA beamstrahlung and bremsstrahlung spectra included . The parameters thus used were the WHIT 2 photon pdf and a $`\widehat{p}_T^{\mathrm{min}}`$ of 2.0 GeV with multiple interactions enabled. The overall normalisation was scaled by a factor of 1.4. The estimate for the minijet transverse momentum cross-section is shown in figure 5.
The results are consistent at the 50% level with independent studies using PYTHIA and the DELPHI Monte Carlo . They indicate that the backgrounds will not be a concern for detector occupancy and dosage, but that minijets potentially present a very significant background to physics owing to the high $`\widehat{p}_T`$ tail.
## 6 CONCLUSIONS
The general purpose Monte Carlo models HERWIG and PYTHIA have been tuned to jet photoproduction data from LEP and HERA. An optimal range of hard scales appears to emerge, and some level of multiparton interactions is found to be necessary in these models in order that an adequate description of the data be achieved. Favoured sets of parton distribution functions are established for each generator but currently no generator independent conclusion on photon structure is drawn. The HERWIG tuned parameters are used to estimate photon-induced backgrounds at the linear collider. This work is ongoing and up-to-date results can be found on our web page http://www.hep.ucl.ac.uk/~jmb/HZTOOL/.
# Charge Frustration Effects in Capacitively Coupled Two-Dimensional Josephson-Junction Arrays
## Abstract
We investigate the quantum phase transitions in two capacitively coupled two-dimensional Josephson-junction arrays with charge frustration. The system is mapped onto the $`S=1`$ and $`S=1/2`$ anisotropic Heisenberg antiferromagnets near the particle-hole symmetry line and near the maximal-frustration line, respectively, which are in turn argued to be effectively described by a single quantum phase model. Based on the resulting model, it is suggested that near the maximal frustration line the system may undergo a quantum phase transition from the charge-density wave to the super-solid phase, which displays both diagonal and off-diagonal long-range order.
preprint: SNUTP 99-020
In recent years, various types of cotunneling transport have been of great interest in ultrasmall tunnel junctions, which exhibit strong Coulomb blockade effect. In particular, cotunneling of the electron-hole pairs in two capacitively coupled one-dimensional (1D) arrays of small metallic junctions has been proposed theoretically and demonstrated experimentally, revealing the remarkable effects of the current mirror. More recently, in capacitively coupled 1D or two-dimensional (2D) Josephson-junction arrays (JJAs), the cotunneling of particle-hole pairs (with the particle and hole standing for the excess and deficit Cooper pair, respectively) has been proposed even to drive the quantum phase transition from superconductor to insulator (SI) at zero temperature. Here the particle-hole symmetry of the system may be broken by, e.g., the gate voltage applied between the array and the substrate. The resulting charge frustration is expected to affect the phase transition of the system in a crucial way. For example, when the particle-hole symmetry is broken maximally, the transport is governed by the cotunneling of the particle-void pairs (with the void denoting the absence of an excess or deficit Cooper pair) and the different nature of the associated phase transition has been pointed out in one dimension. On the other hand, existing studies of coupled 2D arrays with charge frustration have concentrated upon the charge-vortex duality, without appreciable attention to the phase transitions.
In this paper, we extend the previous work on two capacitively coupled 2D arrays of ultrasmall Josephson junctions to investigate the charge-frustration effects on the quantum phase transitions. In a manner similar to that of Ref. , we map the system to the $`S=1`$ anisotropic Heisenberg antiferromagnet near the particle-hole symmetry lines and to the $`S=1/2`$ one near the maximal-frustration lines. It is then argued that the two spin models can in effect be incorporated into a single 2D quantum phase model with the effective self-capacitance given by the coupling capacitance of the original two-array system and the junction capacitance by the intra-array junction capacitance. The resulting model indicates that near the maximal frustration line the system may exhibit a quantum phase transition from the charge-density wave (CDW) to the super-solid (SS) phase. In the SS state, the system possesses both diagonal and off-diagonal long-range order (DLRO and ODLRO): Namely, both the density-correlation of charges and the phase-correlation of superconducting order parameters remain finite as the distance grows arbitrarily large.
The system of coupled 2D square arrays, shown schematically in Fig. 1, is described by the Hamiltonian
$`H`$ $`=`$ $`2e^2{\displaystyle \sum _{\ell ,\ell ^{\prime };𝐫,𝐫^{\prime }}}[n_{\ell }(𝐫)-n_g]\,\mathbb{C}_{\ell \ell ^{\prime }}^{-1}(𝐫,𝐫^{\prime })\,[n_{\ell ^{\prime }}(𝐫^{\prime })-n_g]`$ (2)
$`-E_J{\displaystyle \sum _{\ell ,𝐫,\mu }}\mathrm{cos}[\varphi _{\ell }(𝐫)-\varphi _{\ell }(𝐫+\widehat{𝐞}_\mu )]`$
$`\equiv `$ $`H_C+H_J,`$ (3)
where the number $`n_{\ell }(𝐫)`$ of the Cooper pairs and the phase $`\varphi _{\ell }(𝐫)`$ of the superconducting order parameter at site $`𝐫`$ on the $`\ell `$th array ($`\ell =1,2`$) are quantum-mechanically conjugate variables: $`[n_{\ell }(𝐫),\varphi _{\ell ^{\prime }}(𝐫^{\prime })]=i\delta _{\ell \ell ^{\prime }}\delta _{\mathrm{𝐫𝐫}^{\prime }}`$. The Josephson coupling between neighboring sites $`𝐫`$ and $`𝐫+\widehat{𝐞}_\mu `$ (with $`\widehat{𝐞}_\mu `$ being the unit vector in the direction $`\mu =x,y`$) in each array is characterized by the coupling energy $`E_J`$, whereas the external charge $`n_g\equiv C_0V_g/2e`$ induced on each island by the applied gate voltage $`V_g`$ breaks the particle-hole symmetry of the system, introducing charge frustration. The two arrays are coupled through the capacitance $`C_I`$ between two grains at the same position $`𝐫`$ on the two arrays. (Note the difference from the Josephson coupled multi-layered system, where Cooper-pair tunneling between layers is allowed.) The capacitance matrix $`\mathbb{C}`$ characterizing the charging energy part $`H_C`$ of the Hamiltonian in Eq. (2) can be written in the block form:
$$\mathbb{C}_{\ell \ell ^{\prime }}(𝐫,𝐫^{\prime })\equiv C(𝐫,𝐫^{\prime })\left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right]+\delta _{𝐫,𝐫^{\prime }}C_I\left[\begin{array}{cc}1& -1\\ -1& 1\end{array}\right],$$
(4)
where $`C(𝐫,𝐫^{\prime })`$ is the usual intra-array capacitance matrix
$`C(𝐫,𝐫^{\prime })`$ $`\equiv `$ $`C_0\delta _{\mathrm{𝐫𝐫}^{\prime }}`$ (5)
$`+`$ $`C_1{\displaystyle \sum _\mu }\left[2\delta _{\mathrm{𝐫𝐫}^{\prime }}-\delta _{𝐫,𝐫^{\prime }+\widehat{𝐞}_\mu }-\delta _{𝐫,𝐫^{\prime }-\widehat{𝐞}_\mu }\right],`$ (6)
with $`C_0`$ and $`C_1`$ being the self- and junction capacitance, respectively. Although it is not essential in the subsequent discussion as long as the interaction range is finite, we assume for simplicity that $`C_1/C_0\ll 1`$, keeping only the on-site and the nearest-neighbor interactions between the charges. We also define the charging energy scales $`E_0\equiv e^2/2C_0`$, $`E_1\equiv e^2/2C_1`$, and $`E_I\equiv e^2/2C_I`$, associated with the corresponding capacitances.
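To fix orders of magnitude, these charging energies can be evaluated for representative capacitances. The femtofarad values used below are our own assumptions, chosen merely to respect the hierarchy $`C_1/C_0\ll 1`$ and $`C_I\gg C_0`$ discussed in the text.

```python
# Charging energies E = e^2/2C for assumed, representative capacitances.
E_CHARGE = 1.602176634e-19  # elementary charge, C
KB = 1.380649e-23           # Boltzmann constant, J/K

def charging_energy_kelvin(c_farad):
    return E_CHARGE**2 / (2.0 * c_farad) / KB

# Hypothetical values obeying C_1/C_0 << 1 and C_I >> C_0:
for label, c in [("C_0 = 1 fF", 1e-15),
                 ("C_1 = 0.05 fF", 5e-17),
                 ("C_I = 20 fF", 2e-14)]:
    print(f"{label:>14}: E = {charging_energy_kelvin(c):8.3f} K")
```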
In the regime of concern in this paper, $`C_I\gg C_0(C_1)`$, i.e., $`E_I\ll E_0(E_1)`$, the charging energy part of the Hamiltonian in Eq. (2) can be written conveniently as the sum $`H_C=H_C^++H_C^{-}`$ with each component defined to be
$`H_C^+`$ $`\equiv `$ $`U_0{\displaystyle \sum _𝐫}[n_+(𝐫)-2n_g]^2`$ (8)
$`+U_1{\displaystyle \sum _{𝐫,\mu }}[n_+(𝐫)-2n_g][n_+(𝐫+\widehat{𝐞}_\mu )-2n_g]`$
$`H_C^{-}`$ $`\equiv `$ $`V_0{\displaystyle \sum _𝐫}[n_{-}(𝐫)]^2+V_1{\displaystyle \sum _{𝐫,\mu }}n_{-}(𝐫)n_{-}(𝐫+\widehat{𝐞}_\mu ),`$ (9)
where we have defined the new charge variables $`n_\pm (𝐫)\equiv n_1(𝐫)\pm n_2(𝐫)`$ and the interaction strengths are given by $`U_0\equiv 2E_0`$, $`V_0\equiv E_I`$, $`U_1\equiv 4(C_1/C_0)E_0`$, and $`V_1\equiv (C_1/C_I)E_I`$. This representation of the charging energy part $`H_C`$ allows us to distinguish clearly the two interesting regions from each other: near the particle-hole symmetry line $`n_g=0`$ and near the maximal-frustration line $`n_g=1/2`$, as one can see from the energy spectra of $`H_C`$ displayed in Figs. 2 and 3 for the two regimes, respectively (recall that $`U_0\gg V_0`$). (Since the system is invariant under $`n_g\rightarrow n_g+1`$, we need to consider only the range $`0\le n_g<1`$.) As pointed out for two coupled 1D arrays in Ref. , remarkable properties of the spectrum of $`H_C`$ follow in each regime: Near the maximal-frustration line, the charge configurations which do not satisfy the condition $`n_+(𝐫)=1`$ (for all $`𝐫`$) have a huge excitation gap of the order of $`E_0`$. (Note that we are interested in the parameter regime $`E_I,E_J\ll E_0`$.) Furthermore, the ground states of $`H_C`$, separated from the excited states by a gap of the order of $`E_I`$, have two-fold degeneracy for each $`𝐫`$, corresponding to $`n_{-}(𝐫)=\pm 1`$. This degeneracy is lifted by the Josephson coupling term $`H_J`$ in Eq. (2) as $`E_J`$ is turned on. As a result, it is convenient in this case to work within the reduced Hilbert space $`\mathcal{H}_d`$, where $`n_+(𝐫)=1`$ and $`n_{-}(𝐫)=\pm 1`$ for each $`𝐫`$. Near the particle-hole symmetry line, on the other hand, the low-energy charge configuration should satisfy the condition $`n_+(𝐫)=0`$ for all $`𝐫`$. Unlike the former case, the ground state of $`H_C`$ is non-degenerate and forms a Mott insulator characterized by $`n_1(𝐫)=n_2(𝐫)=0`$ for all $`𝐫`$. As $`E_J`$ is turned on, the ground state of $`H_C`$ is mixed with the states with $`n_{-}(𝐫)=\pm 2`$. Accordingly, the relevant reduced Hilbert space is given by $`\mathcal{H}_s`$, where $`n_+(𝐫)=0`$ and $`n_{-}(𝐫)=0,\pm 2`$ for all $`𝐫`$.
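The level structure described above, and sketched in Figs. 2 and 3, is easy to reproduce by enumerating the on-site part of $`H_C`$; the sketch below keeps only the $`U_0`$ and $`V_0`$ terms and assumes an arbitrary hierarchy $`U_0/V_0=20`$.

```python
U0, V0 = 20.0, 1.0  # assumed hierarchy U0 >> V0, in arbitrary units

def on_site_spectrum(ng, nmax=2):
    """On-site charging energies U0*(n_plus - 2*ng)**2 + V0*n_minus**2."""
    levels = []
    for n1 in range(-nmax, nmax + 1):
        for n2 in range(-nmax, nmax + 1):
            e = U0 * (n1 + n2 - 2.0 * ng) ** 2 + V0 * (n1 - n2) ** 2
            levels.append((e, n1 + n2, n1 - n2))
    return sorted(levels)

for ng in (0.0, 0.5):
    print(f"n_g = {ng}: lowest (E, n_plus, n_minus) =",
          on_site_spectrum(ng)[:4])
```

At $`n_g=0`$ the lowest levels show the non-degenerate ground state and the particle-hole pair gap $`4V_0`$; at $`n_g=1/2`$ they show the two-fold degenerate doublet $`n_+=1`$, $`n_{-}=\pm 1`$.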
Accordingly, it is instructive to project the Hamiltonian in Eq. (2) onto $`\mathcal{H}_s`$ ($`\mathcal{H}_d`$) for $`n_g\lesssim 1/4`$ (for $`|n_g-1/2|\lesssim 1/4`$); this results in the effective Hamiltonian, up to second order in $`E_J/E_0`$,
$$H_{\mathrm{eff}}\equiv P\left[H+H_J\frac{1-P}{E-H_C}H_J\right]P,$$
(10)
where $`P`$ is the projection operator. Explicit implementation of the projection near the particle-hole symmetry line can be achieved by first noting the correspondence between the charge picture of the original model and the pseudo-spin ($`S=1`$) picture in the reduced Hilbert space $`\mathcal{H}_s`$:
$`S^z(𝐫)\equiv P{\displaystyle \frac{n_1(𝐫)-n_2(𝐫)}{2}}P`$ (11)
$`S^+(𝐫)\equiv \sqrt{2}Pe^{-i\varphi _1(𝐫)}(1-P)e^{+i\varphi _2(𝐫)}P`$ (12)
$`S^{-}(𝐫)\equiv \sqrt{2}Pe^{-i\varphi _2(𝐫)}(1-P)e^{+i\varphi _1(𝐫)}P.`$ (13)
In particular, the spin-flip operators $`S^+`$ and $`S^{-}`$ manifest the second-order cotunneling process of the particle-hole pairs via an intermediate virtual state, as depicted in Fig. 4(a), and mix the energy levels with unpaired particles or holes of energy $`U_0`$ and those with particle-hole pairs of energy $`4V_0`$ (see Fig. 2). It then follows that the effective Hamiltonian in Eq. (10) takes the form
$`H_{\mathrm{XXZ}}^{S=1}={\displaystyle \frac{1}{2}}\gamma _1J{\displaystyle \sum _𝐫}\left[S^z(𝐫)\right]^2`$ (14)
$`-{\displaystyle \frac{1}{4}}J{\displaystyle \sum _{𝐫,\mu }}\left\{S^+(𝐫)S^{-}(𝐫+\widehat{𝐞}_\mu )+S^{-}(𝐫)S^+(𝐫+\widehat{𝐞}_\mu )\right\},`$ (15)
which describes the spin-1 2D XXZ antiferromagnet. Here the exchange interaction and the anisotropy ratio are given by
$$J\equiv \frac{E_J^2}{4E_0}\quad \text{and}\quad \gamma _1\equiv \frac{1}{K^2},$$
(16)
respectively, with the dimensionless coupling constant $`K\equiv \sqrt{E_J^2/32E_IE_0}`$. Near the maximal-frustration line, on the other hand, the effective Hamiltonian reduces to that of a spin-1/2 2D XXZ antiferromagnet
$`H_{\mathrm{XXZ}}^{S=1/2}=\gamma _{\frac{1}{2}}J{\displaystyle \sum _{𝐫,\mu }}S^z(𝐫)S^z(𝐫+\widehat{𝐞}_\mu )`$ (17)
$`-{\displaystyle \frac{1}{2}}J{\displaystyle \sum _𝐫}{\displaystyle \sum _\mu }\left\{S^+(𝐫)S^{-}(𝐫+\widehat{𝐞}_\mu )+S^{-}(𝐫)S^+(𝐫+\widehat{𝐞}_\mu )\right\},`$ (18)
with the exchange interaction and the anisotropy ratio given by
$$J\equiv \frac{E_J^2}{4E_0}\quad \text{and}\quad \gamma _{\frac{1}{2}}\equiv \frac{C_1}{2C_I}\frac{1}{K^2},$$
(19)
respectively. In this case, the definitions of the pseudo-spin operators in terms of the phase and charge operators also differ slightly from those in Eq. (12):
$`S^z(𝐫)\equiv P{\displaystyle \frac{n_1(𝐫)-n_2(𝐫)}{2}}P`$ (20)
$`S^+(𝐫)\equiv Pe^{-i\varphi _1(𝐫)}(1-P)e^{+i\varphi _2(𝐫)}P`$ (21)
$`S^{-}(𝐫)\equiv Pe^{-i\varphi _2(𝐫)}(1-P)e^{+i\varphi _1(𝐫)}P.`$ (22)
Such spin-flip operators are associated with the cotunneling of the particle-void pairs as displayed in Fig. 4(b).
In two dimensions, unlike the 1D case, neither of the two (spin-1 and spin-1/2) XXZ antiferromagnets described by Eqs. (14) and (17) allows an exact solution. The simple mean-field theory based on the Ginzburg-Landau approach indicates that the spin-1 XXZ antiferromagnet may exhibit a zero-temperature phase transition from the XY-like phase to the spin-1 Ising-like phase at $`K\approx 1`$ or $`\gamma _1\approx 1`$. In the charge picture, the XY-like phase corresponds to the superconducting (SC) state displaying ODLRO while the spin-1 Ising-like phase characterized by $`S^z(𝐫)=0`$ describes the Mott insulator (MI) state with DLRO. On the other hand, mean-field-like approaches and numerical approaches to the spin-1/2 XXZ antiferromagnet suggest a zero-temperature phase transition from the XY-like phase to the spin-1/2 Ising-like phase with staggered magnetization at $`K\approx \sqrt{C_1/2C_I}`$ or $`\gamma _{\frac{1}{2}}\approx 1`$, corresponding to the SC state and the CDW state, respectively.
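Although neither 2D model admits an exact solution, the anisotropy-driven change of the ground state can already be probed by brute-force diagonalization on a small cluster. The sketch below builds a toy analogue of Eq. (17) on a 4-site spin-1/2 ring (a 1D toy, not the 2D lattice) and tracks the nearest-neighbor $`S^z`$ correlation, whose magnitude grows as $`\gamma `$ increases, signalling the Ising-like (CDW-like) regime.

```python
import numpy as np

SZ = np.array([[0.5, 0.0], [0.0, -0.5]])
SP = np.array([[0.0, 1.0], [0.0, 0.0]])  # S^+
SM = SP.T                                # S^-
ID = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site cluster."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else ID)
    return out

def xxz_ring(n, gamma, j=1.0):
    """Toy analogue of Eq. (17): gamma*J*sum SzSz - (J/2)*sum(S+S- + S-S+)."""
    h = np.zeros((2**n, 2**n))
    for i in range(n):
        k = (i + 1) % n
        h += gamma * j * site_op(SZ, i, n) @ site_op(SZ, k, n)
        h -= 0.5 * j * (site_op(SP, i, n) @ site_op(SM, k, n)
                        + site_op(SM, i, n) @ site_op(SP, k, n))
    return h

n = 4
for gamma in (0.2, 1.0, 5.0):
    w, v = np.linalg.eigh(xxz_ring(n, gamma))
    gs = v[:, 0]  # ground state
    czz = gs @ site_op(SZ, 0, n) @ site_op(SZ, 1, n) @ gs
    print(f"gamma = {gamma:4.1f}: E0 = {w[0]:+.4f}, <Sz_0 Sz_1> = {czz:+.4f}")
```

On such a small cluster there is of course no true phase transition; the crossover in the correlator merely illustrates the competition encoded in Eq. (17).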
Therefore, for the present, the projection of the Hamiltonian to get the effective spin model does not provide us with direct information about the phase transitions. Remarkably, however, the spin models, given by Eqs. (14) and (17) in the two regimes, can be obtained from a single 2D quantum phase model (QPM), via appropriate projections. This strongly indicates that both regimes can be described by the Hamiltonian for the QPM:
$`H_{\mathrm{QPM}}`$ $`=`$ $`2e^2{\displaystyle \sum _{𝐫,𝐫^{\prime }}}[n(𝐫)-\stackrel{~}{n}_g]C^{-1}(𝐫,𝐫^{\prime })[n(𝐫^{\prime })-\stackrel{~}{n}_g]`$ (24)
$`-{\displaystyle \frac{E_J^2}{4E_0}}{\displaystyle \sum _{𝐫,\mu }}\mathrm{cos}[\varphi (𝐫)-\varphi (𝐫+\widehat{𝐞}_\mu )],`$
where the effective Josephson-coupling energy $`E_J^2/4E_0`$ is much reduced compared with the original intra-array value $`E_J`$, and the effective capacitance matrix reads
$`C(𝐫,𝐫^{\prime })`$ $`\equiv `$ $`C_I\delta _{\mathrm{𝐫𝐫}^{\prime }}`$ (25)
$`+`$ $`{\displaystyle \frac{C_1}{2}}{\displaystyle \sum _\mu }\left[2\delta _{\mathrm{𝐫𝐫}^{\prime }}-\delta _{𝐫,𝐫^{\prime }+\widehat{𝐞}_\mu }-\delta _{𝐫,𝐫^{\prime }-\widehat{𝐞}_\mu }\right].`$ (26)
Note that the self-capacitance is given by $`C_I`$ instead of the original value $`C_0`$, while the junction capacitance is given by $`C_1/2`$. The value of the charge frustration $`\stackrel{~}{n}_g`$ is related to that ($`n_g`$) of the original model given by Eq. (2) in the following way: At the symmetry line ($`n_g=0`$) and the maximal-frustration line ($`n_g=1/2`$) of the original model, we have $`\stackrel{~}{n}_g=n_g`$. Near those lines, however, the value of $`\stackrel{~}{n}_g`$ is rather insensitive to that of $`n_g`$: Namely, $`\stackrel{~}{n}_g`$ remains close to zero and to $`1/2`$ in rather large ranges around $`n_g=0`$ and $`1/2`$, respectively, changing its value sharply near $`n_g\approx 1/4`$. Accordingly, the QPM in Eq. (24) is either near the symmetry line ($`\stackrel{~}{n}_g\approx 0`$) or near the maximal-frustration line ($`\stackrel{~}{n}_g\approx 1/2`$) except for the more or less narrow range around $`n_g=1/4`$.
The reduction of the above QPM to the spin-1 and the spin-1/2 XXZ models via appropriate projections can be recognized as follows: In the case $`\stackrel{~}{n}_g\approx 0`$, the charging energy reaches its minimum at $`n(𝐫)=0`$. This ground state becomes mixed with the states $`n(𝐫)=\pm 1`$ as the Josephson coupling is turned on. On the other hand, for $`\stackrel{~}{n}_g\approx 1/2`$, the minimum of the charging energy arises at $`n(𝐫)=0,1`$, yielding two-fold degenerate ground states. These situations are essentially the same as those of the original model with charge frustration $`n_g`$. We thus project the QPM onto the spaces $`\stackrel{~}{\mathcal{H}}_s\equiv \{n(𝐫)=0,\pm 1\}`$ and $`\stackrel{~}{\mathcal{H}}_d\equiv \{n(𝐫)=0,1\}`$ with the pseudo-spin operators redefined as
$`S^z(𝐫)`$ $`\equiv `$ $`Pn(𝐫)P`$ (27)
$`S^+(𝐫)`$ $`\equiv `$ $`\sqrt{2}Pe^{-i\varphi (𝐫)}P`$ (28)
$`S^{-}(𝐫)`$ $`\equiv `$ $`\sqrt{2}Pe^{+i\varphi (𝐫)}P`$ (29)
in the space $`\stackrel{~}{\mathcal{H}}_s`$ and
$`S^z(𝐫)`$ $`\equiv `$ $`Pn(𝐫)P-1/2`$ (30)
$`S^+(𝐫)`$ $`\equiv `$ $`Pe^{-i\varphi (𝐫)}P`$ (31)
$`S^{-}(𝐫)`$ $`\equiv `$ $`Pe^{+i\varphi (𝐫)}P`$ (32)
in the space $`\stackrel{~}{\mathcal{H}}_d`$; these projections reproduce, in the zeroth order of $`E_J/E_I`$, both the spin-1 and the spin-1/2 XXZ models in Eqs. (14) and (17) for $`n_g\lesssim 1/4`$ and $`|n_g-1/2|\lesssim 1/4`$, respectively.
As we proceed to higher orders, the projection of the single-layer QPM in Eq. (24) in general yields coefficients of the $`n`$th-order terms that are $`(E_0/E_I)^{n-1}`$ times larger than those in the projection of the original model in Eq. (2). In spite of such discrepancy in the numerical coefficients, the two projections (of the original model and of the single-layer QPM) onto their own spin models should bring about quite similar structures. For example, mixing of the energy levels in $`\mathcal{H}_s`$ with those satisfying $`n_{-}(𝐫)=\pm 4,\pm 6,\mathrm{}`$ (but still keeping $`n_+(𝐫)=0`$) always occurs via virtual states with energies of the order of $`E_0`$. Consequently, at least in the two regimes of concern here, it is not irrelevant to consider the single-layer QPM in Eq. (24) as an effective model for the original system. Quite naturally, the deviation of the QPM in Eq. (24) from the original model increases with $`E_J`$.
The 2D QPM has been studied extensively in recent years (see, e.g., Ref. ). Remarkably, for $`|n_g-1/2|\lesssim 1/4`$, it was suggested that there may exist an unusual SS phase with both the DLRO and ODLRO, i.e., the coexistence of the crystalline charge ordering together with superconductivity. The existence of the SS phase conflicts with the prediction of a direct transition from the CDW to the SC based on the spin-1/2 XXZ model in Eq. (17), but such conflict also appears when one simply truncates the effects of higher energy levels in the QPM. These arguments finally yield the schematic phase diagram shown in Fig. 5, where the thick solid lines represent the phase boundaries of the SI transitions, separating the SC from the MI (near the symmetry line) or from the CDW (near the maximal-frustration line depicted by the dashed-dotted line). Note that these boundaries (near the two lines) change rather gradually as $`n_g`$ is varied, which reflects that near the two lines the effective charge frustration $`\stackrel{~}{n}_g`$ in the QPM is insensitive to the original charge frustration $`n_g`$. The dashed lines in Fig. 5 represent the somewhat speculative boundaries discussed above; here the region occupied by the SS phase might be small because in the QPM the self-capacitance is much larger than the junction capacitance ($`C_I\gg C_1/2`$).
In conclusion, we have investigated the quantum phase transitions in two capacitively coupled two-dimensional Josephson-junction arrays with charge frustration. The system has been mapped into the $`S=1`$ and the $`S=1/2`$ anisotropic XXZ antiferromagnets near the particle-hole symmetry line and the maximal-frustration line, respectively. We have then argued that the two spin models in effect can be incorporated into a single quantum phase model. Based on the resulting model, it has been suggested that near the maximal frustration line the system may exhibit a quantum phase transition from the charge-density wave to the super-solid phase, displaying both diagonal and off-diagonal long-range order.
This work was supported in part by the SNU Research Fund, by the Korea Research Foundation, and by the Korea Science and Engineering Foundation.
# Strange Meson Enhancement in PbPb Collisions
## 1 Introduction
Ultrarelativistic heavy ion collisions create a highly excited complex system, whose dynamics are governed by excitation of nucleonic, mesonic, resonance and, to some unknown extent, quark and gluon degrees of freedom. It has been predicted that the extreme conditions of temperature and density in such collisions may suffice to create a state, known as quark-gluon plasma (QGP), where the quarks are no longer confined in hadrons . This has stimulated experimental searches for evidence of the deconfinement phase transition. Interactions between liberated gluons in the deconfined phase are predicted to enhance the rate of strangeness production compared to the non-QGP scenarios.
Being the lightest strange hadrons, kaons are expected to dominate the strange sector by virtue of canonical thermodynamics . The observed kaon multiplicity yields information about the mechanism of strangeness production, hadronization and subsequent evolution in the hadron gas, before the gas becomes sufficiently dilute that the interactions cease. Inelastic hadronic rescattering can enrich the strangeness content of the system . We report the yields and distributions of charged kaons and pions measured in ultrarelativistic PbPb collisions by the NA44 Experiment, and discuss implications of these data on the physics of the above-mentioned hadronic processes.
## 2 Experiment and data analysis
The NA44 Collaboration has measured PbPb collisions at 158 A GeV/c using a focusing spectrometer at the CERN SPS. A magnet system of two dipoles and three focusing quadrupoles, together with a tracking complex (a pad chamber, three highly segmented scintillation hodoscopes H2, H3, H4 and two strip chambers) provides momentum resolution of 0.2%. The spectrometer accepts charged particles of a single charge at a time, has two angular positions (44 and 131 mrad) and is operated at two different field strengths. In the weak field mode, it accepts charged tracks in the momentum range of $`3.3<p<5.1`$ GeV$`/c`$, and of $`6.3<p<9.7`$ GeV$`/c`$ in the strong field mode. These two field modes are often called “the 4 GeV/c” and “the 8 GeV/c” settings, respectively. More details about the spectrometer are given in .
Low (predominantly single track) multiplicity in the spectrometer acceptance allows use of two Cherenkov counters (C1, C2) for threshold discrimination of particles of different mass. Collection of $`K/p`$ and $`\pi `$ samples uses different trigger requirements: in the $`K/p`$ mode, the absence of pions and electrons in the acceptance is enforced by a Cherenkov veto (on C1, or both C1 and C2, depending on the momentum setting), whilst for pions, no special trigger enrichment is needed.
Separation of kaons from protons in all settings is performed off-line using the time-of-flight difference between $`K`$ and $`p`$. The time-of-flight is measured using the beam counter (with 35 ps resolution) as start and H3 (with 100 ps resolution) as stop. In the weak magnetic field mode, the pions used in this analysis are identified by time-of-flight, while events with electrons in the acceptance are rejected off-line using C2. High ($`98\%`$) purity of the $`K`$ and $`\pi `$ samples is achieved. In the strong field mode, pions are obtained by subtracting identified kaons and protons from all charged tracks.
Inefficiencies due to the Cherenkov vetoes are evaluated by measuring the rejection by the Cherenkovs in untriggered runs. Such unwanted vetoes occur when a kaon or proton is accompanied by a pion, electron or muon in the Cherenkov counters. To evaluate the inefficiency in the weak magnetic field runs, the vetoed kaons are identified by time-of-flight and the Uranium calorimeter data is used for $`\pi /e`$ separation. In the strong magnetic field runs, the momentum of the particles is too high for reliable separation by time-of-flight, and subtraction of pions, utilizing knowledge of the pion line shape in $`m^2`$, is used to count vetoed kaons.
NA44 has two detectors to characterize the event multiplicity: $`T_0`$ (a scintillator trigger counter covering $`1.4\le \eta \le 3.7`$ for an $`\eta `$-dependent fraction of azimuthal angle, $`0.22\le \mathrm{\Delta }\varphi /2\pi \le 0.84`$ respectively), and a Si pad array measuring $`dE/dx`$ in 512 pads covering $`1.5\le \eta \le 3.3`$ and $`2\pi `$ azimuthally. The multiplicity of a given particle, measured in the spectrometer, is an average over many events of a certain centrality class, set by the trigger. Accurate determination of the trigger centrality is performed by varying the centrality used in normalizing the yield of charged tracks in the spectrometer until this yield agrees with the multiplicity in the Si array. Correction for the acceptance difference between the spectrometer and the Si array is performed using the RQMD model , which is consistent with measured charged hadron distributions. Kaon and pion samples of identical Si multiplicity are selected via the $`T_0`$ signal amplitude.
Differential distributions of particles in rapidity, $`y`$, and transverse kinetic energy, $`m_T-m`$, carry information about the dynamics of the collision. In determining $`dN/dy`$ for kaons and pions we use spectrometer settings, or portions thereof, with $`\mathrm{\Delta }y=0.2`$–$`0.6`$. Any dependence of the slope parameter(s) upon $`y`$ is therefore negligible. Then
$$\left\langle E\frac{d^3N}{dp^3}\right\rangle _{\mathrm{\Delta }y,2\pi }=\frac{1}{2\pi }\left\langle \frac{dN}{m_Tdm_T}\right\rangle _{\mathrm{\Delta }y}=\frac{\frac{dn}{m_Tdm_T}\int _{\mathrm{\Delta }y}\frac{d\stackrel{~}{N}}{dy}dy}{2\pi \mathrm{\Delta }y\int _{\mathrm{\Delta }y}A\frac{d\stackrel{~}{N}}{dy}dy}$$
(1)
where $`A=A(y,m_T)`$ is the acceptance function from Monte Carlo simulation of the spectrometer, including effects of magnetic optics, detector response, momentum resolution, tracking efficiency and decays. $`dn/dm_T`$ is the number density of reconstructed tracks in $`m_T`$. $`d\stackrel{~}{N}/dy`$ is the shape of the rapidity distribution, taken to be Gaussian around midrapidity. Integration of the $`m_T`$ distribution, with extrapolation to $`m<m_T<\mathrm{\infty }`$ using the fitted slopes, results in $`dN/dy`$.
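The slope fit and the extrapolation can be sketched numerically. The toy spectrum below is generated with an assumed inverse slope, re-fitted by linearising the exponential, and then integrated analytically over $`m<m_T<\mathrm{\infty }`$ (for an exponential, the integral of $`m_T\,e^{-(m_T-m)/T}`$ is $`T(m+T)`$); all numbers are illustrative and the overall normalisation is left out.

```python
import numpy as np

M_K = 0.4937  # charged-kaon mass, GeV

# Toy spectrum: (1/m_T) dN/dm_T = A * exp(-(m_T - m)/T), assumed parameters.
A_TRUE, T_TRUE = 50.0, 0.230  # amplitude (arb.), inverse slope (GeV)
mt = np.linspace(M_K + 0.02, M_K + 1.0, 25)
spec = A_TRUE * np.exp(-(mt - M_K) / T_TRUE)

# Fit the slope by linearising: ln(spec) = ln(A) - (m_T - m)/T.
slope, intercept = np.polyfit(mt - M_K, np.log(spec), 1)
t_fit, a_fit = -1.0 / slope, np.exp(intercept)

# Extrapolated integral over m < m_T < infinity: A * T * (m + T).
dndy = a_fit * t_fit * (M_K + t_fit)
print(f"fitted T = {1e3*t_fit:.1f} MeV, dN/dy (up to normalisation) = {dndy:.2f}")
```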
Table 1 shows the sources of uncertainty on $`dN/dy`$. The error in the extrapolation due to uncertainty in the slope parameter(s) is small because over 95% of particles around mid-rapidity have $`p_T`$ in the range covered by one of the two angle settings. Consequently, the systematic error in $`dN/dy`$ is dominated not by the extrapolation, but by uncertainties in determination of centrality and particle ID efficiency.
## 3 Results and discussion
Tables 2 and 3 give the $`m_T`$ slope parameters and values of $`dN/dy`$ for kaons and pions, along with the statistical and systematic uncertainties. The measured distributions for charged kaons of both signs in transverse kinetic energy and rapidity, are shown on Fig. 1 and Fig. 2, respectively.
The $`1/m_T`$ scaled spectra look approximately exponential in accordance with the behaviour typical for thermalized ensembles of interacting particles, or for particles in whose production the phase-space constraints played the dominant role . The spectra were fit with an exponential in $`(m_T-m)`$, and the resulting slopes are shown in the inserts in Fig. 1. The inverse slopes of the $`K^+`$ and $`K^-`$ spectra are the same, within errors. Our event selection is sufficiently central that the slopes show no dependence on multiplicity.
In Fig. 2, it is clear that many fewer kaons are produced than pions, as was observed in $`p+p`$ collisions. There are approximately twice as many positive as negative kaons produced. This is typical for baryon rich systems, and was also observed in $`p+p`$ collisions. Preliminary NA49 measurements of $`K^+`$ and $`K^{}`$ $`dN/dy`$ are consistent with those reported here.
Both Figs. 1 and 2 compare the data with predictions of the transport theoretical approach RQMD . While RQMD tends to overpredict both the $`K^+`$ and $`K^-`$ yields, for $`K^-`$ the discrepancy appears to be larger. Running RQMD in the mode which does not allow the hadrons to rescatter (shown by the dashed line on the figure) decreases the number of kaons produced. This result illustrates the importance of the secondary scattering to the total kaon yields. Measurements of proton production at midrapidity and of the $`p-\overline{p}`$ rapidity distribution indicate that RQMD somewhat overpredicts the degree of baryon stopping. Because $`\pi N`$ inelastic collisions can produce kaons, an increase in stopping translates naturally into kaon enhancement at midrapidity. The data show that the hadron chemistry via secondary scattering, as implemented in RQMD, successfully reproduces the general trends in the hadron distribution. However, the hadron chemistry in the model is not quantitatively correct.
Exothermic strangeness exchange reactions of the kind $`\overline{K}+N\rightarrow Y+\pi `$ and $`K+\overline{N}\rightarrow \overline{Y}+\pi `$ are favoured by the cooling of the system, and redistribute strangeness from the mesonic to the baryonic sector. If larger systems reach lower temperature before freezing out , such reactions may be important in Pb+Pb collisions. These strangeness-exchange processes are included in RQMD. The model seems to underpredict (preliminary) $`\mathrm{\Lambda }`$ yields and overpredict $`K^-`$. This may indicate that the details of the description should be reexamined.
The $`K/\pi `$ abundance ratio allows estimation of the degree of strangeness enhancement and comparison of various colliding systems. Fig. 3 summarizes the existing midrapidity data in symmetric systems: ISR p+p ; AGS AuAu ; SPS SS ; and SPS PbPb. <sup>1</sup><sup>1</sup>1In this figure, as well as in Fig. 4, the hadron abundances we present are integrals over a fixed fraction of rapidity around midrapidity $`y_{CM}`$: $`|y-y_{CM}|\le |y_{proj}-y_{targ}|/8`$. This enables the comparison between various energies and experiments, but involves an interpolation in $`y`$ for experiments with larger coverage.
Strangeness enhancement compared to the interpolated $`pp`$ collision data, shown as the line, is seen. The solid point, corresponding to ISR data at midrapidity, indicates the extent of the enhancement due to the midrapidity cut on the particles. The figure shows that $`K^+/\pi ^+`$ is enhanced in high multiplicity heavy ion collisions, but $`K^-/\pi ^-`$ is consistent with $`p+p`$ values. Higher multiplicity, or more central collisions, yields larger enhancement, independent of $`\sqrt{s}`$.
Secondary hadronic interactions of the type $`\pi +N\rightarrow Y+K`$ are important for the strangeness production , and their rate is proportional to the product of the participants’ effective concentrations.
Fig. 4 shows the dependence of the $`K^+/\pi ^+`$ ratio on the product of rapidity densities of the two ingredients of the associated strangeness production, $`N`$ (represented by $`p`$) and $`\pi ^+`$ in the AGS and SPS data, and RQMD calculations. This “$`p\times \pi `$” product serves as an observable measure of the strangeness-enhancing rescattering. The rate of change in the $`K^+/\pi ^+`$ ratio with this rescattering observable is initially very high. However, $`K^+/\pi ^+`$ nearly saturates after this initial rise. The figure shows why the enhancement is large as soon as the multiplicity becomes appreciable. The values of “$`p\times \pi `$” reached at the SPS and AGS are comparable, explaining the similarity of the kaon enhancement despite the different energies. RQMD reproduces the trend of the data very well, and the dotted lines (illustrating no rescattering) along with the shape of the rise with “$`p\times \pi `$” underscore the role of hadronic rescattering in kaon yields. The quantitative agreement of RQMD with the data is not as good, but the final results are undoubtedly quite sensitive to the magnitude of the cross sections used in the model.
## 4 Conclusions
Production of charged $`K`$ and $`\pi `$ mesons in central Pb+Pb collisions at 158 GeV/nucleon has been measured. Within the centrality range studied, no strong multiplicity dependence of the kaon $`m_T`$ slopes or $`K/\pi `$ ratios has been observed. We see no significant slope difference between $`K^+`$ and $`K^-`$. $`K^+/\pi ^+`$ is enhanced by a factor of about two over $`p+p`$ collisions, whereas $`K^-/\pi ^-`$ is little enhanced. Our measurement of $`K^+/K^-`$ in this saturated region may be used for chemical calculations of the hadron gas.
Comparison with the RQMD model shows that the model qualitatively reproduces the hadron chemistry, through the rescattering of the produced particles. Quantitative comparisons, however, show that the model overpredicts the $`K^-`$, while the magnitude of the $`K^+`$ enhancement is within the range explainable by the RQMD mechanisms. Deconfinement scenarios of the $`K^+/\pi ^+`$ enhancement cannot, however, be ruled out or proven by these data alone.
## 5 Acknowledgements
We are grateful to Heinz Sorge for many helpful and illuminating conversations. The NA44 Collaboration wishes to thank the staff of the CERN PS-SPS accelerator complex for their excellent work, and the technical staff in the collaborating institutes for their valuable contributions. This work was supported by the Science Research Council of Denmark; the Japanese Society for the Promotion of Science; the Ministry of Education, Science and Culture, Japan; the Science Research Council of Sweden; the US Department of Energy and the National Science Foundation.
# Integrated optics for astronomical interferometry
## 1 Introduction
Optical long baseline interferometry is one of the upcoming techniques that will undoubtedly provide compelling, high angular resolution observations in optical astronomy. The use of interferometry in astronomy was first proposed by Fizeau (1868) and achieved by Stéphan (1874) on a single telescope with a pupil mask. Michelson & Pease (1921) first succeeded in measuring stellar diameters, but their interferometer was not sensitive enough to extend the investigation to other stars. Interferometry is a rather complex technique which requires extreme accuracies, directly proportional to the foreseen spatial resolution: 1 milliarcsecond on the sky translates to 0.5 $`\mu `$m of optical delay on a 100-m baseline. That is why modern direct interferometry started only in the 1970s with Labeyrie (1975), who produced stellar interference fringes with two separated telescopes. In addition, interferometric experiments require very low noise detectors, which became available only recently. The atmosphere makes the work even more difficult and dramatically limits the sensitivity of ground-based interferometers. Space-based interferometric missions are therefore being prepared, like the NASA *Space Interferometry Mission (SIM)* or the interferometry cornerstone in the ESA Horizon 2000+ program, the *Infrared Spatial Interferometer (IRSI)*.
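The accuracy requirement quoted above follows from simple geometry: an angle $`\theta `$ on the sky corresponds to an external delay of order $`B\theta `$ on a baseline $`B`$. A quick numerical check of the numbers in the text:

```python
import math

MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)  # one milliarcsecond in radians

baseline_m = 100.0
theta_mas = 1.0
opd_m = baseline_m * theta_mas * MAS_TO_RAD
print(f"1 mas on a {baseline_m:.0f}-m baseline -> {1e6*opd_m:.2f} micrometers of delay")
```

which indeed gives about 0.5 $`\mu `$m.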
Long baseline interferometry is based on the combination of several stellar beams collected by different apertures and is aimed at either aperture synthesis imaging (Roddier & Léna 1984) or astrometry (Shao & Staelin 1977). A number of interferometers are currently working with only two apertures: SUSI (Davis et al. 1994), GI2T (Mourard et al. 1994), IOTA (Carleton et al. 1994), PTI (Colavita et al. 1994). COAST (Baldwin et al. 1996) and NPOI (Benson et al. 1997) have started to perform optical aperture synthesis with three apertures by using phase-closure techniques. The increase in the number of apertures is one of the major features of new-generation interferometers, like CHARA with up to 7 apertures (McAlister et al. 1994) or NPOI with 5 siderostats (White et al. 1994). We are on the verge of new breakthroughs with the construction of giant interferometers like the VLTI (Very Large Telescope Interferometer) built by the European community, which will use four 8-m unit telescopes and three 1.8-m auxiliary telescopes (Mariotti 1998), or the Keck Interferometer (Colavita et al. 1998), which will have two 10-m telescopes and four 1.5-m outriggers. They will both achieve high sensitivity thanks to their large apertures and will allow the combination of more than three input beams.
We propose in this article a new technology for beam combination, inherited from the telecom field and from micro-sensor applications, which addresses many issues related to astronomical interferometry. The technology is called integrated optics on planar substrate, or, in short, integrated optics. The principle is similar to that of fiber optics, since the light propagates in optical waveguides, except that the waveguides lie inside a planar substrate. In many aspects, integrated optics can be considered as the analog of integrated circuits in electronics.
We describe in Sect. 2 the optical functions required by an interferometer. We present in Sect. 3 the principle of integrated optics, the technology and the available optical functions. Section 4 presents the concept of an interferometric instrument made in integrated optics, and touches upon future possibilities. Section 5 discusses the different technical and scientific issues of this new way of doing interferometry. Results with a first component coming from micro-sensor applications will be presented in paper II (Berger et al. 1999). They demonstrate the validity and feasibility of the integrated optics technology for astronomical interferometry.
## 2 Description of a single-mode interferometer
To understand where and how integrated optics can play a role in astronomical interferometry, we review the different optical functions present within an interferometer (see Fig. 1), after a short summary of stellar interferometry principles. Since all interferometers but GI2T are single-mode beam combiners (the field is limited to the diffraction pattern of each aperture), we limit our study to the single-mode case, the most appropriate one for integrated optics.
A two-telescope stellar interferometer provides the measure of interference fringes between two beams at the spatial frequency $`B/\lambda `$, where $`\lambda `$ is the wavelength and $`B`$ the projection of the baseline vector $`𝐁`$, defined by the two telescopes, onto the plane perpendicular to $`𝐬`$, the unit vector pointing to the source. The complex visibility of these fringes is proportional to the Fourier transform of the object intensity distribution (Van-Cittert Zernike theorem). Hereafter we call visibility $`V`$ the modulus of the degree of coherence at the spatial frequency $`B/\lambda `$, normalized to its value at zero frequency,
$$V=\frac{|\stackrel{~}{I}(B/\lambda )|}{|\stackrel{~}{I}(0)|},$$
(1)
and phase $`\varphi `$ its argument. The phase is related to the position of the photo-centroid of the source $`s`$ by the relation:
$$\varphi =2\pi \frac{𝐁\cdot 𝐬}{\lambda }.$$
(2)
For ground-based interferometers, the source phase is corrupted by atmospheric turbulence. This prevents an absolute measurement of the source phase. However it is possible to measure the difference in source phase between two wavelengths<sup>1</sup><sup>1</sup>1Phase-closure and phase-reference techniques also provide ways of retrieving this phase..
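To fix orders of magnitude, the angular scale probed at the spatial frequency $`B/\lambda `$ can be evaluated with a short calculation; the baseline and wavelength below are illustrative assumptions, not values quoted in this section.

```python
import math

B = 100.0      # projected baseline in metres (assumed value)
lam = 2.2e-6   # observing wavelength in metres (assumed, K band)

# Fringe spacing lambda/B, converted from radians to milliarcseconds:
# this is the angular scale probed at the spatial frequency B/lambda.
theta_mas = (lam / B) * (180.0 / math.pi) * 3600.0 * 1e3
print(f"fringe spacing ~ {theta_mas:.2f} mas")  # ~4.5 mas
```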
### 2.1 Light collecting
The stellar light is collected by each individual aperture. These apertures can be either siderostats (e.g. Mark III, PTI, IOTA) or telescopes (e.g. GI2T, VLTI, Keck Interferometer). The coverage of the spatial frequencies is usually achieved by carefully locating the apertures in order to take advantage of the Earth's rotation, which induces a variation of the length and orientation of the projected baselines (super-synthesis effect). If the structure of the object does not depend on wavelength, then observing at different wavelengths is equivalent to observing at different spatial frequencies. When the apertures are movable (GI2T, IOTA, SUSI), the interferometer can cover many different baselines with different geometrical configurations.
### 2.2 Beam transportation
The beams coming out from each telescope must be directed toward the beam combination table. Two different techniques can achieve this transportation:
* Bulk optics
Flat mirrors are usually used to carry the light from the individual apertures toward the central beam combiner. Their main advantages are high throughput and low wavelength dependency. However, they are sensitive to thermal and mechanical disturbances, and they require many degrees of freedom to align the beams.
Two different philosophies have been developed for transportation. 1) The Coudé trains are symmetrical to prevent differential polarization rotations and phase shifts. This leads to a large number of mirrors and thus a low throughput, especially in the visible. One still gets residual polarization effects, essentially due to differences in optical coatings, which are not negligible. 2) The number of optics is reduced to a minimum and the large resulting polarization effects are calibrated and corrected inside the interferometer (Sect. 2.4).
* Fiber optics
Froehly (1981) and Connes et al. (1984) were the first to propose fiber optics to connect different apertures. Major efforts have been made in this field by Shaklan & Roddier (1987); Shaklan (1990); Reynaud et al. (1994); Reynaud & Lagorceix (1996) with silica fibers, and in the 2.2 $`\mu `$m range by Coudé du Foresto & Ridgway (1991); Coudé du Foresto et al. (1996) with fluoride fibers.
The optical fiber throughput is very high: a 100-m silica fiber has a throughput of $`99.6\%`$ at $`\lambda =1.6\mu \text{m}`$ (0.15 dB/km). In addition, fibers offer some flexibility since the only degrees of freedom are located at the entrance and output of the fiber. That is one reason why Turner & Brummelaar (1997) have proposed optical fibers to combine the visible beams of CHARA. Using fibers can be significantly less expensive than bulk optics.
The main drawbacks of optical fibers are: chromatic dispersion if the optical paths through the different fibers are not matched; mechanical and thermal sensitivity (optical fibers are also used as micro-sensors); and birefringence of the material. However, Reynaud & Lagorceix (1996) have shown that one can overcome most of these difficulties by actively controlling the fiber length, carefully polishing the fiber ends and using polarization-maintaining fibers.
### 2.3 Optical path delay (OPD)
The optical paths from the beam combiner up to the stellar source are not identical for each beam. The interferometer must equalize the path lengths to micron-level accuracy. Furthermore, the path lengths change with time, and the interferometer must take the sidereal motion into account. This optical function is performed with delay lines.
The classical solution consists in a retro-reflector mounted on a moving cart (Colavita et al. 1991). The retro-reflector can be either a cat's eye or a corner cube. Reynaud & Delaire (1994) and Zhao et al. (1995) have proposed to stretch optical fibers to delay the optical path. Laboratory experiments showed that this type of delay line can achieve more than 2 m of continuous delay with 100 m of silica fiber (Simohamed & Reynaud 1996), and about 0.4 mm of continuous delay with 3.4 m of fluoride fiber (Zhao et al. 1995). In the latter case, however, the maximum optical path delay is somewhat limited since the fiber length is restricted by the maximum acceptable stretch: Zhao et al. (1995) and Mariotti et al. (1996) proposed multi-stage delay lines which perform short continuous delays by fiber stretching and long delays by switching between fiber arms of different lengths. However, the differential dispersion in fibers of different lengths still remains a limiting factor of this technology.
Optical path modulation using silica fibers has been implemented in the ESO prototype fringe sensor unit (Rabbia et al. 1996).
### 2.4 Beam quality control
The control of the beam quality is essential to maintain the intrinsic contrast of the interferometer.
* Wavefront correction
The stellar light goes through the atmosphere, where the wavefront is disturbed. Depending on the wavelength and the size of the turbulent cells ($`r_0`$) compared to the aperture size, the incoming wavefront is corrugated and the stellar spot is divided into several speckles with phase differences in the focal plane. Single-mode interferometers select only one speckle, and the atmospheric turbulence therefore leads to signal losses, the transmitted flux being proportional to the Strehl ratio. Using adaptive optics to correct, at least partially, the incoming wavefront increases the total throughput of an interferometer. The minimum wavefront correction is the tip-tilt correction used on many interferometers (IOTA, SUSI, PTI, …).
* Fringe tracking
Due to the same atmospheric perturbations, but at the baseline scale, the optical path difference between two apertures varies rapidly. When high sensitivity is required, as for spectral analysis, one needs to increase the acquisition time. The interferometric signal must then be analyzed faster than the turbulence time scale to prevent visibility losses due to fringe blurring. The fringe tracker analyzes the fringe position and actively controls a small delay line to compensate for the atmospheric delay, thereby stabilizing the fringes.
* Polarization
Instrumental polarization can dramatically degrade the fringe visibility. The main effects are differential rotations and phase shifts between the polarization directions (Rousselet-Perraut et al. 1996). Even if special care is taken in designing the optical train to make the path of each beam as symmetrical as possible, in practice the incident angles are not exactly the same and the mirrors do not have the same coatings.
Differential rotations can be compensated by rotator devices (Rousselet-Perraut et al. 1998) whereas differential phase shifts can be corrected by Babinet compensators (Reynaud 1993) or Lefèvre fiber loops (Lefèvre 1980).
* Spatial filtering
The incoming wavefronts propagate through a spatial filter, a geometrical device which selects only the coherent core of the beams. It can be achieved either by a micrometer-sized hole or by an optical waveguide such as a fiber (Shaklan & Roddier 1988). This principle has been applied successfully to the FLUOR interferometric instrument (Coudé du Foresto 1996). The beams, including atmospheric turbulence effects, are then characterized by only two quantities, the amplitude and the phase of the outgoing electric field<sup>2</sup><sup>2</sup>2In fact, this statement is correct only for long enough fibers ($`>1000\lambda `$, i.e. a few centimeters) or a small hole (a few tenths of the diffraction-limited pattern).. Combined with photometric calibration, this process leads to accurate visibilities (see Sect. 2.5).
### 2.5 Photometric calibration
The interference signal measured in stellar interferometry is directly proportional to each incoming beam intensity. These intensities fluctuate because of the atmospheric turbulence. The estimation of the fringe contrast is improved when these intensities are monitored, as suggested by Connes et al. (1984) and validated by Coudé du Foresto (1996). Photometric calibration combined with spatial filtering leads to visibility accuracies better than 0.3% (Coudé du Foresto et al. 1996).
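As a rough sketch of how such a photometric correction can work (our own notation; the actual estimators used by FLUOR are more elaborate), the interferometric output can be corrected with the two monitored photometric channels. The coupling coefficients `kappa1` and `kappa2` below are assumed to be calibrated beforehand on internal sources.

```python
import numpy as np

def corrected_fringes(I, P1, P2, kappa1, kappa2):
    """Correct an interferometric signal I(t) with the simultaneous
    photometric signals P1(t), P2(t); kappa1, kappa2 are the calibrated
    fractions of each photometric channel reaching the fringe output."""
    # Subtract the incoherent (photometric) contribution ...
    numerator = I - kappa1 * P1 - kappa2 * P2
    # ... and normalize by the coherent flux, so that atmospheric
    # intensity fluctuations no longer bias the fringe contrast.
    return numerator / (2.0 * np.sqrt(kappa1 * kappa2 * P1 * P2))
```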
### 2.6 Beam combination
Mariotti et al. (1992) have classified the different types of beam combinations. In the single-mode case, there are two types of beam combination:
* co-axial combination, when the beams seem to propagate in the same direction, as in Michelson's laboratory experiment (left part of Fig. 2);
* multi-axial combination, when the beams seem to propagate from different directions, as in Young's double-slit experiment (right part of Fig. 2).
In bulk optics, the co-axial combination is performed with a beam splitter, whereas the multi-axial combination is done by focusing the different beams onto the same spot. In the multi-axial case, the differential tilt between the beams produces fringes on the point spread function. The co-axial combination can be regarded as a particular case of the multi-axial mode where all the beams are superposed without tilts: the fringes disappear and the amplitude of the resulting spot depends on the phase difference between the two beams.
The fringe encoding is achieved, in the co-axial case, by modulating the optical path difference between the two beams, which results in an intensity modulation, or, in the multi-axial case, by sampling the spatial fringes with a detector matrix<sup>3</sup><sup>3</sup>3If OPD modulation is used with the multi-axial combination, then the fringes appear to move underneath the fringe envelope.. Usually, if the fringes are coded in one direction, the other direction is compressed to reduce the number of pixels.
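The co-axial encoding just described can be illustrated with a toy temporal fringe scan; every number below (intensities, visibility, bandwidth) is an arbitrary assumption chosen only to make the modulation and its coherence envelope visible.

```python
import numpy as np

lam = 2.2e-6                             # wavelength [m] (assumed)
dlam = 0.4e-6                            # spectral bandwidth [m] (assumed)
opd = np.linspace(-30e-6, 30e-6, 2000)   # scanned OPD [m]
I1, I2, V = 1.0, 0.8, 0.6                # beam intensities, true visibility

# A finite bandwidth gives the fringes a coherence envelope of width
# lambda^2 / dlam (here ~12 microns); np.sinc is sin(pi x)/(pi x).
envelope = np.sinc(opd / (lam**2 / dlam))

# Intensity recorded while the OPD is modulated around zero.
I = I1 + I2 + 2.0 * np.sqrt(I1 * I2) * V * envelope * np.cos(2 * np.pi * opd / lam)
```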
### 2.7 Spectral information
This function is not always implemented in existing instruments, although it is useful for two objectives: to estimate the physical parameters of the source (temperature, kinematics, …), and to determine the position of the central fringe at zero OPD. The distance between the fringes being directly proportional to the wavelength, one can either derotate the fringe phase, as in Mark III and PTI (Shao et al. 1988), or measure the group delay, as in GI2T (Koechlin et al. 1996).
The spectral information can be obtained by dispersing the fringes with a dispersive element (GI2T, PTI). Mariotti & Ridgway (1988) also suggested applying the concept of Fourier transform spectrography to interferometry by performing double Fourier transform interferometry.
### 2.8 Detection
In the visible, the detectors are either CCDs or photon-counting cameras. In the infrared, mono-pixel InSb detectors were long used, but with the availability of array detectors with low read-out noise, interferometers have started to use detector matrices.
## 3 Integrated optics on planar substrate
The concept of integrated optics was born in the 1970s with the development of optical communications by guided waves. A major problem of transmission by optical fibers was the signal attenuation due to propagation and the need for repeaters to reformat and amplify the optical signals after long distances. The solution offered by classical optics was unsatisfactory, and Miller (1969) suggested integrating all optical components on a single chip, with optical waveguides to connect them.
### 3.1 Principle of guided optics
For the sake of simplicity, we first consider the propagation of a collimated incident beam in a planar waveguide. This particular structure is formed of three step-index infinite planar layers (see Fig. 3). Light can be observed at the structure output provided that total reflection occurs at each interface and that constructive interference occurs between two successive reflected wavefronts (A and C in the figure). The first condition implies that a high-index layer is sandwiched between two low-index layers and gives the range of acceptable incident angles. The second condition translates into a phase difference between the wavefronts A and C that is a multiple of 2$`\pi `$. Therefore the range of acceptable incident angles is no longer continuous but discrete. A single-mode waveguide is a guide in which only one of these discrete directions can propagate. The core layer thickness ranges between $`\lambda /2`$ and $`10\lambda `$ depending on the index difference. A multimode guide propagates beams coming from several directions.
In practice, one needs the full electromagnetic field theory to compute the beam propagation inside the waveguide. The continuity relations of the electromagnetic fields at each interface lead to the equations of propagation of guided modes (Jeunhomme 1990). Depending on the wavelength and the guide thickness ($`l`$ in Fig. 3), these equations have either no solution (structure below the cutoff frequency), only one solution (single-mode structure) or several (multi-mode structure). The number of solutions also depends on the difference in refractive index between the various layers of the structure. The larger the index difference is, the better the modes are confined. These equations also allow one to estimate the energy distribution profile, which can be approximated, to first order, by a Gaussian function. The major part of the energy lies in the channel, but evanescent waves can interact with evanescent waves coming from other nearby waveguides (see the directional coupler in Sect. 3.3).
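For a symmetric step-index slab, the mode count can be obtained from the standard normalized frequency; the thickness and indices below are illustrative assumptions in the range quoted for ion-exchanged guides, not values taken from this paper.

```python
import math

def num_te_modes(thickness, n_core, n_clad, wavelength):
    """Number of guided TE modes of a symmetric step-index slab
    waveguide: V = (2*pi*t/lambda)*sqrt(n_core^2 - n_clad^2), and the
    guide supports floor(V/pi) + 1 modes (single-mode while V < pi)."""
    V = (2 * math.pi * thickness / wavelength) * math.sqrt(n_core**2 - n_clad**2)
    return int(V // math.pi) + 1

# Example: a 4-micron guide with an index step of 0.01 around n ~ 1.5
# is still single-mode at 1.55 microns.
print(num_te_modes(4e-6, 1.51, 1.50, 1.55e-6))  # -> 1
```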
In interferometry, multi-mode guided structures cannot be used since there exist optical path differences between the various modes. In the following, only single-mode waveguides are considered.
### 3.2 Current technologies
#### 3.2.1 Ion exchange
A first method to build integrated guides on a planar substrate is based on glass ion exchange (Ramaswamy & Srivastava 1988; Ross 1989): the $`\text{Na}^+`$ ions of a glass substrate are exchanged by a diffusion process with ions ($`\text{K}^+`$, $`\text{Tl}^+`$ or $`\text{Ag}^+`$) of molten salts. The local modification of the glass chemical composition increases the refractive index at the glass surface. A three-layer structure (air / ions / glass) is created and the light is vertically confined. By standard photo-masking techniques (see Fig. 4), the ion exchange can be limited to a compact area to create a channel waveguide. Since the ion exchange only occurs at the surface of the glass, the last step of the process consists in embedding the guide, either by forcing the ions to migrate with an electric field or by depositing a silica layer on the waveguide. We obtain a component which guides the light like an optical fiber, the ion exchange area being the core and the glass substrate<sup>4</sup><sup>4</sup>4with or without the added silica layer being the cladding. According to the ions of the molten salt, the refractive index difference can vary between 0.009 and 0.1 (see Table 1). This technology provides various components for telecom and metrology applications.
#### 3.2.2 Etching technologies
Another method consists in etching layers of various indices deposited on a silicon substrate (Mottier 1996). These layers can be either phosphorus-doped silica or silicon nitride. Both technologies create channels by etching layers of material, in which light is confined as in an optical fiber (see Fig. 5). The channel geometry is defined by standard photo-masking techniques. According to the fabrication process, $`\mathrm{\Delta }n`$ can be either high (0.5) for very small sensors, or very low (between 0.003 and 0.015) for a high coupling efficiency with optical fibers. These technologies usually provide components for various industrial applications (gyroscopes, Fabry-Pérot cavities or interferometric displacement sensors).
#### 3.2.3 Polymers
Single-mode waveguides made by direct UV light inscription onto polymers are under development. This technology is still maturing, and the components usually present high propagation losses (Strohhöfer et al. 1998).
### 3.3 Available functions with integrated optics
The first two technologies provide many standard functions for wavelengths ranging between 0.5 $`\mu \text{m}`$ and 1.5 $`\mu \text{m}`$ (standard telecom bands). Several examples are presented (see Fig. 6):
1. The straight waveguide is the simplest component.
2. The curved waveguide allows some flexibility to reduce the size of integrated optics components. Its characteristics depend on the radius of curvature.
3. The direct Y-junction acts as an achromatic 50/50 power divider.
4. The reverse Y-junction is an elementary beam combiner, similar to a beam splitter of which only one output is accessible<sup>5</sup><sup>5</sup>5The flux is lost if the incident beams are in phase opposition..
5. The mirror is a Y-junction coupled with curved waveguides creating a loop. A straight transition between the Y-junction and the loop ensures a symmetrical distribution. The modes propagating through the loop in opposite directions interfere, and the light then goes back into the input straight waveguide.
6. The directional coupler consists of two closely spaced waveguides. According to their proximity and the length of the interaction area, modes can be transferred between them, and a power divider can be realized. The power ratio depends on the distance between the two guides, the length of the interaction area and the wavelength (a coupled-mode sketch of this power exchange is given after this list).
7. The characteristics of the X-crossing depend on the intersection angle. For large angles (e.g. larger than 10 degrees), the two waveguides do not interact: the crosstalk is negligible. For smaller angles, part of the power is exchanged between the two arms of the component.
8. The taper is a smooth transition section between a single-mode straight waveguide and a multi-mode one. It allows light to propagate in the fundamental mode of the multi-mode output waveguide. The output beam is thus collimated.
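The power exchange in the directional coupler of item 6 follows from coupled-mode theory; the coupling coefficient below is an arbitrary assumption, since its actual value is set by the guide separation, the interaction length and the wavelength.

```python
import numpy as np

kappa = 500.0                     # coupling coefficient [1/m] (assumed)
L = np.linspace(0.0, 6e-3, 200)   # interaction length [m]

# Coupled-mode theory for two identical guides: the power oscillates
# sinusoidally between the through and cross ports along the coupler.
P_cross = np.sin(kappa * L) ** 2
P_through = np.cos(kappa * L) ** 2

# A 50/50 power divider is obtained when kappa * L = pi/4.
print(f"3-dB length ~ {np.pi / (4 * kappa) * 1e3:.2f} mm")  # ~1.57 mm
```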
## 4 A coin-size complete interferometer
Many functions required by interferometry (see Fig. 1) can be implemented on a single integrated optics component made from a tiny glass plate. Based on the listed available functions, one can design a beam combiner for a multi-telescope interferometer.
### 4.1 Beam combination
Fig. 7 displays various types of integrated optics beam combiners for two telescopes. They can easily be upgraded to the combination of a larger number of beams. We have classified these beam combiners with the same terminology as in Sect. 2.6.
A co-axial beam combiner is made of waveguide junctions. Reverse Y-junctions allow one to collect only the constructive part of the interferometric signal, while X-crossing junctions with small angles retain the whole interferometric signal provided that asymmetric waveguides are used for the two arms. Note that directional couplers can also be used despite their narrow bandpass.
A multi-axial beam combiner is formed by individual single-mode waveguides assembled by a taper that feeds a planar waveguide. The light propagates freely in the horizontal direction and the beams interfere at the output of the device, whereas the light remains confined in the vertical direction. The fringes can be sampled on a detector.
The multiplexer has no analog in classical optics<sup>6</sup><sup>6</sup>6except if we pile up several co-axial beam combiners.. The light from a given input beam is mixed with the light from the other input beams thanks to directional couplers. The output beams are linear combinations of the input beams, whose ratios depend strongly on the wavelength.
### 4.2 Optical Path Difference modulators
Small excursions are possible with integrated optics technologies. The phase can be modulated over up to 100 $`\mu \text{m}`$ with on-chip electro-optic, thermo-optic or magneto-optic actuators (Alferness 1982). Such excursions are long enough to modulate the optical path difference around the zero-OPD location to scan the fringes.
### 4.3 Wavelength selection
Thin-film technology can be used to deposit any spectral filter at the output of the waveguides (Richier 1996). A particular application of thin-film coatings is dichroic filters. Such components are usually integrated in telecom devices and are attractive for astronomical interferometry in order to perform various calibrations or controls.
### 4.4 Photometric calibration
Thanks to direct Y-junctions or directional couplers, light can be partially extracted to achieve real-time photometric monitoring.
### 4.5 Polarization control
The control of waveguide shapes permits building polarizing components such as linear polarizers, polarization rotators or phase shifters (Lang 1997), which can be used to compensate residual instrumental polarization. In the future, integrated optics components could eventually be coupled with crystals (such as lithium niobate) which allow polarization control through the Kerr or Pockels effects.
### 4.6 Detection
The size of waveguides ($`1`$ to $`10\mu \text{m}`$) is similar to the size of pixels in infrared arrays. Therefore, direct matching of the planar optics component with an infrared detector would lead to a completely integrated instrument with no relay optics between the beam combiner and the detector. Furthermore, recent developments of Superconducting Tunnel Junctions (STJs; Feautrier et al. 1998) show that one may build pixel-sized detectors with photon-counting capabilities over a large spectral range (from the ultraviolet to the near-infrared) with a very high quantum efficiency. Given its natural spectral resolution (R=50), an STJ combined with an integrated interferometer allows multichannel interferogram detection as well as fringe-tracking capabilities. Since STJs are manufactured with the same etching technology as some integrated optics components, one can foresee a completely integrated interferometer including one of the most sensitive detectors.
In the future, detection techniques using parametric conversion (Reynaud & Lagorceix 1996) could be implemented with optical waveguides.
### 4.7 Switches
Optical integrated switches (Ollier & Mottier 1996) already exist and can be coupled with an integrated interferometer to ensure the delay-line function.
## 5 Discussion
Integrated optics is extremely attractive in astronomical interferometry for combining two or more beams and for various functions (Kern et al. 1996). In this section we discuss some intrinsic properties of integrated optics components. This analysis leads us to list some specific advantages and applications for this approach.
### 5.1 Optical losses
We have to distinguish several kinds of optical losses:
1. Fresnel losses.
At the air/waveguide or air/coupling-fiber interfaces, Fresnel losses occur. They amount to about 4% but can be reduced by anti-reflection coatings deposited at the inputs and outputs of the waveguides.
2. Coupling losses.
The light injection into the waveguide can be either direct or, more usually, via an optical fiber. According to the chosen solution, coupling losses occur at the air/waveguide or air/coupling-fiber interface and at the fiber/waveguide interface. For an efficient coupling, the incident energy has to match the propagating mode as closely as possible (numerical apertures, fiber core and waveguide sizes, waveguide profile shape).
All these conditions cannot easily be satisfied. In etching technology, the process provides channels with non-circular sections, leading to coupling losses of about 0.33 dB (or 7$`\%`$, excluding Fresnel losses). With ion exchange technology, the coupling efficiency clearly depends on the diffusion process, and more specifically on the channel depth. The losses are of the order of 2-3$`\%`$ if the waveguide is embedded inside the substrate.
3. Propagation losses.
Standard glasses provide low propagation losses for wavelengths shorter than $`2.5\mu \text{m}`$. With ion exchange technology, the propagation losses depend upon the diffused ions. For the most commonly used ions ($`\text{K}^+`$, $`\text{Tl}^+`$ or $`\text{Ag}^+`$) these losses remain below 0.2 dB/cm (a 1 cm-long component has a throughput of 94.5$`\%`$). Silicon etching technology exhibits propagation losses of 5 dB/m. Therefore integrated optics cannot be used to realize long components intended for beam transportation (the dB-to-throughput conversion is sketched after this list).
4. Losses intrinsic to the integrated optics structure.
Depending on the integrated optics design, light can be partially lost because of uncontrolled radiated modes, as in the classical reverse Y-junction. When it is fed with two incident beams in phase opposition, the flux is radiated into the substrate. This point is critical in astronomical interferometry, where we wish to maximize the optical throughput. For this specific application, optimization and simulation of various components are in progress (Schanen-Duport et al. 1998).
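The dB figures above translate into transmission fractions through the usual relation $`T=10^{L/10}`$; the snippet below is only a consistency check of the quoted numbers (small differences from the quoted percentages may reflect rounding or additional loss terms).

```python
def throughput(loss_db):
    """Transmission fraction corresponding to a total loss in dB."""
    return 10.0 ** (-loss_db / 10.0)

print(throughput(0.15e-3 * 100))  # 100 m of fiber at 0.15 dB/km -> ~0.997
print(throughput(0.2 * 1))        # 1 cm of guide at 0.2 dB/cm   -> ~0.955
print(throughput(5.0 * 0.01))     # 1 cm of guide at 5 dB/m      -> ~0.989
```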
### 5.2 Spectral behavior
#### 5.2.1 Available spectral ranges
Off-the-shelf components are generally designed for the telecom spectral bands (0.8 $`\mu \text{m}`$, 1.3 $`\mu \text{m}`$ and 1.5 $`\mu \text{m}`$). They can directly be used to manufacture astronomical components for the I, J and H bands of the atmosphere. Standard glasses have an optical throughput higher than 90$`\%`$ in the visible and the near-infrared domain (up to $`2.5\mu \text{m}`$, see Schanen-Duport et al. 1996). Ion exchange technology provides integrated components for the K atmospheric band ($`2.2\mu \text{m}`$). For longer wavelengths ($`5\mu \text{m}`$ or $`10\mu \text{m}`$), different technologies are under study.
Optical waveguides remain single-mode over a given spectral range (an octave in wavelength). This range is wide enough to cover a single atmospheric band, but not several bands. However, the compactness of integrated optics components allows one optimized component to be used for each band without increasing the overall size of the instrument.
Y-junctions provide achromatic power division and beam combination, which makes them attractive despite the loss of 50% of the signal in the latter function. The other functions should be studied and optimized in order to limit the chromatic dependence over the spectral range. Finally, we recommend calibrating the device with spectral gain tables, as in standard astronomical imaging, in order to suppress any device-dependent effects.
#### 5.2.2 Chromatic dispersion
Like fiber optics, integrated optics components have intrinsic chromatic dispersion, which could lead to a visibility loss over the typical 0.2-0.4 $`\mu \text{m}`$ bandwidths of the atmospheric bands. However, the losses are greatly reduced since:
* the mask has been designed to provide symmetrical interferometric arms with identical lengths, curvatures, etc.;
* the optical path difference between two arms is directly proportional to the length of the device. For a typical length of a few centimeters, the optical length difference cannot exceed 100$`\mu \text{m}`$ essentially due to machining defects (cutting and polishing);
* the process for both technologies (ion exchange and etching) provides a good homogeneity for the index difference inside the waveguides.
Therefore, even if we cannot rule out contrast losses due to chromatic dispersion, we expect this effect to remain small.
However, since the integrated optics component is part of an instrument, special care must be taken to avoid other sources of chromatic dispersion. In particular, optical fibers, if used to inject stellar light into the device, have to be accurately matched (Reynaud & Lagorceix 1996).
#### 5.2.3 Dispersion capabilities
Within the context of spectral interferometric measurements (Sect. 2.7), the waveguide output is equivalent to the input slit of a spectrograph and is able to directly feed a spectrograph grating, avoiding the cylindrical optics used to compress the Airy pattern in the direction perpendicular to the fringes (Petrov et al. 1998).
### 5.3 Polarization behaviour
Both integrated optics technologies control the orientation of the neutral axes and thus provide components that intrinsically maintain polarization. Provided that the design is symmetrical, the component does not introduce differential polarization, which is a crucial advantage for astronomical interferometry. Note that the coupling with polarization-maintaining optical fibers has to be done with great accuracy.
### 5.4 Thermal background
Because of their small size, integrated optics components can easily be integrated in a single camera dewar. Therefore no relay optics are needed between the component and the detector, reducing the photon losses. Moreover, the waveguide can be cooled and placed close to the detector, and the dewar can be blind, which reduces the thermal background.
## 6 Conclusion and perspectives
### 6.1 Decisive advantages
We have shown the strong potential of integrated optics for astronomical interferometry. Figure 8 shows a three-way beam combiner with photometric calibration channels made with the silicon etching technology (Severi et al. 1999).
We argue that integrated optics technology, which is already industrially mature, presents the following main advantages for astronomical interferometry:
* Small size. A complete instrument can be integrated on a chip typically $`5\text{ mm}\times 20\text{ mm}`$.
* Stability. The instrument, being embedded in a substrate, is completely stable.
* Low sensitivity to external disturbances: temperature, pressure, mechanical constraints.
* Few opto-mechanical mounts and little alignment required. The only concern is coupling light into the waveguides.
* Simplicity. For a complex instrument, the major effort is shifted to the design of the mask, not to the construction phase.
* Intrinsic polarization capabilities (Sect. 5.3).
* Low cost. Integrated optics provides very low cost components and instrumentation set-up. Furthermore the price is the same for one or several components since the main cost is in the design and initial realization of the mask.
### 6.2 Application to interferometry
Integrated optics components are not well-suited for wide wavelength coverage, high spectral dispersion and large field of view. Therefore, we do not think that it will completely replace existing techniques in astronomical interferometry. However with the characteristics presented in this article, we think that integrated optics will be attractive for the two following applications:
* Interferometers with a large number of apertures.
Whatever the complexity of the instrumental concept, a single integrated optics component allows the combination of several beams and ensures the photometric calibration at low cost and with limited alignment.
* Space-based interferometers.
Integrated optics components, with no internal alignments, provide reliable beam combiners for space interferometry.
These specific advantages have led us to perform laboratory experiments to validate this approach. An interferometric workbench has been built to completely characterize various components realized with both technologies. First fringes with a white-light source have been obtained (Berger et al. 1999, Paper II), and the integration of an interferometric instrument dedicated to astronomical observations in the H and K atmospheric bands is in progress (Berger et al. 1998).
## 7 Acknowledgments
The authors are grateful to F. Reynaud (IRCOM - Univ. Limoges), P. Pouteau, P. Mottier and M. Séveri (CEA/LETI - Grenoble) for fruitful discussions, and to E. Le Coarer and P. Feautrier for the idea of combining integrated optics and STJs. We would like to thank K. Wallace for carefully reading the manuscript. This work was partially funded by PNHRA/INSU, CNRS/Ultimatech and DGA/DRET (Contract 971091).
# Can all neurobiological processes be described by classical physics?
## Dissipation
The interaction between system and environment produces dissipation. This phenomenon reflects the possibility that during time evolution accessible modes change into inaccessible modes belonging to the environment. Our question here is what origin the dissipative processes described in MT99 really have. To discuss this problem we utilize the coarse-graining paradigm to distinguish between system and environment. This means that an original (classical or quantum mechanical) physical structure is subdivided into two parts which eventually become the system and the environment <sup>*</sup><sup>*</sup>*The original physical structure is actually a system which does not have an environment. This means that its temporal evolution does not depend on inaccessible modes. In what follows, we call this particular structure the subsystem.. Usually this division is realized by introducing a coarse-graining procedure, i.e. small spatial lengthscales are filtered out appropriately. Then there are two basic possibilities:
a. Dissipation is essentially a classical phenomenon. Then dissipation emerges after coarse graining a subsystem that is completely representable by classical physics. This kind of dissipation we call classical dissipation.
b. Dissipation is a quantum phenomenon. Then it is a by-product of coarse graining a subsystem that cannot be described by classical physics appropriately and the resulting dissipation cannot be obtained via coarse graining a classical subsystem. This kind of dissipation we call quantum dissipation.
Examples for both kinds are known. For instance, consider a subsystem that is represented by the Euler equation. Then the resulting (coarse-grained) system exhibits dissipation arising from unresolved scales.
On the other hand, we have shown that already non-interacting quantum field theories reveal dissipative behavior after being averaged over small spatial volumes. However, it turns out that quantum dissipation is inherently different from its classical counterpart. This is because the former acts non-locally in time, i.e. the state at an arbitrary time $`t`$ depends on the system's history between $`t\tau _\mathrm{m}`$ and $`t`$, where $`\tau _\mathrm{m}`$ denotes a non-vanishing memory time. Contrary to that, a classical subsystem that is completely determined by local (in space and time) equations of motion would – after spatial coarse graining – lead to a local evolution in time. Thus the resulting equations would not involve a memory term, and in this situation $`\tau _\mathrm{m}`$ would become infinitesimally small.
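Schematically, and in our own notation (not taken from MT99), the two situations can be contrasted for a single coarse-grained degree of freedom $`x(t)`$:

$$\dot{x}(t)=-\gamma \,x(t)+\xi (t)\qquad \text{versus}\qquad \dot{x}(t)=-\int _{t-\tau _{\mathrm{m}}}^{t}K(t-t^{\prime })\,x(t^{\prime })\,dt^{\prime }+\xi (t),$$

where the left-hand (Markovian) equation describes classical dissipation with a friction coefficient $`\gamma `$, the memory kernel $`K`$ on the right encodes the dependence on the history between $`t\tau _\mathrm{m}`$ and $`t`$, and $`\xi (t)`$ is the noise term discussed in the next section; the classical case is recovered in the limit $`K(tt^{})\gamma \delta (tt^{})`$.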
## Noise
As a next point we briefly discuss the nature of the noise term itself. Noise is generated by (hidden) modes of the environment which – during their temporal evolution – occasionally become part of the system. A detailed description of the noise arising from the coarse-graining procedure for quantum systems has been given elsewhere. Again, it is straightforward to introduce classical and quantum noise using the same kind of classification as in the previous section. But then it is not immediately clear whether there is a substantial difference between the latter and the former. However, since noise always appears together with dissipation, it is natural to consider both – noise and dissipation – at the same time and therefore to investigate their cumulative effects. For instance, noise in a system with quantum dissipation would be represented by a non-Markovian random process (because of the dependence on the system's history), while classical dissipation would not change a given Markovian character of the noise.
## Possible origins of dissipation and noise in neurons
Now we ask if quantum noise and dissipation are relevant for the processes taking place in neurons, despite the fact that (according to the results in MT99) coherent quantum effects over spatial distances become negligible. The decoherence analysis in MT99 can be discussed in the context of arguments given previously. There we used the coarse-graining paradigm to show that near the classical limit additional terms appear within the classical equations of motion. These terms account for quantum dissipation, quantum noise and for the quantum potential (or Bohm potential). In fact, the latter term is responsible for non-local quantum effects in space. Thus the results in MT99, and in particular the expression given for the off-diagonal elements of the system's density matrix, suggest that coherent quantum effects coming from the quantum potential are unimportant. At the same time, however, quantum dissipation and quantum noise need not be negligible. To prove that they actually are, one would additionally have to show that on timescales relevant for the macrosystem these terms provide a very small contribution. For example, one necessary condition would be that the memory time, $`\tau _\mathrm{m}`$, is much smaller than the dynamical time of the system. At this point it is worthwhile to note that the dynamics of biological neurons involves some non-local temporal behavior. The so-called 'neuron firing', which is the initial emission and subsequent propagation of an action potential (spike) along the axon, requires that incoming electric signals reach a certain threshold. More specifically, excitatory postsynaptic signals propagate towards the axon hillock, where they lead to a large probability for the emission of a spike when the sum of these incoming signals within a short period of time exceeds a threshold. This process is called temporal summation, and the aforementioned 'short period of time' is known to be of the order of tens of milliseconds (this number is close to the dynamical timescale of a neuron, which is between $`10^{-3}`$ and $`10^{-1}`$ s).
Thus signal processing in a neuron involves a memory effect. Facing this fact, we ask the obvious question whether the memory time present in neural signal processing has something to do with the memory time $`\tau _\mathrm{m}`$ that results from a quantum mechanical description of the system. If $`\tau _\mathrm{m}`$ turns out to be much smaller than the memory timescales observed in neurons, then we would have to look for a classical theory that explains the considered non-local effect. So far, it is not clear what such a theory might look like and what the physical assumptions are that form its basis. But since coarse-grained quantum mechanics very naturally provides a non-local temporal behavior, it seems – at least at the present stage – reasonable not to preclude quantum physics from neurobiological processes.
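For concreteness, the temporal summation described above can be caricatured by a leaky integrate-and-fire model; this is a standard classical toy model, not one proposed in the text, and all parameters are illustrative assumptions.

```python
import numpy as np

dt = 1e-4          # time step [s]
tau_mem = 20e-3    # integration (memory) time, tens of ms as quoted
threshold = 1.0    # firing threshold (arbitrary units)

rng = np.random.default_rng(0)
epsps = (rng.random(50000) < 0.01) * 0.3  # sparse excitatory inputs

v, spike_times = 0.0, []
for i, epsp in enumerate(epsps):
    # Leaky integration: only inputs arriving within ~tau_mem of each
    # other sum up effectively, reproducing temporal summation.
    v += dt * (-v / tau_mem) + epsp
    if v >= threshold:
        spike_times.append(i * dt)  # emit a spike and reset
        v = 0.0
```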
# The diameter of the world wide web
Despite its increasing role in communication, the world wide web (www) remains the least controlled medium: any individual or institution can create websites with an unrestricted number of documents and links. This unregulated growth leads to a huge and complex web, which is a large directed graph, whose vertices are documents and edges are the links (URLs) pointing from one document to another. The topology of this graph determines the web's connectivity and, consequently, our effectiveness in locating information on the www. However, due to its large size (estimated to be at least $`8\times 10^8`$ documents), and the continuously changing documents and links, it is impossible to catalogue all vertices and edges. The challenge in obtaining a complete topological map of the www is illustrated by the limitations of the commercial search engines: Northern Light, the search engine with the largest coverage, is estimated to index only $`38\%`$ of the web. While great efforts are made to map and characterize the Internet's infrastructure, little is known about what truly matters in searching for information, i.e., about the topology of the www. Here we take a first step to fill this gap: we use local connectivity measurements to construct a topological model of the www, allowing us to explore and characterize the large-scale properties of the web.
To determine the local connectivity of the www, we constructed a robot that adds to its database all URLs found on a document and recursively follows these to retrieve the related documents and URLs. From the collected data we determined the probability $`P_{out}(k)`$ ($`P_{in}(k)`$) that a document has $`k`$ outgoing (incoming) links. As Figs. 1a and b illustrate, we find that both $`P_{out}(k)`$ and $`P_{in}(k)`$ follow a power law over several orders of magnitude, remarkably different not only from the Poisson distribution predicted by the classical theory of random graphs of Erdős and Rényi, but also from the bounded distribution found in recent models of random networks. The power-law tail indicates that the probability of finding documents with a large number of links is rather significant, the network connectivity being dominated by highly connected web pages. The same is true for the incoming links: the probability of finding very popular addresses, to which a large number of other documents point, is non-negligible, an indication of the flocking sociology of the www. Furthermore, while the owner of each web page has complete freedom in choosing the number of links on a document and the addresses to which they point, the overall system obeys scaling laws characteristic only of highly interactive self-organized systems and critical phenomena.
To investigate the connectivity and the large-scale topological properties of the www, we construct a directed random graph consisting of $`N`$ vertices, assigning to each vertex $`k`$ outgoing (incoming) links, such that $`k`$ is drawn from the power-law distribution shown in Fig. 1a and b. To achieve this, we randomly select a vertex $`i`$ and increase its outgoing (incoming) connectivity to $`k_i+1`$ if the total number of vertices with $`k_i+1`$ outgoing (incoming) links is less than $`NP_{out}(k_i+1)`$ ($`NP_{in}(k_i+1)`$). A particularly important quantity in a search process is the shortest path between two documents, $`d`$, defined as the smallest number of URL links one needs to follow to navigate from one document to the other. As Fig. 1c shows, we find that the average of $`d`$ over all pairs of vertices follows $`d=0.35+2.06\mathrm{log}(N)`$, indicating that the web forms a small-world network, known to characterize social or biological systems. Using $`N=8\times 10^8`$, we find $`d_{www}=18.59`$, i.e., two randomly chosen documents on the web are on average 19 clicks away from each other. Since for a given $`N`$, $`d`$ follows a Gaussian distribution, $`d`$ can be interpreted as the diameter of the web, a measure of the shortest distance between any two points in the system. Despite its huge size, our results indicate that the www is a highly connected graph with an average diameter of only $`19`$ links. The logarithmic dependence of $`d`$ on $`N`$ is important to the future potential of the www: we find that the expected $`1000\%`$ increase in the size of the web over the next few years will change $`d`$ from $`19`$ to only $`21`$. The relatively small value of $`d`$ suggests that an intelligent agent, i.e., one that can interpret the links and follow only the relevant ones, can find the desired information in a short time by navigating the www. However, this is not the case for a robot that locates the information based on matching strings: we find that such a robot, aiming to identify a document at distance $`d`$, needs to search $`M(d)\simeq 0.53N^{0.92}`$ documents which, using $`N=8\times 10^8`$, leads to $`M=8\times 10^7`$, i.e., to $`10\%`$ of the full www. This indicates that robots cannot benefit from the highly connected nature of the web, their only successful strategy being to index as large a fraction of the www as possible.
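A minimal sketch of the construction and of the quoted scalings is given below; the power-law exponent is an illustrative assumption (the text quotes none), and the fit for $`d`$ is evaluated assuming a base-10 logarithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def power_law_degrees(n, gamma, k_min=1, k_max=10000):
    """Draw n degrees from P(k) ~ k^(-gamma) on [k_min, k_max] by
    inverse-transform sampling of the continuous distribution."""
    u = rng.random(n)
    a = 1.0 - gamma
    return ((k_min**a + u * (k_max**a - k_min**a)) ** (1.0 / a)).astype(int)

degrees = power_law_degrees(10000, gamma=2.45)  # exponent assumed

# Quoted fit for the average shortest path and robot search cost.
N = 8e8
print(0.35 + 2.06 * np.log10(N))  # ~18.7, the "19 clicks" diameter
print(0.53 * N**0.92)             # ~8e7 documents for a string-matching robot
```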
The uncovered scale-free nature of the link distributions indicates that collective phenomena play an unsuspected role in the development of the www, requiring us to look beyond the traditional random graph models. A better understanding of the web topology, aided by modeling efforts, is crucial in developing search algorithms and in designing strategies for making information widely accessible on the www. The good news is that, owing to the surprisingly small diameter of the web, all that information is just a few clicks away.
Réka Albert, Hawoong Jeong and Albert-László Barabási
Department of Physics,
University of Notre-Dame, Notre Dame,
Indiana 46556, USA
email:[email protected]
# Synthetic spectra of H Balmer and HeI absorption lines. I: Stellar Library
## 1 Introduction
The dominant characteristics of the optical spectrum of starburst galaxies are its nebular emission lines. These lines are formed in the interstellar medium surrounding the starburst, which is photoionized by photons emitted by stars with masses higher than 10 $`M_{\odot }`$. These massive stars dominate the ultraviolet and contribute significantly to the optical continuum, which comes mainly from A, B and O stars. The most conspicuous features of the spectra of these early-type stars are their strong hydrogen Balmer and neutral helium absorption lines. In the spectrum of a starburst, the nebular emission lines are superimposed on the stellar absorption lines, and usually emission dominates over absorption. However, the contribution of the underlying absorption becomes increasingly important for the higher terms of the Balmer series (H$`\gamma `$, H$`\delta `$, …) and some of the HeI lines ($`\lambda `$ 4471, 4387, 4026, …). Very often, the effect of the stellar population is also seen as absorption wings in the H$`\beta `$, H$`\gamma `$ and H$`\delta `$ lines. The observation of these lines with high spectral resolution makes it possible to estimate the contribution of the underlying absorption. On the other hand, the analysis of the profiles of the high-order Balmer series lines, which are dominated by absorption instead of emission, can yield information on the properties of the stellar content of starbursts and on their evolutionary stage. Therefore, high-resolution absorption profiles of the high-order Balmer and He I lines, covering a wide range in effective temperature and gravity, are needed to predict the composite stellar spectra of starburst galaxies.
Observational and synthetic stellar spectra are available in the literature. However, the observational atlases have only intermediate or low spectral resolution (Burstein et al. 1984; Jacoby, Hunter & Christian 1984; Walborn & Fitzpatrick 1990; Cananzi, Augarde, & Lequeux 1993), and the synthetic stellar libraries do not cover the high-order Balmer series and HeI lines (Auer & Mihalas 1972; Kurucz 1979, 1993). For these reasons, a grid of stellar atmospheres and synthetic spectra in the wavelength range from 3700 to 5000 Å, with a sampling of 0.3 Å and covering 50000 to 4000 K, has been computed using the code of Hubeny (1988). In this paper, we present the grid of synthetic profiles, which are compared with observations and with synthetic profiles computed by Auer & Mihalas (1972) and Kurucz (1993). In a companion paper (González Delgado, Leitherer & Heckman 1999; hereafter paper II), evolutionary synthesis profiles of H and He absorption lines for star-forming regions are presented.
## 2 Stellar library
### 2.1 The Grid
The grid includes the synthetic stellar spectra of the most relevant H and He I lines from 3700 to 5000 Å, in five different wavelength ranges (Table 1). H$`ϵ`$ is not synthesized because it coincides with the CaII H line, which shows a very strong interstellar component in individual stars and in the integrated spectrum of a galaxy. HeII lines, He I $`\lambda `$5876, HeI $`\lambda `$6678 and H$`\alpha `$ are not synthesized because in hot stars the profiles of these lines are affected by stellar winds (Gabler et al. 1989; Herrero et al. 1992; Bianchi et al. 1994). However, HeI lines at wavelengths shorter than 5000 Å, H$`\beta `$ and the higher-order terms of the Balmer series are only partially (mainly in the core of the line but not in the wings) or not at all affected by sphericity and winds (Gabler et al. 1989). The spectra span a range of effective temperature from 4000 to 50000 K, with a variable step from 500 K to 5000 K, and a surface gravity log$`g`$=0.0 to 5.0 with a step of 0.5 (Table 2). The metallicity is solar.
The spectra are generated in three different stages with a set of computer programs developed by Hubeny and collaborators (Hubeny 1988; Hubeny & Lanz 1995a; Hubeny, Lanz, & Jeffery 1995). First, the stellar atmosphere is calculated; then, the stellar spectrum is synthesized; and finally, the instrumental and rotational convolutions are performed.
### 2.2 The model atmospheres of the grid
To generate a synthetic stellar spectrum, a model atmosphere is needed. For $`T_{eff}\geq `$ 25000 K, the atmosphere is produced using version 193 of the program TLUSTY (Hubeny 1988; Hubeny & Lanz 1995a,b); for $`T_{eff}\leq `$ 25000 K we use a Kurucz (1993) LTE atmosphere. TLUSTY calculates a plane-parallel, horizontally homogeneous model stellar atmosphere in radiative and hydrostatic equilibrium. The program allows departures from local thermodynamic equilibrium and metal line blanketing, using the hybrid complete linearization and accelerated lambda iteration (CL/ALI) method.
To reduce computational time, non-blanketed non-LTE (NLTE) models are computed with TLUSTY. H and He are considered explicitly. The populations of their levels (9 atomic energy levels of HI, 14 levels of HeI and 14 levels of HeII) are determined by solving the corresponding statistical equilibrium equations. 25 additional atoms and ions contribute to the total number of particles and to the total charge, but not to the opacity.
The NLTE models are computed in three stages. First, an LTE-gray atmosphere is generated. Here, the opacity is independent of wavelength, and the populations of the energy levels are calculated assuming the local value of the temperature and electron density. This model is used as a starting approximation for the LTE model. Finally, the NLTE model is computed, where the gas and the radiation are coupled. Here, departures from LTE are allowed for 39 energy levels. Convection is suppressed in all the models. A depth-independent turbulence velocity of 2 km s<sup>-1</sup> is assumed. Doppler broadening is assumed for all the line transitions. The properties of the atmosphere are calculated at 54 depth points.
We use Kurucz (1993) LTE atmospheres for $`T_{eff}\leq `$ 25000 K because for stars cooler than B1 NLTE effects are not very important. These models are line-blanketed, and we take the models with a turbulence velocity of 2 km s<sup>-1</sup> and solar metallicity.
### 2.3 The synthetic profiles
The synthetic spectra are computed with the program SYNSPEC (Hubeny, Lanz, & Jeffery 1995). This program reads the input model atmosphere, either calculated by TLUSTY or from the Kurucz models, and solves the radiative transfer equation, wavelength by wavelength in a specified spectral range. The program also uses an input line list that contains the transitions in the six specified wavelength ranges that we synthesize here. The line list has a format similar to the Kurucz & Peytremann (1975) tables.
The continuum opacity is calculated exactly the same way as in the atmosphere model. For $`T_{eff}\geq `$ 25000 K, only continuum opacities from the atomic energy levels of H and He are considered. The opacity sources are: 1) photoionizations from all the explicit levels (39 levels in the TLUSTY models); 2) free-free opacity for all the explicit ions (HI, HeI and HeII in the TLUSTY models and H, Mg, Al, Si and Fe in the Kurucz models); 3) electron scattering. The line opacity is calculated from the line list. Although the structure of the atmosphere (for $`T_{eff}\geq `$ 25000 K) is calculated with only H and He, the synthetic spectra are calculated assuming line blanketing from elements with atomic number Z$`\leq `$28. The atoms and transitions that were treated in NLTE in the atmosphere models are also treated in NLTE by SYNSPEC. For those lines that originate between levels for which the populations are calculated in LTE, SYNSPEC uses an approximate NLTE treatment based on the second-order escape probability theory (Hubeny et al. 1986).
The line profiles have the form of a Voigt function and take into account the effects of natural, Stark, Van der Waals and thermal Doppler broadening. The broadening parameters for H are those tabulated by Vidal, Cooper, & Smith (1973) for the first four members of the Balmer series, and by Butler (private communication) up to H10. For HeI, the line broadening tables for $`\lambda `$4471 are from Barnard, Cooper, & Smith (1974), and those for $`\lambda `$4026, 4387, 4922 are from Shamey (1969). The maximum distance between two neighboring frequency points for evaluating the spectrum is 0.01 Å. A turbulence velocity of 2 km s<sup>-1</sup> is assumed.
Finally, the program ROTIN performs the rotational and instrumental convolutions. An instrumental profile of 0.01 Å FWHM and a rotational velocity of 100 km s<sup>-1</sup> are assumed, the latter being typical of the rotational velocities of massive stars (Conti & Ebbets 1977). The final sampling of the spectra is 0.3 Å.
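The rotational convolution step can be sketched with a standard Gray-type rotation kernel; the limb-darkening coefficient is an assumption, since the text does not quote one, and the H$`\gamma `$ wavelength is used only as an example.

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light [km/s]

def rotational_kernel(lam0, vsini, dlam, epsilon=0.6):
    """Gray-type rotational broadening profile, normalized for use as a
    discrete convolution kernel; epsilon is the (assumed) linear
    limb-darkening coefficient."""
    dlam_max = lam0 * vsini / C_KMS        # maximum Doppler shift [A]
    x = dlam / dlam_max
    kernel = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    k1 = 2.0 * (1.0 - epsilon) * np.sqrt(1.0 - x[inside] ** 2)
    k2 = 0.5 * np.pi * epsilon * (1.0 - x[inside] ** 2)
    kernel[inside] = (k1 + k2) / (np.pi * dlam_max * (1.0 - epsilon / 3.0))
    return kernel / kernel.sum()

# Kernel for v sin i = 100 km/s around H-gamma, sampled every 0.01 A;
# a normalized profile would then be broadened with np.convolve(...).
dlam = np.arange(-3.0, 3.0 + 0.01, 0.01)
kernel = rotational_kernel(4340.47, 100.0, dlam)
```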
## 3 Results
The synthetic spectra of the grid are available for retrieval at http://www.iaa.es/ae/e2.html and http://www.stsci.edu/science/starburst/. Figure 1 compares the spectra for $`T_{eff}`$=25000 K and log$`g`$=4.0 computed with TLUSTY+SYNSPEC assuming NLTE with those computed with SYNSPEC and Kurucz LTE atmospheres. The two spectra are essentially identical; this justifies the use of LTE stellar atmosphere models for $`T_{eff}\leq `$ 25000 K. Figure 2 shows the spectra of a typical O ($`T_{eff}`$=40000 K, log$`g`$=4.0), B ($`T_{eff}`$=20000 K, log$`g`$=4.0) and A ($`T_{eff}`$=10000 K, log$`g`$=4.0) star. The most important lines in the spectrum are labelled. The equivalent widths of the most important H and HeI lines have been measured in the synthetic spectra. Several measurements were made with different spectral windows. This allows a calibration of the contribution of weaker lines to the spectral index that characterizes each of the Balmer lines. The spectral index was measured with the continuum set to 1 (this represents the real equivalent width of the lines), but also from a pseudo-continuum determined by fitting a first-order polynomial (a third-order one for the spectral range 3700-3900 Å) to the windows defined in Table 3. This spectral index simulates the equivalent width we can measure in observed stellar spectra. Tables 4 to 7 show the equivalent widths of H$`\delta `$, H8, HeI $`\lambda `$4471 and HeI $`\lambda `$3819, respectively.
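The spectral-index measurement just described can be sketched as follows; the window limits are placeholders standing in for those of Table 3, and the integration is a simple trapezoidal sum.

```python
import numpy as np

def equivalent_width(wave, flux, line_window, cont_windows, order=1):
    """Pseudo-continuum fit through side windows followed by an
    integration of the line depth (positive values = absorption)."""
    mask = np.zeros_like(wave, dtype=bool)
    for lo, hi in cont_windows:
        mask |= (wave >= lo) & (wave <= hi)
    continuum = np.polyval(np.polyfit(wave[mask], flux[mask], order), wave)

    lo, hi = line_window
    inline = (wave >= lo) & (wave <= hi)
    depth = 1.0 - flux[inline] / continuum[inline]
    # Trapezoidal integration of (1 - F/Fc) over the line window.
    return np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(wave[inline]))

# Hypothetical windows around H-delta (placeholders, not Table 3 values):
# equivalent_width(wave, flux, (4092, 4112), [(4060, 4080), (4125, 4145)])
```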
Figure 3 shows the equivalent widths of H$`\beta `$, H$`\delta `$ and two of the high-order terms of the Balmer series (H8 and H9) as a function of $`T_{eff}`$ for main-sequence stars (log$`g`$=4.0). The maximum equivalent width occurs at $``$9000 K, corresponding to an early-type A star. The plot indicates that the high-order Balmer series lines show, like the lower terms, a strong dependence on $`T_{eff}`$ and on gravity (Figure 4); thus, they are an efficient tool for the determination of the fundamental stellar parameters. Figure 5 shows the equivalent widths of the HeI lines as a function of $`T_{eff}`$ for main-sequence stars (log$`g`$=4.0) (see also Tables 6 and 7 for HeI $`\lambda `$4471 and HeI $`\lambda `$3819, respectively). They also show a strong dependence on $`T_{eff}`$ and gravity (Figure 6). The maximum is at $``$20000 K, corresponding to an early-type B star. The increase of the equivalent width at $`T_{eff}\leq `$ 10000 K is not due to HeI absorption, but to the contribution of some metal lines (FeII $`\lambda `$4385, FeI $`\lambda `$4920, FeII $`\lambda `$4924) which fall in the windows where the equivalent width is measured.
## 4 Test of the profiles
In this section the synthetic profiles of some of the H and HeI lines are compared to the Auer & Mihalas (1972) and Kurucz (1993) profiles. They are also compared with observations. The goal is to test the advantages and limitations of our grid with respect to previous work.
### 4.1 Comparison to Auer-Mihalas models
Auer & Mihalas (1972) computed non-blanketed NLTE profiles of H (P$`\alpha `$, H$`\alpha `$, H$`\beta `$, H$`\gamma `$ and L$`\alpha `$) and HeI ($`\lambda `$4026, 4387, 4471 and 4922) for $`T_{eff}`$ from 30000 to 50000 K, and log$`g`$=3.3 to 4.5. Figure 7 compares the profiles of H$`\gamma `$ for $`T_{eff}`$=30000 and 40000 K and log$`g`$=4.0 with the spectra of the grid. The profiles are very similar; the small discrepancy comes from the effect of rotation (we assume a rotation of 100 km s<sup>-1</sup>, whereas the Auer-Mihalas models assume none) and from the inclusion of atomic transitions in the spectral range we have synthesized. The contribution of these lines makes the profile of H$`\gamma `$ asymmetric in our spectra but not in the Auer-Mihalas profiles. This represents an improvement of our synthetic profiles over those of Auer & Mihalas because our grid is more similar to the observed spectra of stars (see for example Walborn & Fitzpatrick 1990).
Figure 8 shows the profile of HeI $`\lambda `$4471 for $`T_{eff}`$=40000 K and log$`g`$=4.0. The apparent disagreement comes from the effect of the rotational convolution performed in the grid. In fact, the difference between the profiles disappears when the comparison is done between non-convolved profiles. The effect of the rotational convolution is more drastic in the profile of the HeI lines than in the Balmer lines because HeI lines are weaker and narrower than Balmer lines.
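To see why the convolution penalizes the narrow HeI lines more, one can broaden a normalized synthetic spectrum with a standard rotational profile (e.g. Gray 1992) at the 100 km s<sup>-1</sup> adopted for the grid. The sketch below is an illustration under stated assumptions: a uniform, odd-length wavelength grid and a linear limb-darkening coefficient of 0.6, neither of which is specified by the text.

```python
import numpy as np

def rotational_kernel(dlam, lam0, vsini, eps=0.6, c=2.99792458e5):
    """Rotational broadening profile on a grid of offsets dlam (A)
    around line center lam0 (A); vsini in km/s."""
    dlam_max = lam0 * vsini / c            # maximum Doppler shift
    x = dlam / dlam_max
    kern = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    kern[inside] = (2.0 * (1.0 - eps) * np.sqrt(1.0 - x[inside] ** 2)
                    + 0.5 * np.pi * eps * (1.0 - x[inside] ** 2))
    return kern / kern.sum()               # normalize to unit area

def broaden(wave, flux, vsini, eps=0.6):
    """Convolve a continuum-normalized spectrum (uniform, odd-length
    wavelength grid) with the rotational profile."""
    center = wave[wave.size // 2]
    kern = rotational_kernel(wave - center, center, vsini, eps)
    # Convolve the line depths so the continuum stays at unity
    return 1.0 - np.convolve(1.0 - flux, kern, mode="same")
```

At 4471 Å and 100 km s<sup>-1</sup> the kernel half-width is only about 1.5 Å, so a narrow HeI line is smeared appreciably while the broad Balmer wings are nearly unchanged, which is the behavior described above.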
### 4.2 Comparison to Kurucz models
For $`T_{eff}`$ below 25000 K, the grid has been generated using the stellar atmosphere structure calculated by Kurucz (1993). Kurucz also synthesizes the profiles of H$`\alpha `$, H$`\beta `$, H$`\gamma `$ and H$`\delta `$. However, his synthesis does not include the atomic transitions that fall within $`\pm `$100 Å of the center of the Balmer lines. Figure 9 compares the profiles of our grid with those of Kurucz (1993). They are very similar; the apparent disagreement below $`T_{eff}\sim `$7000 K is due to the effect of the rotational convolution and the inclusion of metallic lines in the spectral ranges synthesized here. Note, however, that the profile of the Balmer lines before the rotational convolution is performed matches the profile synthesized by Kurucz.
Our synthetic spectra assume fewer sources of continuum and line opacity than the Kurucz stellar atmospheres. This inconsistency leads to differences between the continuum calculated by TLUSTY+SYNSPEC and the low resolution spectra generated by Kurucz. However, the difference is less than 8$`\%`$ over the whole spectral range synthesized here if $`T_{eff}`$ is above $`\sim `$7000 K. For lower effective temperatures the deviation is larger, since the shape of the continuum also changes. However, the normalized profiles of the Balmer lines of our grid and of Kurucz (1993) are the same for all effective temperatures ($`T_{eff}`$ below 25000 K). Thus, we can confidently use the normalized profiles of the grid in our evolutionary synthesis code, where these profiles are calibrated in absolute flux.
### 4.3 Comparison to observations
Figure 10 compares the synthetic profiles from 3700 to 3900 Å with the spectra of observed stars from the stellar library of Jacoby, Hunter, & Christian (1984). The observed spectra are normalized assuming a pseudo-continuum that was defined by fitting a third order polynomial to the windows defined in Table 3. The synthetic spectra are binned to the resolution of the observations ($`\sim `$4 Å). There is good agreement between the synthetic and observed profiles. The discrepancy between the observations of the O5V star and the synthetic profiles at $`\lambda `$3760 Å is probably due to uncertainties in the wavelength and flux calibration of the data.
We have also compared the synthetic spectra with two stars observed by one of us for a different project. The stars HD24760 and HD31295 are classified as B0.5V and A0V, respectively. They were observed with the 2.5m Isaac Newton Telescope at the Roque de los Muchachos Observatory using the 500 mm camera of the Intermediate Dispersion Spectrograph and a TEK CCD detector. The dispersion is 0.4 Å/pix. Figures 11 and 12 compare the observed normalized spectra with synthetic profiles of effective temperature 27500 K and 9500 K, respectively. In both cases, the gravity is log$`g`$=4.0. The agreement between observations and synthetic spectra is very good, even though no optimization of the fit was attempted; we have simply taken the spectra of our grid with values of $`T_{eff}`$ and gravity closest to the characteristic values of B0.5V and A0V stars.
### 4.4 Effect of the metallicity
Opacities play an important role in determining the properties of the structure of the atmosphere. Balmer lines, however, are not much affected by the abundance of elements heavier than H and He if the temperature is higher than 7000 K. A comparison of the Balmer profiles synthesized by Kurucz (1993) for metallicities between solar and one-tenth solar shows that the profiles are essentially the same over this range of metallicity if $`T_{eff}`$ is above $`\sim `$7000 K. Thus, we can use our normalized spectra to predict the synthetic spectra of a stellar population younger than 1 Gyr if the metallicity is higher than one-tenth solar.
## 5 Summary
We have computed a grid of stellar atmosphere models and synthetic spectra covering five spectral ranges between 3700 and 5000 Å that include the profiles of the Balmer (H13, H12, H11, H10, H9, H8, H$`\delta `$, H$`\gamma `$ and H$`\beta `$) and HeI ($`\lambda `$3819, $`\lambda `$4009, $`\lambda `$4026, $`\lambda `$4120, $`\lambda `$4144, $`\lambda `$4387, $`\lambda `$4471, and $`\lambda `$4922) lines with a sampling of 0.3 Å. The grid spans effective temperatures 4000 K $`\le T_{eff}\le `$ 50000 K and gravities 0.0 $`\le `$ log$`g`$ $`\le `$ 5.0 at solar metallicity.
The spectra are generated using a set of computer programs developed by Hubeny et al. (1995a,b). The profiles are generated in three stages. First, for $`T_{eff}`$ of 25000 K and above, we use the TLUSTY code (Hubeny 1988) to compute NLTE stellar atmosphere models. We treat 9 energy levels of HI, 14 levels of HeI and 14 levels of HeII explicitly in NLTE. For $`T_{eff}`$ below 25000 K, we take the Kurucz (1993) LTE atmospheres. In the second stage, the synthetic spectra are produced with the program SYNSPEC (Hubeny et al. 1995b), which solves the radiative transfer equation using as input the model atmosphere and a line list that contains information about atomic transitions in the relevant spectral wavelength ranges. In the third stage, the spectra are convolved with a rotational profile (100 km s<sup>-1</sup>). Although the NLTE models generated with TLUSTY are pure H and He stellar atmospheres, SYNSPEC includes line blanketing from elements heavier than H and He.
Our grid of synthetic spectra has limitations due to the inconsistencies between the continuum and line opacities assumed in the stellar atmosphere and those assumed in calculating the transfer of the lines, which start to be important below $`T_{eff}\sim `$7000 K. However, the profiles reproduce very accurately the Balmer lines generated by Kurucz (1993), even at $`\sim `$7000 K. The profiles of the HeI lines are also very similar to those generated by Auer & Mihalas (1972) for $`T_{eff}`$ between 30000 and 50000 K.
This work was motivated by the importance of the HeI and the high-order Balmer lines in the study of the stellar population of galaxies with active star formation. These galaxies show Balmer and He lines in emission produced in their HII regions that are superimposed on the stellar absorption lines. However, the equivalent width of the nebular lines decreases quickly with decreasing wavelength, whereas the equivalent width of the stellar absorption lines is roughly constant with wavelength (e.g. the equivalent width of H8 ranges between 2 and 12 Å, and that of H$`\beta `$ between 2.5 and 14 Å). Thus, the high-order Balmer series members are less contaminated by emission, and they can be very useful for studying the underlying stellar population in starburst galaxies. The evolutionary synthesis profiles of H and He absorption lines for starbursts and post-starbursts are presented in paper II. The full set of the resulting models is available at our websites http://www.iaa.es/ae/e2.html and http://www.stsci.edu/science/starburst/, or on request from the authors at [email protected].
Acknowledgments
We thank Ivan Hubeny for his help during the initial phase of this project and for making his code available to the community, and Enrique Pérez, Tim Heckman and Angeles Díaz for their helpful suggestions and comments. The spectra of some observed stars come from data of an ongoing project to build an observational library of stellar spectra in collaboration with Angeles Díaz, Pepe Vílchez and Enrique Pérez. This work was supported by the Spanish DGICYT grant PB93-0139.
# SEEKING THE LOCAL CONVERGENCE DEPTH. V. TULLY-FISHER PECULIAR VELOCITIES FOR 52 ABELL CLUSTERS.
## 1 Introduction
Deviations from smooth Hubble flow arise as a result of large scale density fluctuations. Qualitatively, a “convergence depth” is the distance out to which significant contributions to a galaxy’s peculiar motion are made. In linear theory, the fraction of the peculiar velocity of a galaxy, contributed by the mass distribution within a radius $`R`$, can be written as
$$𝐕(R)=\frac{H_0\mathrm{\Omega }_0^{0.6}}{4\pi }\int \delta (𝐫)\frac{\widehat{𝐫}}{r^2}W(R)d^3𝐫,$$
(1)
where the Hubble constant $`H_0=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_0`$ is the cosmological matter density parameter, $`\delta `$ is the mass overdensity, and $`W(R)`$ is a window function of width $`R`$ (a Gaussian or top-hat, for example) centered at $`r`$=0 (Peebles 1993). If the distribution of matter approximates homogeneity on larger scales, then contributions to the peculiar velocity will eventually taper off with increasing distance.
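A direct discretization of Equation (1) makes the idea of a convergence depth concrete: evaluate the partial sum within a growing radius $`R`$ and watch $`𝐕(R)`$ flatten once the sampled distribution approaches homogeneity. The sketch below assumes a top-hat window and an illustrative $`\mathrm{\Omega }_0=0.3`$; the toy density grid is left to the reader.

```python
import numpy as np

def peculiar_velocity(R, positions, delta, cell_vol, H0=100.0, Omega0=0.3):
    """Linear-theory peculiar velocity (km/s) at the origin contributed
    by the overdensity field within radius R (top-hat window), as a
    direct discretization of Equation (1).

    positions : (N, 3) cell centers in Mpc/h
    delta     : (N,) overdensity in each cell
    cell_vol  : volume element d^3r of one cell, in (Mpc/h)^3
    """
    r = np.linalg.norm(positions, axis=1)
    inside = (r > 0) & (r <= R)                 # top-hat window W(R)
    rhat = positions[inside] / r[inside, None]  # unit vectors toward cells
    integrand = delta[inside, None] * rhat / r[inside, None] ** 2
    prefac = H0 * Omega0 ** 0.6 / (4.0 * np.pi)
    return prefac * integrand.sum(axis=0) * cell_vol
```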
Observational estimates of the local convergence depth are facilitated through measurements of the reflex motion of the Local Group with respect to spherical shells of increasing radii. The first such program was carried out by Rubin et al. (1976) on an all-sky sample of 96 Sc galaxies between redshifts of 3500 and 6500 km s<sup>-1</sup>. They reported that the motion of the Local Group with respect to the shell of galaxies was 454$`\pm `$125 km s<sup>-1</sup> towards $`(l,b)=(163^{\circ },11^{\circ })`$, significantly different from the apex of the CMB dipole, suggesting a large bulk flow for the shell and a convergence depth significantly larger than $`\sim `$50$`h^{-1}`$ Mpc. Subsequent work in this area has yielded conflicting results.
The situation can be broadly divided into two main views. A picture of a small convergence depth (c$`z<5000`$ km s<sup>-1</sup>) was first suggested by Tammann & Sandage (1985) and subsequently by Dressler et al. (1987) and Lynden–Bell et al. (1988). This picture is also supported by early measurements of the distribution of IRAS and optical galaxies (Lahav, Lynden-Bell & Rowan-Robinson 1988; Lynden-Bell, Lahav & Burstein 1989; Strauss et al. 1992; Hudson 1993). It should be mentioned, however, that some IRAS dipoles suggest significant contributions to the Local Group motion arise from the distribution of objects between 5000 and 10,000 km s<sup>-1</sup> (e.g. Rowan-Robinson et al. 1990). The latter studies rely on flux dipoles or on a peculiar velocity field that is solved for iteratively, initially supposing that recessional velocities are indicative of distance and that light traces mass. The observational programs that utilize this technique find that 80% or more of the Local Group motion derives from matter within $`\sim `$50$`h^{-1}`$ Mpc.
Proposals for a large convergence depth (c$`z>10,000`$ km s<sup>-1</sup>; Scaramella, Vettolani & Zamorani 1994; Tini-Brunozzi et al. 1995; Branchini & Plionis 1996; Plionis & Kolokotronis 1998), mainly resulting from a similar analysis of the distribution of clusters of galaxies, suggest that much of our motion may be produced by mass concentrations as far as $`\sim `$13,000 km s<sup>-1</sup> (e.g. the “Shapley Supercluster”). The latter view is corroborated by reports of bulk motion in the local Universe (within 6000 km s<sup>-1</sup>) via Tully–Fisher (TF) measurements by Willick (1990), Courteau et al. (1993) and Mathewson, Ford & Buckhorn (1992), and by the recent analysis of the dipole of cluster brightest ellipticals by Lauer and Postman (1994, henceforth LP). The LP claim, based on the peculiar motions of all Abell clusters to 15,000 km s<sup>-1</sup>, suggests the local volume of space within $`\sim `$100$`h^{-1}`$ Mpc is traveling towards $`(l,b)=(343^{\circ },+52^{\circ })`$ at 689 km s<sup>-1</sup>. The disagreement between these competing views of the convergence depth scale is wide, with important implications for cosmological models, which are generally unable to accommodate bulk flows on scales as large as those implied by the large convergence depth camp (e.g. Gramann et al. 1995).
Fresh work from Hudson et al. (1999) and Willick (1999) challenges the direction of the LP bulk flow vector. Hudson et al. employ the fundamental plane relation for some 700 ellipticals in 56 clusters (3000 $`\le `$ c$`z`$ $`\le `$ 14,000 km s<sup>-1</sup>) to find a bulk flow of 630$`\pm `$200 km s<sup>-1</sup> in the direction $`(l,b)=(260^{\circ },1^{\circ })`$. A different bulk motion of $`\sim `$700$`\pm `$250 km s<sup>-1</sup>, towards $`(l,b)=(272^{\circ },20^{\circ })`$, originating from TF measurements of 172 cluster galaxies (9000 $`\le `$ c$`z`$ $`\le `$ 13,000 km s<sup>-1</sup>) and 72 other galaxies (c$`z`$ $`<`$ 30,000 km s<sup>-1</sup>), is claimed by Willick. Such large bulk flows are not consistent with other recent observational work. Riess, Press & Kirshner (1995) used 13 SN Ia observations and found evidence for a small local bulk flow. In a contribution based on TF distances of about 2000 galaxies within 9500 km s<sup>-1</sup>, Giovanelli et al. (1998a) similarly report evidence against the existence of a large scale local flow, and support for a relatively small convergence depth. Many of the aforementioned studies cannot convincingly exclude the existence of large-scale bulk flows, either because their sampling is too sparse (Watkins & Feldman 1995) or because it is of limited depth. This is particularly important in view of the claims that asymptotic convergence of the Local Group reflex motion may only be reached at distances well in excess of 10,000 km s<sup>-1</sup> (Scaramella, Vettolani & Zamorani 1994, Tini Brunozzi et al. 1995, and Branchini & Plionis 1996). This work aims at the direct determination of the TF relation for an all-sky cluster set extending to distances exceeding the highest reported values of the convergence depth.
In addition to ensuring that our TF relation is valid for usefully large distances, a second issue of concern regards the amplitude of systematic errors in the TF template. The Giovanelli et al. (1998a) sample of peculiar velocities obtained using the TF method is referred to a template relation based on the SCI sample, a collection of 782 TF measurements in the fields of 24 separate clusters between 1000 and 9000 km s<sup>-1</sup> (Giovanelli et al. 1997a,b; hereafter G97a,b). Though the relative proximity of the SCI allows a broad stretch of observable galactic properties and thus makes it an ideal sample to study several characteristics of the TF template relation, it has limits as to how accurately the relation’s zero point can be pinned down. A larger and deeper cluster sample has two main advantages. First, the increased number of clusters reduces the impact of statistical “shot noise.” In addition, since the magnitude offset produced in the TF diagram by a given peculiar velocity decreases with the target distance, the scatter produced by peculiar velocities of distant clusters about the template relation is reduced; thus they are better suited to determining the template relation’s zero point. Here we present the results of a program designed to probe the large-scale peculiar velocity field to $`\sim `$200$`h^{-1}`$ Mpc, consisting of spectroscopic and photometric data for an all-sky sample of 522 galaxies from the fields of 52 clusters between $`\sim `$50 and 200$`h^{-1}`$ Mpc (hereafter the ‘SCII’ sample). (The SCI and SCII are complementary samples of cluster TF data; the SFI is a completely independent sample of TF data for some 2000 field galaxies. Details on the SCI and SFI samples can be found in Giovanelli et al. 1998a,b and references therein.)
The overall scatter of the $`I`$ band TF relation of about one-third of a magnitude translates to an uncertainty of 15% in redshift-independent distance measurements. This means that for an individual galaxy at, say, 15,000 km s<sup>-1</sup>, the method is able to predict the distance to within 2250 km s<sup>-1</sup>. This value is considerably larger than the typical value of peculiar velocities, which are of order 500 km s<sup>-1</sup> or less. Even if 50 objects per cluster were to be measured, the distance of a cluster at 15,000 km s<sup>-1</sup> would not be characterized to better than $`2250/\sqrt{50}\approx 300`$ km s<sup>-1</sup>; our measurements include many fewer galaxies per cluster field, typically 10, only some of which turned out to be cluster members. The main purpose of this study was thus not the determination of accurate individual peculiar velocities of remote clusters, but rather to combine measurements in many clusters to obtain a global solution for the dipole of the velocity field.
This is the fifth paper in our series on the local convergence depth. The observational data are presented in Dale et al. (1997, 1998, 1999; Papers I, II, and IV; the data can be obtained by contacting the first author). The core result of this work, that the local dipole flow to about 200$`h^{-1}`$ Mpc is consistent with a null bulk motion, is described in Dale et al. (1999; Paper III). Details of the sample and its selection are covered in Section 2 of this work, while Section 3 covers the construction of the universal TF template relation. Results for the peculiar velocity sample are given in Section 4, and we summarize our findings in Section 5.
## 2 Sample Selection
Clusters of galaxies are used as increasingly versatile tools in cosmology. Of particular relevance here, the peculiar velocity distribution of clusters reliably traces the velocity distribution of the underlying smoothed matter field (Bahcall et al. 1994a,b; Gramann et al. 1995). Furthermore, though observations of clusters of galaxies only sparsely sample the large scale velocity field, they can do so more accurately than individual galaxies can. This advantage arises because measurements of many galaxies within a single bound system can be made. As peculiar velocities are driven by gravitationally growing density fluctuations, precise comparisons of large scale peculiar velocities can be made with those predicted by cosmological theories (Watkins & Feldman 1995; Cen et al. 1994; Feldman & Watkins 1994; Strauss et al. 1995; Croft & Efstathiou 1994; Borgani et al. 1997).
### 2.1 Sample Definition
Clusters of galaxies are practical TF targets. Their outskirts are well populated by spiral galaxies; thus a small number of wide-field images taken with a modest-sized telescope can effectively “map” a cluster, and those images will likely contain numerous TF candidates.
A second consideration that suggests the use of clusters in TF experiments is the determination of the TF template. As will be outlined in Section 3.1, the template is preferentially measured within a cluster, or determined as an average of templates from measurements in separate clusters. The template slope is best measured if a broad dynamic range of TF parameters can be observed, preferentially obtained from a relatively nearby sample. In contrast, the kinematical zero point is better characterized at higher redshifts, where cosmic peculiar motions play a smaller role in shifting objects away from the template. Our motive of improving the TF zero point accuracy plays an important role in characterizing the redshift distribution of our sample.
A third advantage to using clusters of galaxies in TF work is the $`\sim \sqrt{N}`$ increase in statistical accuracy per cluster they afford if $`N`$ TF measurements per cluster are available – the estimate of a system’s peculiar motion is more accurate when information from multiple objects is used, the distance differential between galaxies in a cluster being negligible for our sample. However, if an all-sky survey of peculiar motions is the goal, concentrating observations on a single cluster or on a small number of clusters parallels an increase in overall sample sparseness. The choice between a densely sampled volume of low accuracy peculiar velocities and a sparse sample of accurate peculiar velocities ultimately depends on the particular goals of the study.
We early on investigated what sample characteristics would facilitate the most accurate determination of the TF zero point and the local bulk flow. To ascertain the optimal distribution of clusters and number of galaxies per cluster to observe, we relied on numerical simulations. The only limitation enforced was on the total number of objects that could be observed, set by the target accuracy and a reasonable project timescale. A wide range in the number of clusters and the number of objects per cluster was explored. A variety of models of the peculiar velocity field (multi-attractor, linear bulk flows, quiescent, etc.) was imposed, and different models for the shape of the Zone of Avoidance (ZoA) were considered. These studies profited from numerical simulations of various types of cold dark matter models kindly provided by S. Borgani (see Paper III for further details). TF errors were assigned using random Gaussian deviates of the scatter distribution in G97b. The results, which took into consideration that clusters of galaxies have positions that are correlated in space (and therefore a random choice of $`N`$ clusters does not guarantee that $`N`$ independent points are sampled), suggested that about 50 clusters of galaxies needed to be observed, with measurements of at least 7-8 TF galaxy distances in each.
### 2.2 Cluster Selection
The initial characterization of the SCII cluster sample predates this study and originates from preparatory work done by R. Giovanelli for the SCI TF study. Clusters were selected using the Abell rich cluster catalog as a guide (Abell, Corwin & Olowin 1980; hereafter ACO). Expediency played an important role in choosing the parent sample of clusters, of which this sample is a subset. Nearby clusters (c$`z\lesssim `$ 10,000 km s<sup>-1</sup>) with preexisting H I velocity widths and $`I`$ band fluxes were favored, and the more distant clusters (c$`z\gtrsim `$ 10,000 km s<sup>-1</sup>) that already had a large number of redshifts available were likewise strongly considered, as that availability improved the prospects for their kinematical characterization. Table 1 lists the main parameters of the 52 chosen clusters for this work.
The parameters listed include:
Column 1: Standard name according to Abell/ACO catalog.
Columns 2 and 3: Adopted coordinates of the cluster center, for the epoch 1950; they are obtained from ACO, except for the entries A1983b and A2295b, systems found to be slightly offset from A1983 and A2295 in both sky position and redshift.
Columns 4 and 5: systemic velocities in the heliocentric and in the cosmic microwave background reference frame, respectively, where we assume the motion of the Sun with respect to the CMB is 369.5 km s<sup>-1</sup> towards $`(l,b)=(264.4^{\circ },48.4^{\circ })`$ (Kogut et al. 1993). For all the clusters we derive a new systemic velocity, combining the redshift measurements available in the NED database (the NASA/IPAC Extragalactic Database is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA) with our own measurements. An estimated error for the systemic velocity is parenthesized after the heliocentric figure.
Column 6: the number of cluster member redshifts used in determining systemic velocities.
Column 7: Abell richness class.
Column 8: Bautz-Morgan code (Bautz & Morgan 1970) as listed in the ACO.
Figure 1 displays the sample in Galactic coordinates. The symbol sizes are inversely proportional to the cluster redshifts; two examples are given in the lower left for scale.
An alternative display of the sample is shown in Figure 2, a stereographic view of the sample in Galactic Cartesian coordinates.
The dashed circles have radii of 5, 10, and 15 thousand km s<sup>-1</sup> and the solid lines in the \[X,Z\] and \[Z,Y\] plots are for $`|b|=20^{\circ }`$ and identify the ZoA. This distribution recalls our first criterion in selecting the sample: the data set should uniformly sample as much of the sky as prudently feasible. Since the main thrusts of this work are to recover a bulk flow measurement and to accurately determine a kinematical offset even in the presence of such a flow, an all-sky sample is required. Unfortunately, the paucity of clusters and the large and uncertain Galactic extinction in the direction of the ZoA prohibit us from sampling that portion of the sky. In addition, the likelihood of foreground star contamination increases dramatically for objects at low Galactic latitudes. Our formal criterion was to select clusters at $`|b|\ge 20^{\circ }`$ over the whole sky.
A histogram of the SCII redshift distribution is presented in Figure 3. The range of redshifts runs from 5000 km s<sup>-1</sup> to 25,000 km s<sup>-1</sup>, with fully 90% of the clusters lying between 7000 km s<sup>-1</sup> and 19,000 km s<sup>-1</sup>. The average CMB redshift of the SCII clusters is 12,050 km s<sup>-1</sup> when clusters are weighted according to the square root of the number of TF measurements available. It is evident from Figure 3 that our sample’s distance range provides an opportunity to effectively test claims of bulk flow motions on scales of 100$`h^{-1}`$ Mpc.
### 2.3 Galaxy Selection
To determine locations of target fields to be imaged, we visually scanned the Palomar Observatory Sky Survey plates for regions in the clusters containing promising disk systems appropriate for TF work. The selection of target galaxies for this study stemmed directly from the images obtained. A discussion of the imaging for this project is contained in Papers I, II, and IV. Each image in clusters chosen for spectroscopy was searched for spiral disks with the following properties:
(i) disk inclinations $`\ge 40^{\circ }`$;
(ii) lack of dominating bulges – bulgy disk systems tend to be gas deficient and thus undesirable for emission line spectroscopy. Moreover, morphological homogeneity is preferable, to limit the effects of morphological bias (Section 3.3.5);
(iii) no apparent warps/interacting neighbors; and
(iv) no nearby bright stars which may affect flux measurements.
It should be noted that the above properties served as guidelines for selecting TF candidates, but occasionally the guidelines were not strictly followed, as our hand was sometimes forced by the vagaries of telescope allocations and weather conditions. Coordinates and position angles for approximately 2250 TF candidates were obtained from the Digitized Sky Survey (produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166, based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope) and are accurate to better than 2″. The full list is available from D.A.D. upon request.
A TF display of the collective raw data can be seen in Figure 4. The term “raw” implies that the absolute magnitudes $`M_I`$ and the rotational velocity widths $`W`$ are corrected for all the effects described in Papers I and II (extinction, inclination, etc), but the effects of cluster incompleteness bias and peculiar motion are not accounted for. Included is the template relation obtained later in Section 3.4 (cf. Equation 13). Hereafter we will use $`y=M_I-5\mathrm{log}h`$ and $`x=\mathrm{log}W-2.5`$.
## 3 The Tully-Fisher Relation
The most widely applied methods for the determination of redshift-independent distances of galaxies rely on the combination of photometric (distance dependent) and kinematic (distance independent) parameters. Among them, and arguably the most accurate, are the TF and fundamental plane relations, used for spiral and elliptical galaxies, respectively. With both these methods distances for individual galaxies, and therefore peculiar velocities, are computed by comparison with a fiducial template relation which must be observationally derived. The template relation defines the rest reference frame, against which peculiar velocities are to be measured. The importance of accurately calibrating such a tool cannot be overemphasized. It is possible that the discrepancies between different claims of bulk motions may be partly related to insufficiently well determined template relations.
Ideally, the template employed is tied to a kinematical rest frame. In practice, the template originates from data within the sampled volume itself. We outline here our approach to calibrate the TF relation using our sample of data from 52 rich Abell clusters distributed throughout the sky. As we shall see, the TF zero point is obtained through an iterative process: the computation of the cluster incompleteness bias requires prior knowledge of the TF template zero-point, while the computation of the zero point demands a correction for the cluster incompleteness bias. The process does converge, however, in a small number of iterations.
### 3.1 The Calibration of the Tully-Fisher Relation
For an assumed linear TF relation we need to determine two main parameters: a slope and a magnitude offset, or zero point. The slope of the TF relation is best determined by a sample that maximizes the dynamic range in $`(M,\mathrm{log}W)`$, i.e. one that preferentially includes nearby objects. On the other hand, the magnitude zero point of the relation is best obtained from a sample of distant objects for which a peculiar motion of given amplitude translates into a small magnitude shift. An often-adopted practice to calibrate the TF relation uses one relatively distant cluster of galaxies, for example the rich Coma cluster at c$`z\sim `$7200 km s<sup>-1</sup>. Such a choice relies on the assumption that galaxies within a cluster are essentially at the same redshift – any differences in their radial velocities are attributed to the cluster’s virial stretch. Thus, they all equally participate in the local peculiar velocity field and they should all obey the same local TF relation. Another reason to choose a cluster like Coma for calibration is that, as mentioned above, the selection of a distant object limits the impact of the object’s peculiar velocity $`V`$, at least to the extent that the physical size of the cluster is small in comparison to its mean distance; peculiar velocities introduce relative distortions of redshifts that are larger for nearby objects than for distant ones. In terms of magnitude, if $`V_{\mathrm{pec}}`$$`<<`$c$`z`$, the zero point will be off by $`\mathrm{\Delta }m\approx 2.17V_{\mathrm{pec}}/\mathrm{c}z`$ magnitudes (cf. Equation 14).
There are several problems with the above calibration scheme. First, a “template” cluster needs to have a large sample of spiral galaxies. Second, even for a cluster as distant as Coma, typical cosmic velocities may bias the relation’s zero point. If the cluster were moving at a plausible speed of 500 km s<sup>-1</sup>, then all other estimates of peculiar velocities that use the template will systematically be off by 500(c$`z`$/c$`z_{\mathrm{template}}`$), regardless of statistical uncertainties.
To avoid such systematics in the TF relation, G97a,b use a “basket of clusters” approach. Their template is derived through an iterative procedure that simultaneously determines the TF zero point and the cluster motions of their sample. They assume that the mean peculiar velocity of the clusters farther than c$`z=4000`$ km s<sup>-1</sup> is null, i.e. they zero the monopole of the more distant cluster peculiar velocity distribution function. This approach does not, however, affect the value of the dipole or higher moments, and thus still allows an effective measure of possible bulk flows. The proximity of the SCI sample provides the stretch in galactic properties necessary to determine accurately the TF slope (it is for this reason that we adopt the slope computed in G97b). Also, the large database afforded by the SCI allows the statistical uncertainty of the magnitude zero point to be reduced to 0.02–0.03 magnitudes. The systematic uncertainty in the zero point is larger. For a sample of $`N`$ objects with an rms velocity of $`\langle V^2\rangle ^{1/2}`$ at a mean redshift of $`\mathrm{c}z`$, the expected accuracy of the zero point is limited by systematic concerns to
$$\sigma _a\approx \frac{2.17\langle V^2\rangle ^{1/2}}{\mathrm{c}z\sqrt{N}}\mathrm{mag}.$$
(2)
G97b conclude that the SCI systematics only allow the universal TF zero point to be determined to within 0.04 magnitudes. Thus the overall zero point accuracy obtainable from the SCI sample is $`\sim `$0.05 magnitudes. We adopt the procedure described in G97b to calibrate the TF zero point using the SCII sample. The resulting improvement in its calibration is discussed in Section 3.4.
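To put Equation (2) in context, the back-of-the-envelope sketch below evaluates the systematic floor for numbers of the SCII's order; the 300 km s<sup>-1</sup> rms cluster velocity is an illustrative assumption, not a measured quantity.

```python
import numpy as np

# Systematic zero-point floor from Equation (2), with assumed inputs:
# rms cluster peculiar velocity ~300 km/s, N = 52 clusters, and the
# SCII weighted mean redshift cz ~ 12,050 km/s.
v_rms, N, cz = 300.0, 52, 12050.0
sigma_a = 2.17 * v_rms / (cz * np.sqrt(N))
print(f"sigma_a = {sigma_a:.3f} mag")   # ~0.007 mag
```

A deep, 52-cluster sample thus pushes the systematic term well below the $`\sim `$0.04 mag floor of the nearer SCI sample.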
### 3.2 The Scatter of the Tully-Fisher Relation
Any relation involving observed parameters has a limited accuracy described by the amplitude of the relation’s scatter. Claims of the scatter in the TF relation vary from as low as 0.10 mag (Bernstein et al. 1994) to as high as 0.7 mag (Sandage et al. 1994a,b; 1995; Marinoni et al. 1998). The amplitude of the scatter does depend on wavelength, and studies in the $`I`$ band typically yield the tightest relations. The efforts with the largest samples yield 1$`\sigma `$ dispersion values of $`\sim `$0.3–0.4 mag (Mathewson, Ford & Buckhorn 1994; Willick et al. 1995; G97b). Uncertainties in observational measurements are not the only factors that lead to the overall spread in the data. The corrections to the observed fluxes and disk rotational velocities described in Papers I and II are not exactly known, nor are the methods we detail later to account for inherent sample biases such as cluster incompleteness. Moreover, there is an intrinsic component to the TF dispersion since individual galaxies have diverse formation histories. In fact, Eisenstein & Loeb (1996) advocate an intrinsic scatter of 0.3 magnitudes, a number greater than most estimates from observational work. In light of this fact, they make the interesting claim that either spirals formed quite early or that there must be a type of feedback loop that promotes galactic assimilation.
As already established in G97b and Willick (1999), Figure 5 reinforces the notion of low intrinsic scatter.
The two dotted lines indicate the velocity width and magnitude uncertainties $`ϵ_x`$ and $`ϵ_y`$, with $`ϵ_x`$ multiplied by the TF slope $`b`$ to put it on a magnitude scale; the solid line labeled $`ϵ_m=\sqrt{(bϵ_x)^2+ϵ_y^2}`$ represents the average measurement uncertainty. The data displayed in Figure 5 are generated using equal numbers of galaxies per data point. The circles plotted represent the average standard deviations of the residuals from the fiducial TF relation. We see that the velocity width errors dominate those from the $`I`$ band fluxes, which are approximately independent of velocity width. Furthermore, the logarithmic velocity widths become increasingly uncertain for slower rotators (cf. G97b; Willick et al. 1997). We approximate the total observed scatter with a simple linear relation that depends on the velocity width:
$$\sigma _{\mathrm{tot}}=-0.40x+0.38\mathrm{mag}.$$
(3)
The total scatter found here is in general larger than that found in G97b ($`\sigma _{\mathrm{tot}}=-0.32x+0.32`$). This is unsurprising given our use of optical rotation curves instead of 21 cm profiles, from which velocity widths are comparatively easier to estimate, and since the nearer SCI galaxies generally have better determined disk inclinations. The gap between the observed scatter and the measured errors is attributed to an intrinsic scatter contribution: the top line is a sum in quadrature of our observed measurement errors, $`ϵ_m`$, and the intrinsic scatter found in G97b:
$$\sigma _{\mathrm{int}}=-0.28x+0.26\mathrm{mag}.$$
(4)
Simply put, our findings for an intrinsic component of the TF relation agree with those of Giovanelli and Willick and their collaborators.
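The scatter model of Equations (3) and (4) is easy to tabulate. The sketch below also backs out the measurement contribution implied by quadrature subtraction; the sign convention (scatter growing toward slower rotators, i.e. negative $`x`$) follows the discussion above, and the clipping at zero is a guard of this example rather than part of the analysis.

```python
import numpy as np

def sigma_total(x):
    """Observed TF scatter (Equation 3), with x = log W - 2.5."""
    return -0.40 * x + 0.38

def sigma_intrinsic(x):
    """Intrinsic scatter adopted from G97b (Equation 4)."""
    return -0.28 * x + 0.26

def sigma_measurement(x):
    """Measurement term implied by quadrature subtraction."""
    diff = sigma_total(x) ** 2 - sigma_intrinsic(x) ** 2
    return np.sqrt(np.clip(diff, 0.0, None))

for x in (-0.6, -0.3, 0.0, 0.3):
    print(x, round(float(sigma_total(x)), 3),
          round(float(sigma_measurement(x)), 3))
```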
### 3.3 Observational Biases
#### 3.3.1 Cluster Population Incompleteness
Quite possibly the most important selection effect to consider for this observing program is that of cluster incompleteness, the preferential sampling of the bright end of the luminosity function (LF). Several authors have cautioned that cluster incompleteness can significantly alter inferred distances to clusters, though this ultimately depends on the amplitude of the scatter in the TF relation (see G97b, Willick 1994, Sandage, Tammann & Federspiel 1995, and the review by Teerikorpi 1997). Schechter (1980) first advanced the notion that the repercussions of this selection effect can be circumvented through an inverse fitting procedure, whereby the roles of the ordinate and abscissa in the TF diagram are reversed. The argument goes as follows: if there are no observational effects working against the selection of velocity widths and if the errors on the absolute magnitudes are negligible, then a fit to log$`W`$ vs. $`M`$ will not be affected by a cutoff at faint magnitudes. Unfortunately, magnitude errors cannot be ignored (see Figure 5), and moreover, the errors in $`M`$ and log$`W`$ are coupled through inclination corrections; TF data do not obey a sharp faint magnitude limit. Inevitably, the absence of faint galaxies in a TF sample artificially brightens the zero point and lowers the slope. We describe next our methodology to account for the effect.
Our first concern is to quantify the characteristics of the observed luminosity distribution with respect to the actual LF. As an aside, we note that current work by Andreon (1998) supports the notion of a canonical spiral LF by proposing that the LF for the separate morphological classes E, S0, and S is independent of environment – observed differences in the overall LF for different environments are merely due to varying proportions of the morphological classes (see, however, Iovino et al. 1999). Our first step is to sum up the observed luminosities in bins of absolute magnitude. Computing such absolute magnitude histograms for each individual cluster, where the membership counts can be as small as $`\sim `$5, would not be statistically meaningful. We therefore compute average histograms for several distance ranges of width 2000 km s<sup>-1</sup>.
We define the completeness function as the ratio of the observed luminosity distribution to the assumed intrinsic luminosity distribution. The following figures have been constructed assuming a Schechter LF with $`M^{}=-21.6`$ and $`\alpha =-0.5`$ (see Figure 20 of Paper IV), but final TF templates using other Schechter parameters will also be provided. Figure 6 displays a smoothed representation of the completeness functions, along with our fitted approximation. As in G97b, we borrow a relation from Fermi-Dirac statistics to model completeness:
$$c(y)=\frac{1}{e^{(y-y_\mathrm{t})/\beta }+1}$$
(5)
where $`y_\mathrm{t}`$ represents a transition luminosity in the fit, and $`\beta `$ characterizes the steepness of the drop. It should be noted that the final TF template is rather robust in terms of the completeness function construction. The template is largely insensitive to the choice of distance regimes, variations in the Fermi-Dirac fit profile, and the luminosity bin widths. Reasonable alterations in these parameters induce negligible changes in the TF template zero point. With the estimation of the completeness function in hand, we can now quantify a given cluster’s incompleteness bias via Monte Carlo simulations.
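Equation (5) is cheap to evaluate; in the sketch below the transition magnitude and steepness are hypothetical round numbers, standing in for the fitted values of Figure 6.

```python
import numpy as np

def completeness(y, y_t, beta):
    """Fermi-Dirac completeness function of Equation (5);
    y = M_I - 5 log h, y_t the transition magnitude, beta the
    steepness of the faint-end drop."""
    return 1.0 / (np.exp((y - y_t) / beta) + 1.0)

# Bright of the transition the sample is essentially complete,
# at y_t it is 50% complete, and fainter it drops quickly:
print(completeness(np.array([-23.0, -21.0, -20.0]), y_t=-21.0, beta=0.4))
```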
#### 3.3.2 Monte Carlo Simulations of Cluster Parent Samples
Our task is simplified by assuming the bias-corrected slope derived in G97b is valid for our sample, a justifiable assumption given the relative propinquity of their data – a broader dynamic range can be more easily sampled in nearby clusters. Furthermore, least-square fits to the data yield slopes that, within the errors, agree with the TF slope from G97b. On a more qualitative level, inspection of Figure 4 indicates our data agree with the slope from G97b. It is thus sufficient to only determine the bias in the TF offset for each cluster.
We compute the bias as follows: for each galaxy a large number ($`N_{\mathrm{iter}}=10^3`$) of trial TF data points are generated that mimic the general characteristics of the actual data. Within each trial, a random (Gaussian deviate) magnitude offset $`\mathrm{\Delta }y_{\mathrm{trial}}`$ is first chosen according to the TF scatter relation, Equation 3, which depends on the galaxy’s measured velocity width parameter $`x`$. The addition of this offset to the magnitude inferred from the final, template TF relation yields the trial absolute magnitude:
$$y_{\mathrm{trial}}=b_{\mathrm{tf}}x+a_{\mathrm{tf}}+\mathrm{\Delta }y_{\mathrm{trial}}.$$
(6)
The trial magnitude is kept only if it is a likely magnitude, i.e. a second random number drawn from the interval \[0,1\] must be less than $`c(y_{\mathrm{trial}})`$, the completeness value for the trial luminosity. Otherwise, the process is repeated until a likely magnitude is found. After all $`N_{\mathrm{iter}}`$ iterations are complete for a given galaxy, the incompleteness bias is taken to be the mean difference between the trial luminosities and that expected from the TF relation, i.e.
$$\mathrm{\Delta }y_{\mathrm{icb}}=\frac{\underset{i}{\overset{N_{\mathrm{iter}}}{\sum }}y_{\mathrm{trial},i}}{N_{\mathrm{iter}}}-(b_{\mathrm{tf}}x+a_{\mathrm{tf}}).$$
(7)
Figure 7 gives the biases calculated as a function of velocity width and the polynomials we fit to them.
The general property of the computed bias is as expected: incompleteness biases “turn on” at higher velocity widths for more distant objects. An incompleteness bias-corrected zero point for each cluster is then extracted from the cluster’s distribution of observed velocity widths and bias-corrected absolute magnitudes. A tabulation of the offsets from the template zero point, $`a-a_{\mathrm{tf}}`$, is provided in Table 2. The cluster names are listed first and are followed by the number of cluster members $`N_{\mathrm{tf}}`$ with reliable photometry and velocity widths. The next two columns are measures of the cluster offsets from the template zero point, the second of which is corrected for the effect of cluster incompleteness and includes an indication of its uncertainty $`ϵ_a`$ in parentheses, e.g. $`0.06`$(10) implies $`0.06\pm 0.10`$ magnitudes. The last column of data listed is $`\sigma _a`$, the dispersion in the difference between the template and the cluster’s set of absolute magnitudes. It is used to compute the offset uncertainty:
$$ϵ_a=\sigma _{\mathrm{max}}/\sqrt{N_{\mathrm{tf}}},$$
(8)
where $`\sigma _{\mathrm{max}}`$ is taken to be the maximum of \[0.35 mag, $`\sigma _a`$\] to avoid overly optimistic measures of offset uncertainty in cases of chance alignment of the data due to small number statistics. Lastly, we remark that the proper incorporation of selection effect corrections is vital to determining the TF template, but the exact evaluation of such biases necessarily requires a TF template. A stable solution to this circular process is found within a few iterations.
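The rejection scheme of Equations (6) and (7) can be sketched in a few lines. Here the completeness parameters are hypothetical, the scatter law is Equation (3), and the slope and zero point are those of the final template (Equation 13); the real calculation iterates this together with the zero-point solution, which this toy version does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

def incompleteness_bias(x, b_tf=-7.68, a_tf=-20.91, n_iter=1000,
                        y_t=-21.0, beta=0.4):
    """Monte Carlo incompleteness bias (Equations 6-7) for one galaxy
    with velocity width parameter x = log W - 2.5."""
    sigma = -0.40 * x + 0.38          # TF scatter, Equation (3)
    y_tf = b_tf * x + a_tf            # magnitude expected from the template
    kept = []
    while len(kept) < n_iter:
        y_trial = y_tf + rng.normal(0.0, sigma)           # Equation (6)
        c = 1.0 / (np.exp((y_trial - y_t) / beta) + 1.0)  # Equation (5)
        if rng.uniform() < c:         # keep only "likely" magnitudes
            kept.append(y_trial)
    return np.mean(kept) - y_tf       # Equation (7)

for x in (0.3, 0.0, -0.3):
    print(x, round(float(incompleteness_bias(x)), 3))
```

As in Figure 7, the bias is negligible for fast rotators and grows (toward brighter inferred magnitudes) for slow rotators, which sit near the faint-end cutoff.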
#### 3.3.3 Homogeneous Malmquist Bias
An observational bias commonly referred to as the “homogeneous Malmquist bias” is a distance underestimate for a galaxy drawn from a uniform distribution of galaxies. The bias is a direct result of the error on the measured distance – a galaxy taken from a Poissonian spatial distribution and measured to be at a distance modulus $`\mu _\mathrm{m}\pm \mathrm{\Delta }\mu `$ is more likely to actually lie in the range $`\mu _\mathrm{m}`$ to $`\mu _\mathrm{m}+\mathrm{\Delta }\mu `$ than in the range $`\mu _\mathrm{m}-\mathrm{\Delta }\mu `$ to $`\mu _\mathrm{m}`$, due to the larger volume of the more distant shell. Following the reasoning given in Lynden-Bell et al. (1988), it can be shown that a measured distance $`R_\mathrm{m}`$ is, on average, an underrepresentation of the true distance by the factor $`\mathrm{exp}(3.5\mathrm{\Delta }^2)`$:
$$V=\mathrm{c}z-H_0R_\mathrm{m}\rightarrow \mathrm{c}z-H_0R_\mathrm{m}e^{3.5\mathrm{\Delta }^2}.$$
(9)
The factor $`\mathrm{\Delta }=10^{0.2ϵ_a}-1`$ is the fractional error in the TF distance measurement, roughly 17% for individual galaxies and 6% for our clusters. It was shown in G97b that such a correction for clusters of galaxies is rather small, almost to the point of being negligible (of order 1% on individual cluster distances). The inclusion of the above correction factor in later computations in this work does not change results appreciably. The influence of the homogeneous Malmquist bias on the inferred local bulk flow is discussed in Paper III.
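A sketch of Equation (9): for a cluster-level distance error of roughly 6% ($`ϵ_a\approx `$0.13 mag) the correction factor is about 1.3% in distance, consistent with the "of order 1%" figure quoted above. The c$`z`$ and $`R_\mathrm{m}`$ values below are illustrative.

```python
import numpy as np

def malmquist_corrected_velocity(cz, R_m, eps_a, H0=100.0):
    """Peculiar velocity with the homogeneous Malmquist correction of
    Equation (9); eps_a is the distance-modulus uncertainty in mag."""
    Delta = 10.0 ** (0.2 * eps_a) - 1.0     # fractional distance error
    return cz - H0 * R_m * np.exp(3.5 * Delta ** 2)

# Illustrative cluster at cz = 12,000 km/s measured at 120 Mpc/h:
print(malmquist_corrected_velocity(12000.0, 120.0, 0.13))
```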
The Abell/ACO clusters do not represent a homogeneously distributed population. Thus, the application of a homogeneous Malmquist bias correction may appear incorrect. However, a correction that takes into account the clustering properties of clusters differs from the homogeneous Malmquist bias correction only at second order. Given the minuscule size of the correction and the uncertainty of a possible “inhomogeneous” Malmquist bias correction, we consider the application of a homogeneous Malmquist bias correction a satisfactory approach.
#### 3.3.4 Cluster Size
In calculating absolute magnitudes, we assume that each cluster member within one Abell radius of the cluster center is at a distance equal to the average of all the redshifts available for the cluster from the literature. It is unlikely that a given galaxy will actually be at this distance, but it is a useful approximation if the distance to the cluster is significantly larger than the cluster’s virial size. For all but a handful of our clusters this is the case, so cluster size biases play a small role in our analysis.
Our concern here is the method of averaging used. The calculation of a cluster offset involves averaging over absolute magnitudes. Thus, averages of the logarithms of the distances are computed, when the logarithm of the average distance is actually desired. This results in a systematic underestimate of cluster distances. We can use either the angular distribution on the sky of the cluster galaxies or the Abell radius to infer the approximate physical size of the each cluster. Simple analytic calculations using either estimate of the cluster size show that the amplitude of the bias is at most of order 0.001 magnitudes for even our closest clusters, so we shall not concern ourselves with this effect any further. A more serious concern, the morphological dependence on the TF relation, is investigated next.
#### 3.3.5 Morphology
Ideally, scientific models are simple and are thus constructed with the fewest number of free parameters the data demand. In TF work, it would be preferable to limit the range of morphologies sampled, since physical parameters are known to vary along the Hubble sequence (Roberts & Haynes 1994). For example, it has been shown that, for a given (optical) luminosity distribution, the velocity width distribution for early-type galaxies is shifted to higher rotational speeds with respect to the distribution for later types (Roberts 1978). There is evidence, however, that the TF differences between morphological types appear to diminish at wavelengths longer than $`I`$ band (e.g. Aaronson & Mould 1983). Regardless, we are fortunate that one of our sample selection criteria, that of disks being rich in ionized gas, encouraged a rather homogeneous sample composed mainly of Sc types. Our sample has the following population properties: 14% are of type earlier than Sb, 16% are type Sb, and 70% are classified as Sbc or later. The majority (52%) of the galaxies are type Sc.
Our sample affords statistically significant tests of TF morphological dependencies. We plot in Figure 8 the TF parameters with symbols differentiated according to morphological class.
Filled circles symbolize Sb types, whereas open circles and asterisks represent types later and earlier than Sb, respectively. The plot includes the data for all galaxies deemed to have reliable velocity widths, with each cluster member corrected for cluster peculiar motion. The solid line drawn uses the fiducial TF slope of $`b_{\mathrm{tf}}=-7.68`$ and zero point $`a_{\mathrm{tf}}=-20.91`$ mag (cf. Section 3.4). As found in G97b, a clear distinction is evident between the three Hubble types in the form of a fainter zero point for earlier types. The error-weighted averages of the offsets $`\mathrm{\Delta }m_T`$ from the template zero point for the three morphological classes differ in the following ways for our sample:
Types earlier than Sb: $`0.27`$ $`(0.32)`$ mag
Type Sb: $`0.11`$ $`(0.10)`$ mag
Other types: unchanged
The numbers given in parentheses are those from G97b. For consistency, we will continue to utilize the offsets obtained in G97b.
#### 3.3.6 Environment
Yet another possible bias in our sample is the effect of environment. For instance, the more distant clusters in our sample (and the Abell/ACO catalog in general) tend to be richer. A richer cluster typically has a stronger intracluster medium X-ray flux and a more regular, elliptical-dominated core (see, for example, Sarazin 1986). Spiral galaxies predominantly lie in a rich cluster’s periphery, and the closer a spiral disk is to the cluster center, the less likely it is to contain neutral hydrogen gas (Giovanelli & Haynes 1985). This lack of interstellar gas within clusters of galaxies may be due to evaporation into the hotter intracluster gas, or it may be attributed to “stripping” originating from either tidal galaxy–galaxy interactions or ram pressure ablation by intracluster gas. Rubin, Ford, & Whitmore (1988) and Whitmore, Forbes, & Rubin (1988; WFR hereafter) claim that the inner spiral galaxies within clusters exhibit falling rotation curves (RCs), as opposed to the asymptotically flat or rising RCs usually seen in the cluster periphery and field. Furthermore, they find cluster RCs may be of lower amplitude than field RCs. They offer the explanation that the falling (and lower amplitude) RCs arise from mass loss (the inner galaxies have had their dark matter halos stripped), or that the cluster environment simply inhibits halo formation. A related finding by WFR is a monotonic increase in the mass-to-light ratio with cluster radius, which they ascribe to the changing RC shape with cluster radius. This view has been contested, however, by Amram et al. (1993) and Vogt (1995), who find little evidence for any gradients in the outer portions of RCs. If RCs in the inner regions of rich clusters do indeed differ from RCs in other environments, the ramifications are significant: a dependence of the TF relation and/or its dispersion on environment is a possible consequence. Finally, we point out that rich clusters represent high density peaks and may thus be home to a recent merger; at least one third of all rich clusters host a recent inhomogeneous superposition of two or more separate systems (Girardi et al. 1997). It is prudent to verify whether environment plays a significant role in our data.
Figure 9 displays one such test of environmental bias.
For each cluster member we have plotted residuals from the TF relation as a function of projected distance from the nominal cluster center. Panel (a) displays the data for all cluster members, while panels (b) and (c) differentiate between Abell cluster richness. All data are corrected for peculiar velocity and morphological offsets. We see no apparent trend with projected cluster radius in any of the panels. This is in agreement with the work of Biviano et al. (1990) and G97b, which showed no change in the TF relation for different environments. We consequently do not consider environmental bias to be a serious concern with our data.
### 3.4 The Template Relation
Let $`a_k`$ be the TF zero point for cluster $`k`$ and $`\mathrm{\Delta }y_{\mathrm{pec},k}=a_k-a_{\mathrm{tf}}`$ be the shift due to the cluster’s peculiar motion. The $`i^{\mathrm{th}}`$ cluster member’s absolute magnitude can then be expressed as
$$y_i=y_{\mathrm{cor},i}-\mathrm{\Delta }y_{\mathrm{icb},i}-\mathrm{\Delta }y_{\mathrm{pec},k}$$
(10)
where $`\mathrm{\Delta }y_{\mathrm{icb}}`$ is the cluster’s incompleteness bias correction described in Section 3.3.1 and $`y_{\mathrm{cor},i}=m_{\mathrm{cor}}-5\mathrm{log}_{10}(\mathrm{c}z_{\mathrm{clus}}/100)-25`$, $`m_{\mathrm{cor}}`$ being the corrected apparent magnitude given by Equation 1 in Paper II and $`\mathrm{c}z_{\mathrm{clus}}`$ the cluster systemic recessional velocity in units of km s<sup>-1</sup>. We compute the $`k^{\mathrm{th}}`$ cluster’s zero point by averaging over the individual cluster members, i.e.
$$a_k=\frac{\underset{i}{\overset{N_k}{\sum }}(y_i-bx_i)/ϵ_i^2}{\underset{i}{\overset{N_k}{\sum }}1/ϵ_i^2},ϵ_i^2=(ϵ_{x,i}b)^2+ϵ_{y,i}^2+ϵ_{\mathrm{int}}^2$$
(11)
with $`N_k`$ representing the number of cluster members in the $`k^{\mathrm{th}}`$ cluster. (Though we are only solving for the relation’s offset here, this mode of calculation is similar in spirit to the bivariate calculations described in G97b, in that both magnitude and velocity width errors are considered.) The uncertainty $`ϵ_{a,k}`$ of the above zero point computation is described by Equation 8. The calculation of the template zero point $`a_{\mathrm{tf}}`$ follows. Alternative estimators of $`a_{\mathrm{tf}}`$ are:
$$a_{\mathrm{tf}}=\frac{\underset{k}{\sum }N_ka_k}{\underset{k}{\sum }N_k}\mathrm{or}\frac{\underset{k}{\sum }a_k/ϵ_{a,k}^2}{\underset{k}{\sum }1/ϵ_{a,k}^2}\mathrm{or}\frac{\underset{j}{\overset{N}{\sum }}(y_j-bx_j)/ϵ_j^2}{\underset{j}{\overset{N}{\sum }}1/ϵ_j^2}.$$
(12)
The first two estimators are averages over the individual cluster zero points, weighted either by the number of cluster members or by the cluster zero point uncertainties. The third calculation is a simple average over all $`N`$ cluster galaxies, with each galaxy weighted by its total uncertainty. The three estimators give global zero points that agree within the estimated errors (0.02 magnitudes); we adopt the third computation. Our definition of a TF template zero point is based on the assumption that the average peculiar velocity of the cluster set is null.
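The weighting of Equations (11) and (12) is compact in code. The sketch below implements the per-cluster zero point and the third (adopted) global estimator; for simplicity it assumes a constant intrinsic scatter, whereas the text uses the width-dependent Equation (4).

```python
import numpy as np

def cluster_zero_point(x, y, ex, ey, b=-7.68, eps_int=0.3):
    """Error-weighted TF zero point of one cluster (Equation 11).
    x, y: width parameters and absolute magnitudes of its members;
    ex, ey: their uncertainties; eps_int: assumed intrinsic scatter."""
    w = 1.0 / ((b * ex) ** 2 + ey ** 2 + eps_int ** 2)
    a_k = np.sum((y - b * x) * w) / np.sum(w)
    return a_k, w

def template_zero_point(clusters, b=-7.68):
    """Third estimator of Equation (12): a single weighted average
    over all galaxies of all clusters."""
    num = den = 0.0
    for x, y, ex, ey in clusters:
        _, w = cluster_zero_point(x, y, ex, ey, b)
        num += np.sum((y - b * x) * w)
        den += np.sum(w)
    return num / den
```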
Table 3 lists results for several subsets of the data and for two different Schechter LFs: $`\alpha =-0.50,M^{}=-21.6`$ and $`\alpha =-0.75,M^{}=-22.0`$. A few fit parameters are listed, including the number of galaxies used, the TF template zero point and its associated total uncertainty (i.e. considering both statistical concerns and the kinematic uncertainty described by Equation 2), an estimate of the scatter, and chi squared divided by the number of degrees of freedom.
Notice that the dispersion and the zero point uncertainty remain relatively unchanged across all subsets. The fact that the zero point differs slightly for late and early type galaxies is expected – we saw earlier that the morphological type correction of $`0.32`$ magnitudes advocated in G97b for Sa and Sab galaxies is possibly too large for our sample by a few hundredths of a magnitude. We should also point out that the slightly brighter zero point seen for objects beyond 10,000 km s<sup>-1</sup> works against recent claims by Tammann (1998) and Zehavi (1998) that the Hubble constant is higher within $`\sim `$10,000 km s<sup>-1</sup> than it is beyond this distance. For our purposes, a fractional decrease by $`ϵ_H`$ in $`H_0`$ beyond 10,000 km s<sup>-1</sup> should yield a zero point that is fainter by $`5ϵ_H\mathrm{log}_{10}e`$, or 0.04 mag for $`ϵ_H=0.02`$. In contrast, our data reflect a slightly brighter zero point beyond 10,000 km s<sup>-1</sup>.
Figure 10 gives the TF plots for each cluster and Figure 11 combines the data for all SCII galaxies. The data are corrected for cluster incompleteness bias and cluster peculiar motion, in accordance with Equation 10. In the A2877, A1736, A1983, and A2295 panels, the error bars containing filled circles represent members of “A2877b,” “A1736b,” “A1983b,” and “A2295b,” respectively. The new TF template is drawn as well:
$$M_I-5\mathrm{log}h=-7.68(\mathrm{log}W-2.5)-20.91\mathrm{mag}.$$
(13)
The residuals given in Figure 12 indicate the quality of the cluster membership assignments.
Since the abscissa is the difference between individual and cluster redshifts, the center of each cluster corresponds to the point (0,0). Moreover, residuals from field galaxies that are receding according to Hubble expansion but were incorrectly declared cluster members should, on average, lie on the line of slope 5. There is no clear evidence for a significantly large subsample of improperly assigned memberships.
## 4 The Peculiar Velocity Sample
We interpret the departure of a cluster’s average zero point from that of the template relation as an indication of peculiar motion, with larger departures from the template implying larger amplitude peculiar velocities. Quantitatively, for a cluster at a redshift $`z`$ with an average departure from the template of $`aa_{\mathrm{tf}}`$, we write the peculiar velocity as
$$V=\mathrm{c}z(1-10^{0.2(a-a_{\mathrm{tf}})}).$$
(14)
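A sketch of Equation (14) follows, with a first-order error propagation added for convenience (an assumption of this example; the quoted velocity errors in Table 4 derive from Equation 8 and the machinery above). A cluster whose members sit 0.05 mag faint of the template at c$`z`$=12,000 km s<sup>-1</sup> maps to $`V\approx -280`$ km s<sup>-1</sup>.

```python
import math

def peculiar_velocity(cz, da, da_err=None):
    """Peculiar velocity from the TF zero-point offset da = a - a_tf
    (Equation 14); optional first-order error propagation."""
    V = cz * (1.0 - 10.0 ** (0.2 * da))
    if da_err is None:
        return V
    dV = cz * 0.2 * math.log(10.0) * 10.0 ** (0.2 * da) * da_err
    return V, abs(dV)

print(peculiar_velocity(12000.0, 0.05, 0.10))  # ~(-280, 566) km/s
```

Note how the velocity error scales with c$`z`$: a fixed magnitude uncertainty costs far more km s<sup>-1</sup> at large distances, which is why individual distant clusters are noisy but a global dipole solution remains well constrained.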
We compute peculiar velocities and a measure of their errors for the SCII cluster sample and display the results in Table 4 and Figure 13, an Aitoff projection of Galactic coordinates.
The symbols plotted in the figure reflect both the radial directions of the peculiar velocities and the strengths of the measurements – in the CMB reference frame, open (filled) circles represent approaching (receding) clusters and the circle size is inversely proportional to the accuracy of the measurement. The largest cluster peculiar velocities, e.g. those for A3266 and A3667, are also the most uncertain, as the clusters are poorly sampled.
Our sample includes the central portions of various high density peaks and/or superclusters in the local Universe. It is significant to note that their motions are consistent with small departures from rest in the CMB frame, within the quoted error. For instance, the Shapley Supercluster, represented here by its core (A3558) and some peripheral members (A1736, A3528, and A3566), has an average CMB velocity of $`118\pm 495`$ km s<sup>-1</sup>. The Hercules region (A2147 and A2151) and the A2572/2589/2593/2657 supercluster are also slow movers, with average peculiar velocities of $`307\pm 301`$ km s<sup>-1</sup> and $`222\pm 372`$ km s<sup>-1</sup>, respectively. These small motions are consistent with the notion that such massive systems best represent “kinematic anchors” in the local velocity field.
### 4.1 The One-Dimensional Peculiar Velocity Distribution
It is useful to estimate the line-of-sight distribution of peculiar velocities. The amplitude of that distribution is known to be a very sensitive discriminator of cosmological models (see, for example, Bahcall & Oh 1996). The SCII cluster sample is relatively distant and the cluster membership counts are relatively sparse compared to the SCI sample of G97a. Consequently, SCII peculiar velocities are much less certain and the overall distribution shown in Figure 14 is significantly broadened by measurement errors. The peculiar velocities are represented by equal-area Gaussians centered at the peculiar velocity of each cluster, with dispersions equal to the estimated peculiar velocity errors. The thick dashed line superimposed on the plot is the sum of the individual Gaussians (its amplitude has been rescaled for plotting purposes).
The 1$`\sigma `$ dispersion in the observed distribution of peculiar velocities is found from a Gaussian fit to the dashed line: $`\sigma _{1\mathrm{d},\mathrm{obs}}=796`$ km s<sup>-1</sup>. This value, however, is biased high by measurement errors. An estimate of the true value is easily recovered via Monte Carlo simulations, yielding $`\sigma _{1\mathrm{d}}=341\pm 93`$ km s<sup>-1</sup>, where the error estimate derives from the scatter in the dispersions of the simulated samples. This value of $`\sigma _{1\mathrm{d}}`$ is consistent with a relatively low density Universe (Giovanelli et al. 1998b; Bahcall & Oh 1996; Borgani et al. 1997; Watkins 1998; Bahcall, Gramann & Cen 1994a; Croft & Efstathiou 1994).
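The Monte Carlo de-broadening described here can be sketched as follows; the velocities, errors, and search grid are all illustrative, not the SCII values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical true peculiar velocities and quoted errors (km/s) for 52 clusters
V = rng.normal(0.0, 350.0, size=52)
eV = rng.uniform(400.0, 900.0, size=52)
V_obs = V + rng.normal(0.0, eV)              # error-broadened "observations"
sigma_obs = V_obs.std(ddof=1)

def broadened_sigma(sigma_true, n_trials=2000):
    # Simulate samples with dispersion sigma_true, convolved with the errors
    sims = (rng.normal(0.0, sigma_true, (n_trials, eV.size))
            + rng.normal(0.0, eV, (n_trials, eV.size)))
    return sims.std(axis=1, ddof=1).mean()

# De-broadened dispersion: the sigma_true whose broadened value matches sigma_obs
grid = np.linspace(50.0, 800.0, 76)
sigma_1d = grid[np.argmin([abs(broadened_sigma(s) - sigma_obs) for s in grid])]
print(sigma_obs, sigma_1d)
```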
## 5 Summary
We have presented TF data and estimated the peculiar velocities of 52 rich Abell clusters spread across the sky and distributed between $``$50 and 200 $`h^{-1}`$ Mpc. Optical rotation curves and $`I`$ band photometry for 522 spiral galaxies in the fields of these systems have been obtained and presented in separate publications.
In conjunction with the robust TF slope extracted from the relatively nearby SCI cluster sample of Giovanelli and coworkers, we find the $`I`$ band TF relation to follow
$$M_I-5\mathrm{log}_{10}h=-7.68(\mathrm{log}_{10}W-2.5)-20.91\mathrm{mag}.$$
(15)
The relation has an average scatter of 0.38 magnitudes. The zero point of the TF template has a statistical accuracy of 0.02 mag; combined with a kinematical uncertainty of 0.01 mag, which stems from the assumption that the 52 clusters’ average peculiar velocity is null, the overall uncertainty of the TF zero point is 0.02 mag.
Peculiar velocities are obtained for each of the 52 clusters, with reference to the global template relation. The typical uncertainty on the peculiar velocity of each cluster is $`\pm `$0.06c$`z`$, where c$`z`$ is the mean cluster velocity. The rms line-of-sight component of the cluster peculiar velocity for our sample, debroadened for measurement errors, is $`341\pm 93`$ km s<sup>-1</sup>. This number agrees with that determined for the SCI sample.
The results presented here are based on observations carried out at the Palomar Observatory (PO), at the Kitt Peak National Observatory (KPNO), at the Cerro Tololo Inter–American Observatory (CTIO), and the Arecibo Observatory, which is part of the National Astronomy and Ionosphere Center (NAIC). KPNO and CTIO are operated by the Association of Universities for Research in Astronomy and NAIC is operated by Cornell University, all under cooperative agreements with the National Science Foundation. The Hale telescope at the PO is operated by the California Institute of Technology under a cooperative agreement with Cornell University and the Jet Propulsion Laboratory. This research was supported by NSF grants AST94-20505 and AST96–17069 to RG and AST95-28960 to MH. LEC was partially supported by FONDECYT grant #1970735.
In this paper we consider bound states and resonances of particles with identical charge in the presence of a strong magnetic field. It is known that in a strong magnetic field the motion of an electron in a Coulomb potential can be considered as one-dimensional. On the other hand, it is known that the term $`\alpha ^2/(2mr^2)`$ provides an attraction even for the positron. In the one-dimensional case this attraction leads to the existence of bound states of particles with the same charge (e.g. $`e^+p`$, lepton–lepton, antilepton–antilepton atoms), because in one dimension any attraction is sufficient for the formation of a bound state. Thus, we argue that in a strong magnetic field bound states of particles with the same charge exist. Below we calculate the energy levels of these bound states.
The Dirac equation for an electron in the case of an attractive potential was derived in , where, however, the relativistic term (the second term in formula (2) below) was neglected. In this term was taken into account; however, the repulsive Coulomb potential was not considered there. Analogously, the Dirac equation for a positron in a repulsive Coulomb potential in the presence of a magnetic field ($`e^+p`$-atoms) has the following form:
$$\left(-\frac{1}{2m}\left(\frac{d^2}{dr^2}+\frac{1}{r}\frac{d}{dr}+\frac{1}{r^2}\frac{d^2}{d\varphi ^2}+\frac{d^2}{dz^2}-\gamma ^2r^2+2i\gamma \frac{d}{d\varphi }\right)+V(r,z)\right)\psi (x)=E_{eff}\psi (x)$$
(1)
where $`E_{eff}=\frac{E^2-m^2}{2m}-\frac{eH}{2m}`$,
$$V(r,z)=\frac{E}{m}\frac{Z\alpha }{\sqrt{r^2+z^2}}-\frac{Z^2\alpha ^2}{2m}\frac{1}{r^2+z^2},$$
(2)
Here $`0<E<m`$ and $`Z`$ is the charge of the nucleus. Of course, in principle it is possible to solve this equation numerically without any assumptions.
In accordance with , if the magnetic field is strong (i.e. $`\sqrt{\frac{1}{eH}}\ll a_0=\frac{1}{mZ\alpha }`$, where $`a_0`$ is the Bohr radius), the transverse motion of the electron is governed only by the magnetic field, and we seek the solution in the following form:
$$\psi (r,\varphi ,z)=\frac{1}{\sqrt{2\pi }}R_{00}(\rho )\chi (z)$$
(3)
where $`R_{00}(\rho )=\mathrm{exp}(-\frac{\rho }{2})`$ is the wave function of the ground state $`n=0,l=s=0`$, $`\rho =\gamma r^2`$ ($`\gamma =\frac{eH}{2}`$), whereas $`\chi (z)`$ is to be found below as the solution of a one-dimensional Schrödinger equation. Substituting (2), (3) into (1), multiplying by $`\frac{1}{\sqrt{2\pi }}R_{00}`$ and integrating over $`d^2r`$ we obtain:
$$\left(\frac{1}{2m}\frac{d^2}{dz^2}+\left(E_{eff}-\frac{\gamma }{2m}\right)-V(z)\right)\chi (z)=0$$
(4)
where
$$V(z)=\frac{E}{m}\alpha \sqrt{\gamma }\int _0^{\mathrm{}}\frac{d\rho \mathrm{exp}(-\rho )}{\sqrt{r^2+z^2}}-\frac{\alpha ^2\gamma }{2m}\int _0^{\mathrm{}}\frac{d\rho \mathrm{exp}(-\rho )}{r^2+z^2}$$
(5)
The behaviour of $`V(z)`$ at different $`\gamma `$ is shown in Fig. 1. Thus, we have a one-dimensional problem with attraction at sufficiently small $`z`$, which guarantees the existence of bound states and resonances. In , $`\rho `$ was neglected in the denominators of the formula for $`V(z)`$ above (note that $`r^2=\rho /\gamma `$ there). We, however, do not neglect it, because at small $`z`$ this is not justified and, besides, the presence of $`\rho `$ regularizes the behaviour of the potential $`V(z)`$ at small $`z`$. If we take into account the form factor of the proton we obtain a smoother behaviour of the potential at small $`z`$.
We solve this one-dimensional Schrödinger equation (4) numerically. Our numerical results for the energy of the ground state versus $`H`$ at fixed nuclear charge $`Z`$ are shown in Fig. 2.
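As a rough illustration of such a numerical solution, the following sketch diagonalizes a finite-difference version of Eq. (4) in natural units with $`m=1`$. The substitution $`r^2=\rho /\gamma `$ is used inside the integrals of Eq. (5); the field strength, grid, and value of $`E/m`$ are illustrative assumptions, not the values used for Fig. 2:

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh_tridiagonal

alpha, Z = 1.0 / 137.0, 1.0
gamma = 50.0 * alpha**2      # illustrative gamma = eH/2, strong-field regime
E_over_m = 0.9               # illustrative E/m entering Eq. (5)

def V(z):
    # Eq. (5) with r**2 = rho/gamma: the 3D potential averaged over the
    # Landau ground state R_00
    c1 = quad(lambda r: np.exp(-r) / np.sqrt(r / gamma + z * z), 0, np.inf)[0]
    c2 = quad(lambda r: np.exp(-r) / (r / gamma + z * z), 0, np.inf)[0]
    return E_over_m * Z * alpha * c1 - 0.5 * (Z * alpha) ** 2 * c2

# Finite-difference Hamiltonian -(1/2) d^2/dz^2 + V(z); the even number of
# grid points keeps z = 0 (where the potential is singular) off the grid.
N, L = 2000, 400.0
z = np.linspace(-L, L, N)
h = z[1] - z[0]
diag = 1.0 / h**2 + np.array([V(zi) for zi in z])
off = -0.5 / h**2 * np.ones(N - 1)
E0 = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))[0][0]
print(E0)    # lowest longitudinal level; a negative value signals a bound state
```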
The author expresses his sincere gratitude to E.B.Prokhorenko for helpful discussions.
Figure Captions
Fig. 1 The behaviour of $`V(z)`$ at different $`\gamma `$ (available on request).
Fig. 2 Energy of the ground state versus $`H`$ at fixed $`Z`$ (available on request).
# Developments in High Energy Neutrino Astronomy (to appear in Europhysics News, 1999)
R.J. Protheroe, University of Adelaide, Australia
Nature produces cosmic ray particles, probably protons, with energies well above $`10^{20}`$ eV – how are they produced? Where do they come from? Gamma rays with energies above $`10^{13}`$ eV are produced in jets of active galaxies – are these produced by energetic electrons or protons? What is the correct model of Gamma Ray Bursts? These are just some of the fundamental questions in high energy astrophysics to be answered by observations made with large area neutrino telescopes.
When gamma rays result from hadronic interactions of protons, neutrinos are also produced; they are not produced when energetic electrons Compton scatter X-rays to gamma-ray energies. So neutrino observations may distinguish between models of active galactic nuclei. Similarly, models for the origin of the highest energy cosmic rays (almost certainly extragalactic) – acceleration of protons in hot spots of giant radio galaxies, acceleration by Gamma Ray Bursts, decay of massive X particles produced by topological defects – may be distinguished as they have very different neutrino signatures, as do different models for Gamma Ray Bursts.
One big advantage of neutrino astronomy is that because of their low interaction cross section neutrinos can escape from regions opaque to photons, but this is also their biggest disadvantage as most neutrinos pass unobserved through the Earth.
Predicted fluxes and detection rates are very low, and telescopes of area approaching 1 km<sup>2</sup> are needed to detect diffuse background intensities. More than three decades ago, Russian and American physicists thought of instrumenting a huge volume of water with photomultipliers to look for the Cherenkov light produced by an upward-going muon, itself produced by the interaction below the detector of a muon-neutrino which had passed upwards through the Earth.
The AMANDA telescope located at the South Pole has recently detected sixteen upward-going neutrino events with energies in the range $`10^{11}`$ eV to $`10^{12}`$ eV. The detectors used consisted of ten “strings” of photomultipliers (a total of 289) placed in deep vertical holes in the transparent Antarctic ice. Each string extends from 1.5 km to 2 km depth, and the holes are spread over $`10^4`$ m<sup>2</sup> at the surface. All sixteen events are probably produced by cosmic rays interacting with air nuclei in the Northern Hemisphere, but they are detected well above any background. The neutrino-induced muon tracks are clearly seen and the angular resolution is very good, currently about 2 degrees. One of the events has a muon track which is nearly vertical and is almost coincident with String 6 – shown in the diagram (provided by Francis Halzen of the University of Wisconsin) where the signal in each detector is indicated by the size of the circle. The graph shows the time each photomultiplier triggered plotted against its depth, giving a perfect correlation for detectors on String 6, and showing the muon’s speed was close to the speed of light. This exciting result shows that the goal of constructing viable telescopes for high energy neutrino astronomy is achievable at reasonable cost.
# Charge carrier density collapse in La_0.67Ca_0.33MnO₃ and La_0.67Sr_0.33MnO₃ epitaxial thin films
## I Introduction
The colossal magnetoresistance (CMR) in ferromagnetic perovskite manganites has attracted renewed strong theoretical and experimental interest. Experimental evidence exists that the origin of such behaviour is the presence of magnetic polarons. This concept of dynamic phase segregation is similar to that in the copper-oxide superconductors. Small-angle neutron scattering measurements and magnetic susceptibility data on manganites reveal small ferromagnetic clusters in a paramagnetic background . Detailed magnetotransport measurements in the paramagnetic phase well above the Curie temperature confirm this picture . In the temperature and magnetic field range where the longitudinal transport shows a negative temperature coefficient, an electronlike, thermally activated Hall coefficient was found. This is in agreement with the expectations from polaron hopping. However, for a given magnetic field, lowering the temperature leads to the formation of a polaron band and ’metallic’ transport, i.e. a positive temperature coefficient. In the vicinity of the metal-insulator transition the polaronic bands are stabilised by external magnetic fields. Therefore we investigated the linear high-field Hall resistivity in magnetic fields up to 20 T. The experimentally determined increase of the Hall coefficient at the metal-insulator transition translates into a charge carrier density collapse (CCDC) in the band picture. While such a CCDC in low magnetic fields was proposed by Alexandrov and Bratkovsky due to the formation of immobile bipolarons, our high-field results indicate the influence of the structural phase transition at the Curie temperature on the band structure.
## II Experimental
Thin films of La<sub>0.67</sub>Sr<sub>0.33</sub>MnO<sub>3</sub> (LSMO) were prepared by pulsed laser deposition (KrF laser, $`\lambda =248`$ nm). As substrates we used (100) SrTiO<sub>3</sub> and (100) LSAT \[(LaAlO<sub>3</sub>)<sub>0.3</sub>-(Sr<sub>2</sub>AlTaO<sub>6</sub>)<sub>0.7</sub> untwinned\]. The optimised deposition conditions were a substrate temperature of 950 °C in an oxygen partial pressure of 14 Pa and annealing after deposition at 900 °C for 1 h in an oxygen partial pressure of 600 hPa. La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> (LCMO) was deposited by magnetron sputtering on (100) MgO substrates. Further details on preparation and characterisation are published elsewhere . In X-ray diffraction in Bragg-Brentano geometry only film reflections corresponding to a ($`l`$00) orientation of the cubic perovskite cell are visible for both compounds. The LSAT substrates ($`a_0=3.87`$ Å) have a low lattice mismatch to the Sr-doped films (3.89 Å). Rocking angle analysis shows epitaxial $`a`$-axis oriented growth with an angular spread smaller than 0.03°. The in-plane orientation was studied by $`\varphi `$-scans of (310) reflections. The cubic perovskite axes of the films are parallel to those of the substrates with an angular spread smaller than 1°. The temperature dependence of the unit cell volume of the LCMO sample was determined by measuring in- and out-of-plane lattice constants using a helium flow cryostat with beryllium windows on a four-circle X-ray diffractometer.
The samples were patterned photolithographically to a Hall bar structure. For measuring in the temperature regime from 4 K up to room temperature we used a standard 12 T magnet cryostat. An 8 T superconducting coil and a 20 T Bitter type magnet system, both with room temperature access, have been used for measurements above 270 K. The procedure used for measuring the Hall effect is described in detail elsewhere . The magnetic moments of the films were measured with a SQUID magnetometer in a small field of $`B=20`$ mT.
## III Results and Discussion
In Fig. 1, the longitudinal resistivities of the Ca- and Sr-doped compounds, $`\rho _{xx}(T)`$, are shown in zero field (solid lines) and in 8 T (dashed lines). The Curie temperatures of both samples are indicated by arrows, 232 K for LCMO and 345 K for LSMO, respectively. For LCMO the maximum in resistivity is close to $`T_C`$. For very high and very low temperatures the curves are asymptotic, i.e. the magnetoresistance vanishes. The resistivity as a function of temperature is, up to $`T/T_C=0.6`$, given by
$$\rho =\rho _0+\rho _2T^2+\rho _{4.5}T^{4.5}.$$
(1)
The parameters of Eq. 1, obtained by fitting the experimental data, are listed in Table 1. The quadratic contribution is not changed in the presence of a high magnetic field. This was also observed by Snyder et al. while Mandal et al. found both factors to be magnetic field dependent. One possible origin of a quadratic temperature dependence of the resistivity is the emission and absorption of magnons. But in this case a magnetic field dependence would be expected. Furthermore, in these processes an electron reverses its spin and changes its momentum. However, in the manganites spin flip processes play no role at low temperatures due to strong spin splitting of the states. Therefore we attribute this $`T^2`$ dependent term to electron-electron scattering in a Fermi liquid. The term proportional to $`T^{4.5}`$ results from electron-magnon scattering in the double exchange theory of Kubo and Ohata . In these scattering events the electron spin is conserved, while momentum is exchanged between the electron and magnon systems. Its contribution to the resistivity in Eq. 1 is
$$\rho _{4.5}=\frac{ϵ_0\hbar }{e^2k_F}\frac{1}{S^2}(ak_F)^6\left(\frac{m}{M}\right)^{4.5}\left(\frac{k_B}{E_F}\right)^{4.5}$$
(2)
with the manganese spin $`S`$ and the lattice constant $`a`$. A small correction relevant only for effective mass ratios $`M/m>1000`$ is neglected in Eq. 2. The strong temperature dependence of this scattering mechanism is partly determined by the number density of excited magnons, which varies as $`T^{3/2}`$. In a magnetic field this term is suppressed, reflecting the increased magnetic order. Evaluating $`\rho _{4.5}`$ in the free electron approximation yields a value 3 orders of magnitude smaller than experimentally determined. However, due to the strong influence of effective mass renormalization in Eq. 2, effective mass ratios $`M/m`$ in the range 3-6 give quantitative agreement.
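The low-temperature fit of Eq. (1) is an ordinary least-squares problem; a sketch with synthetic data (the parameter values below are placeholders, not those of Table 1):

```python
import numpy as np
from scipy.optimize import curve_fit

def rho_model(T, rho0, rho2, rho45):
    # Eq. (1): residual + electron-electron + electron-magnon terms
    return rho0 + rho2 * T**2 + rho45 * T**4.5

# Hypothetical low-temperature data (T in K), restricted to T/Tc < 0.6
T = np.linspace(5.0, 140.0, 40)
rng = np.random.default_rng(2)
rho = rho_model(T, 0.1, 4e-6, 2e-11) + rng.normal(0.0, 2e-3, T.size)

popt, pcov = curve_fit(rho_model, T, rho, p0=[0.1, 1e-6, 1e-11])
print(popt)     # fitted rho_0, rho_2, rho_4.5
```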
For the Ca-doped compound the transport above $`T_C`$ is thermally activated and can be described by small polaron hopping . The Sr-doped compound has a lower resistivity and a lower magnetoresistance (MR), because the absolute MR decreases with increasing $`T_C`$ . Here, the resistivity above $`T_C`$ is described by a crossover between two types of polaron conduction . Scattering of polarons by phonons just above $`T_C`$ results in a positive $`d\rho /dT`$, so that in the case of LSMO $`T_C`$ does not coincide with the maximum in resistivity. The Curie temperature is at a lower value (see Fig. 1) where an anomaly in the temperature dependence of the resistivity is seen .
The transverse resistivity in a ferromagnet as a function of a magnetic field $`B`$ is expressed by
$$\frac{\mathrm{d}\rho _{xy}}{\mathrm{d}B}=\frac{t}{I}\frac{\mathrm{d}U_H}{\mathrm{d}B}=R_H+R_A\mu _0\frac{\mathrm{d}M}{\mathrm{d}B},$$
(3)
with the Hall voltage $`U_H`$, film thickness $`t`$, current $`I`$, magnetisation $`M`$, ordinary Hall coefficient $`R_H`$, and anomalous Hall coefficient $`R_A`$ . The Hall resistivity $`\rho _{xy}(B)`$ for LSMO at several constant temperatures as a function of magnetic field $`B`$ is shown in Fig. 2. At low magnetic fields a steep decrease of the Hall voltage is seen, which is strongest at $`T_C`$ and becomes less pronounced at low temperatures and above $`T_C`$. This part is dominated by the increase in magnetisation with magnetic field. Therefore the electronlike anomalous Hall effect, $`R_A>R_H`$, dominates the Hall voltage. At higher fields the magnetisation saturates and a linear positive slope due to the ordinary Hall effect is seen. This behaviour is very similar to that of the Ca-doped compound, if one compares the reduced temperatures $`T/T_C`$ . The initial slopes $`\mathrm{d}\rho _{xy}/\mathrm{d}B(B\to 0)`$ are highest at the Curie temperature for both compounds, in agreement with the Berry phase theory of the anomalous Hall effect . The temperature dependence of the electronlike anomalous Hall constant is thermally activated, similar to the longitudinal resistivity and consistent with the theory of Friedman and Holstein . This was also observed by Jaime et al. and provides further strong evidence of small polarons in manganites. More details on the interpretation of the anomalous Hall effect are published elsewhere .
In the following, we consider the high-field regime where the slopes $`\mathrm{d}\rho _{xy}/\mathrm{d}B`$ are positive and constant, indicating hole conduction. At 4 K, we obtain in a single-band model 1.4 holes per unit cell for LSMO. A smaller value of $``$ 1 hole per unit cell was found by Asamitsu and Tokura on single crystals, while for a thin film a value of 2.1 was reported . For Ca-doped thin films similar values were found . This large charge carrier density in manganites requires the concept of a partly compensated Fermi surface .
To investigate the charge carrier concentration just above $`T_C`$, it is important to have as high a magnetic field as possible in order to saturate the magnetisation. Therefore, we performed Hall effect measurements for LCMO up to 20 T. In this field a positive linear slope $`\mathrm{d}\rho _{xy}/\mathrm{d}B`$ can be observed up to $`T/T_C=1.3`$. The Hall resistivities $`\rho _{xy}(B)`$ are shown in Fig. 3. For clarity several curves ($`T=`$ 275 K, 300 K, 305 K, 310 K and 315 K) are omitted. Just above $`T_C`$, at 285 K, it is still possible to almost saturate the magnetisation of the sample, and a broad field range with a linear positive slope remains to evaluate $`R_H`$. The fit of the slope is shown in Fig. 3 by the dashed line. At 350 K ($`T/T_C=1.5`$) the shift of the minimum in the Hall voltage to higher fields no longer allows a quantitative analysis of $`R_H`$.
Assuming full saturation of the magnetisation in a high magnetic field, the charge carrier concentration $`n=1/(eR_H)`$ as a function of the reduced temperature $`T/T_C`$ for LCMO (circles) and LSMO (triangles) is plotted in Fig. 4. LCMO has a constant carrier concentration at low temperatures, whereas for LSMO $`n`$ increases with temperature. But for both doped manganites a clear decrease of $`n`$ at the Curie temperature is seen. This temperature dependence of $`n`$ seems to be a characteristic behaviour of the manganites in the vicinity of $`T_C`$, since this was also observed by Wagner et al. in thin films of Nd<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub>, indirectly by Ziese et al. in thin films of La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> and La<sub>0.67</sub>Ba<sub>0.33</sub>MnO<sub>3</sub> and by Chun et al. in single crystals of La<sub>0.67</sub>(CaPb)<sub>0.33</sub>MnO<sub>3</sub>. The latter data are also shown as open symbols in this figure for comparison. According to Fig. 4, above $`T_C`$ the number of charge carriers seems to increase again. The error bars indicate the accuracy of the determination of the linear slopes $`\mathrm{d}\rho _{xy}/\mathrm{d}B`$, as shown for $`T=285`$ K in Fig. 3. However, well above the Curie temperature the magnetisation of the sample cannot be saturated in experimentally accessible magnetic fields. From Equation 3 it is obvious that a linearly increasing magnetisation in this paramagnetic regime will not affect the linearity of the slope $`\mathrm{d}\rho _{xy}/\mathrm{d}B`$ but will change its value. Without quantitative knowledge of the anomalous Hall coefficient $`R_A`$ and the sample magnetisation $`M(T,B)`$ it is not possible to separate this contribution. Nevertheless, since the sign of the anomalous Hall contribution is electronlike, it is clear that the apparent charge carrier density shown in Fig. 4 has to be corrected to lower values above the Curie temperature, thus enhancing the CCDC.
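Extracting $`n`$ from the high-field slope amounts to a linear fit; a minimal sketch, assuming a cubic perovskite cell with lattice constant 3.9 Å, and with hypothetical Hall data chosen to give roughly one hole per cell:

```python
import numpy as np

e = 1.602e-19                  # elementary charge (C)
V_cell = (3.9e-10) ** 3        # cubic perovskite unit cell volume (m^3)

# Hypothetical high-field Hall data in the linear regime: B (T), rho_xy (Ohm m)
B = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 18.0, 20.0])
rho_xy = 2.65e-10 * B + 1.0e-10

R_H = np.polyfit(B, rho_xy, 1)[0]   # ordinary Hall coefficient (m^3/C)
n = 1.0 / (e * R_H)                 # carrier density (m^-3)
print(n * V_cell)                   # ~1.4 holes per unit cell
```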
This CCDC indicates strong changes in the electronic distribution function close to the Fermi energy. Since coincidence of structural, magnetic and electronic phase transitions has been reported for La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> with $`x=0.25`$ and 0.5 , we investigated the temperature dependence of lattice constants, magnetisation, longitudinal resistivity, and transversal resistivity for the same sample. Fig. 5 shows a compilation of the results. The charge carrier density is constant up to 0.7 $`T_C`$. In this temperature range the resistivity follows Eq. 1 and the volume of the unit cell increases slowly with temperature. The fact that the CCDC is accompanied by a strong increase in unit cell volume and longitudinal resistivity and a decay of the spontaneous magnetisation shows the coincidence of structural, magnetic, and electronic phase transitions in La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub>.
Since the transport in the manganites above $`T_C`$ is dominated by polaron hopping, we want to discuss the relation between our experimental observation of the CCDC and the polaronic CCDC as proposed recently by Alexandrov and Bratkovsky . They worked out the theory for a CCDC due to a phase transition of mobile polarons to immobile bipolaronic pairs. At low temperatures the charge carriers form a polaronic band. With increasing temperature the polaronic bandwidth decreases due to the increase in electron-phonon coupling. Depending on the polaron binding energy and the doping level, a first or second order phase transition to a bound bipolaronic state takes place. At still higher temperatures thermal activation of the bipolarons leads to a renewed increase of the number of mobile polarons. Indeed their calculated temperature dependence of the number density of mobile polarons in zero field is very similar to our data shown in Fig. 4. However, in this model the density of mobile polarons around $`T_C`$ is a strong function of magnetic field, which is responsible for the colossal magnetoresistivity. In high fields the polaronic CCDC is strongly reduced, since the formation of bound bipolarons is suppressed and accordingly the number of mobile polarons remains almost constant at $`T_C`$. Therefore our observation of a CCDC in high fields cannot be identified with the CCDC due to bipolaron formation, but is related to structural changes. We cannot determine the charge carrier density in the low field regime due to strong anomalous Hall contributions. In their presence the type of phase transition proposed by Alexandrov and Bratkovsky cannot be verified by Hall effect measurements.
## IV Summary
We performed detailed Hall effect measurements in high magnetic fields in LCMO and LSMO thin films. The charge carrier concentration was investigated as a function of temperature below and above the Curie temperature. In the low temperature range, where the charge carrier density is constant, we identified electron-magnon scattering in the longitudinal resistivity. At the ferromagnetic transition temperature a charge carrier density collapse was observed for both compounds. The data indicate a simultaneous structural, magnetic and electronic phase transition in doped manganite thin films.
###### Acknowledgements.
This work was supported by the Deutsche Forschungsgemeinschaft through project JA821/1-3 and the European Union TMR-Access to Large Scale Facilities Plan.
# Towards a grand unified picture for neutrino and quark mixings (talk given by A. Rossi at the Int. Workshop “Particles in Astrophysics and Cosmology: from Theory to Observation”, May 3-8, 1999, Valencia, Spain)
## 1 Introduction
One of the mysteries of particle physics is the manifest hierarchy in the fermion spectrum and mixing angles. The masses of the quarks and charged leptons are spread over five orders of magnitude, from MeVs to hundreds of GeVs and the quark mixing angles are:
$`\theta _{23}^q`$ $`=`$ $`(2.3\pm 0.2)^{\circ },\theta _{12}^q=(12.7\pm 0.1)^{\circ },`$
$`\theta _{13}^q`$ $`=`$ $`(0.18\pm 0.04)^{\circ }`$ (1)
As for the neutrinos, the recent data from the atmospheric and solar neutrino (AN and SN) experiments providing information on their masses and mixings, have made the mystery of “flavour” even more intriguing. On the one hand, the ranges of $`\delta m_{\mathrm{atm}}^2`$ and $`\delta m_{\mathrm{sol}}^2`$ needed for the explanation of the AN and SN anomalies, can be translated directly into values of the neutrino masses. Namely, assuming the mass hierarchy $`m_3>m_2>m_1`$ for the neutrino mass eigenstates $`\nu _{1,2,3}`$ we find a mass hierarchy $`m_2/m_3`$ similar to that of the charged leptons:<sup>1</sup><sup>1</sup>1 Below we concentrate on the small-mixing angle MSW solution for the SN problem , barring other possibilities such as the large-mixing angle MSW or vacuum oscillation solutions.
$$m_3=(5.7_{-2.2}^{+2.7})\times 10^{-2}\mathrm{eV},m_2=(2.5_{-0.5}^{+0.7})\times 10^{-3}\mathrm{eV}$$
(2)
On the other hand, the magnitudes of the neutrino mixing angles<sup>2</sup><sup>2</sup>2 For $`\delta m_{\mathrm{atm}}^2>210^3`$ eV<sup>2</sup> the limit $`\theta _{13}^l<13^{}`$ follows from the CHOOZ experiment. Moreover, taking into account all the experimental data, $`\theta _{13}^l0`$ provides the best data fit both for AN and SN cases .
$`\theta _{23}^l`$ $`=`$ $`(45\pm 11)^{\circ },\theta _{12}^l=(2.0\pm 1.2)^{\circ },`$
$`\theta _{13}^l`$ $`<`$ $`(13-20)^{\circ }`$ (3)
are in clear contrast with the corresponding quark angles (1). In short: the AN anomaly points to maximal 23 mixing in the leptonic sector to be compared with the very small 23 mixing of quarks, and on the contrary, the MSW solution implies a very small 12 lepton mixing angle versus the reasonably large value of the Cabibbo angle.
In the standard model (SM) or in its supersymmetric extension the masses of the charged fermions $`q_i=(u_i,d_i)`$, $`u_i^c`$, $`d_i^c`$; $`l_i=(\nu _i,e_i)`$, $`e_i^c`$ ($`i=1,2,3`$ is a family index) emerge from the Yukawa terms:
$$\varphi _2u_i^c𝐘_u^{ij}q_j+\varphi _1d_i^c𝐘_d^{ij}q_j+\varphi _1e_i^c𝐘_e^{ij}l_j$$
(4)
where $`\varphi _{1,2}`$ are the Higgs doublets: $`\langle \varphi _{1,2}\rangle =v_{1,2}`$, $`(v_1^2+v_2^2)^{1/2}=v_w=174`$ GeV and $`𝐘_{u,d,e}`$ are arbitrary matrices of coupling constants. The neutrino masses emerge only from the higher order effective operator :
$$\frac{\varphi _2\varphi _2}{M}l_i𝐘_\nu ^{ij}l_j,𝐘_\nu ^{ij}=𝐘_\nu ^{ji}$$
(5)
where $`M\gg v_w`$ is some cutoff scale and $`𝐘_\nu `$ is a matrix of dimensionless coupling constants. The fermion mass eigenstates are identified by diagonalizing the Yukawa matrices $`𝐘_{u,d,e,\nu }`$ by bi-unitary transformations:
$$U_f^T𝐘_fU_f=𝐘_f^D,f=u,d,e,\nu $$
(6)
(for the neutrinos it is $`U_\nu ^{}U_\nu `$). In this way the Cabibbo-Kobayashi-Maskawa (CKM) matrix $`V_q=U_u^{}U_d`$ and the leptonic mixing matrix $`V_l=U_e^{}U_\nu `$, describing the neutrino oscillation phenomena, are also determined:
$`V_q=\left(\begin{array}{ccc}V_{ud}& V_{us}& V_{ub}\\ V_{cd}& V_{cs}& V_{cb}\\ V_{td}& V_{ts}& V_{tb}\end{array}\right),`$ (10)
$`V_l=\left(\begin{array}{ccc}V_{e1}& V_{e2}& V_{e3}\\ V_{\mu 1}& V_{\mu 2}& V_{\mu 3}\\ V_{\tau 1}& V_{\tau 2}& V_{\tau 3}\end{array}\right)`$ (14)
For both mixing matrices, we adopt the “standard” parametrization utilizing the angles $`\theta _{12},\theta _{23},\theta _{13}`$ and a CP-phase $`\delta `$.<sup>3</sup><sup>3</sup>3 The leptonic mixing matrix contains two additional phases that are not relevant for the neutrino oscillations. In the following, we distinguish the quark and lepton mixing angles in $`V_q`$ and $`V_l`$ by the subscripts ‘$`q`$’ and ‘$`l`$’, respectively.
As already mentioned, the SM does not provide any theoretical hints to constrain the matrices $`𝐘_{u,d,e}`$ and $`𝐘_\nu `$, leaving the issue of the fermion mass hierarchy and mixing pattern unexplained. Concerning the neutrinos, also the mass scale $`M`$ remains a free parameter. One can only conclude that if the maximal constant in $`𝐘_\nu `$ is of the order of the top Yukawa constant, $`Y_3\sim Y_t\sim 1`$, then the mass value $`m_3`$ in (2) points to the scale $`M\sim 10^{15}`$ GeV, rather close to the grand unified scale.
In this respect, the grand unified theories can be very useful. In these theories, as a consequence of the larger gauge group, relationships between quark and lepton masses or between CKM angles and quark mass ratios can emerge naturally. Moreover, the assumption of further symmetries in the Yukawa sector — well known examples being the “horizontal” or “family” symmetries — implies further predictions and thus a possible clue to discern the “flavour” mystery .
A popular Yukawa texture is that suggested by Fritzsch :
$$𝐘_{u,d,e}=\left(\begin{array}{ccc}0& A_{u,d,e}^{}& 0\\ A_{u,d,e}& 0& B_{u,d,e}^{}\\ 0& B_{u,d,e}& C_{u,d,e}\end{array}\right)$$
(15)
where all elements are generically complex and obey the additional condition:
$$|A_f^{}|=|A_f|,|B_f^{}|=|B_f|;f=u,d,e$$
(16)
The presence of zero elements as well as the “symmetricity” property (16) can be motivated by “ horizontal” symmetries . This pattern has many interesting properties. For instance, it links the observed value of the Cabibbo angle, $`V_{us}\sqrt{m_d/m_s}`$, to the observed size of the CP-violation in the $`K\overline{K}`$ system and the predicted magnitude $`|V_{ub}/V_{cb}|\sqrt{m_u/m_c}`$ is in good agreement with the data. Unfortunately, this texture implies cannot account at the same time for the small value of $`V_{cb}`$ and the large top mass.
However, this shortcoming can be cured just by embedding the ansatz in a $`SU(5)`$ grand unified theory and breaking the symmetricity condition<sup>4</sup><sup>4</sup>4 The need of such an asymmetry was invoked in the context of $`SO(10)`$ models . in the 23-family sector with $`b_e=\left|B_e/B_e^{}\right|>1`$ and $`b_d=\left|B_d^{}/B_d\right|>1`$. The $`SU(5)`$ symmetry ensures the following product rule for the mixing angles:
$$\mathrm{tan}\theta _{23}^d\mathrm{tan}\theta _{23}^e\simeq \left(\frac{m_\mu m_s}{m_\tau m_b}\right)^{1/2}$$
(17)
This rule is certainly exact when the down-quark and charged-lepton matrices have the symmetric Fritzsch texture, from which one derives $`\mathrm{tan}\theta _{23}^d=(m_s/m_b)^{1/2}`$ and $`\mathrm{tan}\theta _{23}^e=(m_\mu /m_\tau )^{1/2}`$. However, these two relations are unsatisfactory as $`|V_{cb}|<(m_s/m_b)^{1/2}`$ and $`\mathrm{sin}\theta _{atm}>(m_\mu /m_\tau )^{1/2}`$. On the other hand, whenever the symmetricity condition is broken, the rule (17) is only approximate since none of those angles can be predicted in terms of mass ratios. Indeed their values now depend on the amount of asymmetry between the 23 and 32 entries, i.e. on the factors $`b_d`$ and $`b_e`$. One can easily realize that the increase of $`b_e`$ goes in parallel with that of $`b_d`$ since in SU(5) the Yukawa matrices are related as $`𝐘_e=𝐘_d^T`$, modulo certain Clebsch factors. As a result the 23 mixing becomes larger in the leptonic sector and smaller in the quark sector. Therefore, if $`\mathrm{tan}\theta _{23}^d`$ decreases below $`(m_s/m_b)^{1/2}`$, then $`\mathrm{tan}\theta _{23}^e`$ should correspondingly increase above $`(m_\mu /m_\tau )^{1/2}`$, and when the former reaches the value $`|V_{cb}|\simeq 0.05`$, the latter becomes $`\simeq 1`$ (this happens for $`b_{d,e}\simeq 8`$). Though these estimates are not precise, they qualitatively demonstrate the ‘seesaw’ correspondence between the quark and lepton mixing angles whenever their magnitudes are dominated by the rotation angles coming from the down fermions. A similar argument can be applied also to the 12 mixing:
$$\mathrm{tan}\theta _{12}^d\mathrm{tan}\theta _{12}^e\simeq \left(\frac{m_em_d}{m_\mu m_s}\right)^{1/2}$$
(18)
The relation $`V_{us}\simeq (m_d/m_s)^{1/2}`$ suggests that the 12 block of $`𝐘_d`$ should be nearly symmetric, and hence we expect that $`\mathrm{sin}\theta _{sol}\simeq (m_e/m_\mu )^{1/2}`$.
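The mass-ratio estimates invoked above are quick to check numerically; a sketch with illustrative (not GUT-scale-run) masses:

```python
import numpy as np

m_e, m_mu, m_tau = 0.000511, 0.1057, 1.777   # GeV
m_d, m_s, m_b = 0.005, 0.10, 4.2             # GeV, illustrative light-quark values

print(np.sqrt(m_s / m_b))     # ~0.15, to compare with |V_cb| ~ 0.04
print(np.sqrt(m_mu / m_tau))  # ~0.24, to compare with sin(theta_atm) ~ 0.7
print(np.sqrt(m_d / m_s))     # ~0.22, the Cabibbo-angle relation
print(np.sqrt(m_e / m_mu))    # ~0.07, the expectation for sin(theta_sol)
```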
The above discussion is the key-point that will be extensively developed and discussed in the next section.
## 2 Modifying the Fritzsch ansatz in $`SU(5)`$
In the $`SU(5)`$ model the masses of the fermions $`\overline{5}_i=(d^c,l)_i`$, $`10_i=(u^c,e^c,q)_i`$ arise from the following Yukawa terms:
$$\overline{H}10_i𝐆^{ij}\overline{5}_j+H10_i𝐆_u^{ij}10_j+\frac{HH}{M}\overline{5}_i𝐆_\nu ^{ij}\overline{5}_j$$
(19)
where $`H=(T,\varphi _2)\sim 5`$ and $`\overline{H}=(\overline{T},\varphi _1)\sim \overline{5}`$
are the Higgses. The Yukawa constant matrices $`𝐆_u`$ and $`𝐆_\nu `$ are symmetric due to $`SU(5)`$ symmetry reasons while the form of $`𝐆`$ is not constrained. Upon breaking the $`SU(5)`$ symmetry, we recover the SM Yukawa couplings (4) with
$$𝐘_e=𝐆,𝐘_d=𝐆^T,𝐘_u=𝐆_u,𝐘_\nu =𝐆_\nu $$
(20)
To simplify the discussion we shall assume, without loss of generality, that the matrices $`𝐆_u`$ and $`𝐆_\nu `$ are diagonal. Then the weak mixing matrices in the quark and leptonic sectors are just $`V_q=U_d`$ and $`V_l=U_e^{}`$. On the other hand, since $`𝐘_d=𝐘_e^T`$, we get that $`U_d=U_e^{}`$ and $`U_e=U_d^{}`$, so that the rotation angles of the left down quarks (charged leptons) are related to the unphysical angles rotating the right states of the charged leptons (down quarks). In the minimal $`SU(5)`$ model the entries of the matrix $`𝐆`$ are just constants and one faces the well-known problem of the down-quark and charged-lepton degeneracy at the GUT scale. While the $`Y_b=Y_\tau `$ unification is a success of the SUSY $`SU(5)`$ GUT, the other predictions $`Y_{s,d}=Y_{\mu ,e}`$ are clearly wrong.
A more satisfactory picture emerges if the terms $`\overline{H}10_i𝐆^{ij}\overline{5}_j`$ are understood as effective cubic couplings originating from higher-order operators, such as $`\overline{H}10_i(\frac{\mathrm{\Phi }}{M_s}\widehat{G}_{ij})\overline{5}_j`$, where $`\mathrm{\Phi }`$ is the $`SU(5)`$ adjoint and $`M_s`$ is some fundamental scale larger than the GUT scale. As a consequence, the corresponding entries in $`𝐘_e`$ and $`𝐘_d`$ can be distinguished by Clebsch coefficients.
In this perspective the matrices $`𝐘_e`$ and $`𝐘_d`$ can assume the asymmetric form given in Eq. (15). Phenomenological arguments impose these further relations:
$`C_d=C_e(=C),`$ (21)
$`A_d=A_d^{}=A_e^{}=A_e(=A),`$
$`B_d^{}=k^{}B_e,B_d=kB_e^{}`$
where the coefficients $`k`$ and $`k^{}`$ are nontrivial $`SU(5)`$ Clebsches breaking the quark and lepton symmetry. Introducing the 23-sector asymmetry parameters $`b_e=B_e/B_e^{}`$ and $`b_d=B_d^{}/B_d=\frac{k^{}}{k}b_e`$ we finally end up with the following textures:
$`𝐘_e=\left(\begin{array}{ccc}0& A& 0\\ A& 0& \frac{1}{b}B\\ 0& B& C\end{array}\right),`$ (25)
$`𝐘_d=\left(\begin{array}{ccc}0& A& 0\\ A& 0& k^{}B\\ 0& \frac{k}{b}B& C\end{array}\right)`$ (29)
This ansatz depends on six parameters: three Yukawa entries $`A,B,C`$ and three Clebsch factors $`k,k^{}`$ and $`b`$. Through these parameters we have to determine six eigenvalues – $`Y_{e,\mu ,\tau }`$ and $`Y_{d,s,b}`$ – and six mixing angles – $`s_{12}^q,s_{23}^q,s_{13}^q`$ and $`s_{12}^l,s_{23}^l,s_{13}^l`$. Hence at the GUT scale we are left with six relations between the physical observables. The leptonic mixing angles can be then expressed in terms of ratios of the corresponding lepton masses and the asymmetry parameter $`b_e`$. Fig. 1 illustrates the $`b`$-dependence of the leptonic mixing angles and of the corresponding parameters $`\mathrm{sin}^22\theta _{23}^l=4|V_{\mu 3}|^2(1|V_{\mu 3}|^2)`$ and $`\mathrm{sin}^22\theta _{12}^l=4|V_{e2}|^2(1|V_{e2}|^2)`$.
For $`b=1`$ the 23 mixing angle is rather small for explaining the AN anomaly, while the 12 mixing is somewhat above the upper limit obtained by the MSW fit of the SN data (c.f. (1)). However, for larger $`b`$, $`|V_{\mu 3}|`$ increases roughly as $`\sqrt{b}`$ and becomes maximal around $`b=8.4`$, while $`|V_{e2}|`$ slowly decreases (roughly as $`\sqrt{c_{23}^e}`$). Thus, the AN bound, $`\mathrm{sin}^22\theta _{23}>0.86`$, requires $`6<b<12`$, while the SN data favour $`b>7`$, for which $`\mathrm{sin}^22\theta _{12}^l`$ drops below $`1.5\times 10^{-2}`$.
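The $`b`$-dependence quoted here is easy to reproduce from the 23 block of texture (25) alone. A minimal numerical sketch follows; fixing $`B`$ from the eigenvalue product of the block via $`m_\mu /m_\tau `$ and neglecting the 12 entries are both simplifying assumptions:

```python
import numpy as np

r = 0.1057 / 1.777    # m_mu / m_tau

def sin2_2theta23(b, C=1.0):
    # 23 block of Y_e from texture (25); the determinant of the block fixes
    # B = C * sqrt(b * m_mu/m_tau).  Since Y_e couples as e^c Y_e l, the
    # doublets l rotate with the eigenvectors of Y_e^dagger Y_e.
    B = C * np.sqrt(b * r)
    Y = np.array([[0.0, B / b], [B, C]])
    w, U = np.linalg.eigh(Y.T @ Y)     # eigenvalues in ascending order
    s = abs(U[0, 1])                   # |V_mu3| ~ sin(theta_23)
    return 4.0 * s**2 * (1.0 - s**2)

for b in (1.0, 4.0, 8.4, 12.0):
    print(b, round(sin2_2theta23(b), 2))   # mixing grows with b, peaks near 8.4
```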
Analogously, the quark masses and mixing angles can be expressed in terms of the lepton mass ratios and of the three parameters $`b_e,k,k^{}`$. We then show the behaviour of the mixings (Fig. 2) and of the masses<sup>5</sup><sup>5</sup>5The renormalization scaling has been taken into account for the bottom mass. $`m_s`$, $`m_b`$ and the ratio $`m_s/m_d`$ (Fig. 3) for several values of $`k,k^{}`$ . For $`k\simeq k^{}`$ and large values of $`b`$ ($`b=7-12`$, as required from the lepton mixing) we achieve quite a satisfactory description also of the quark sector. The pattern with $`k=k^{}=1/2`$ looks somehow favoured. We also learn from Fig. 3 that rather small values of $`Y_t\simeq 0.5-1`$ are needed to obtain the correct bottom mass for $`b>7`$. This special feature arises from the substantial correction to the $`b-\tau `$ Yukawa unification due to the large $`b`$.
In a more general case, we have to expect also $`𝐘_u,𝐘_\nu `$ to have a Fritzsch-like form. This would occur in the presence of some underlying horizontal symmetry. Such a scenario would provide some different features. In this case smaller values of $`b_{e,d}`$ can suffice since now the mixing angles will be contributed also by the unitary matrices $`U_u`$ and $`U_\nu `$: $`V_q=U_u^{}U_d`$ and $`V_l=U_e^{}U_\nu `$. For the CKM mixing angles we have:
$`|V_{cb}|=s_{23}^q\simeq \left|s_{23}^d-e^{i\phi }s_{23}^u\right|,`$
$`|V_{us}|=s_{12}^q\simeq \left|s_{12}^d-e^{i\delta }s_{12}^u\right|,\left|{\displaystyle \frac{V_{ub}}{V_{cb}}}\right|\simeq s_{12}^u`$ (30)
where the phases $`\phi `$, $`\delta `$ etc are combinations of the independent phases in the Yukawa matrices. The $`\theta _{23}^u`$, $`\theta _{12}^u`$ are the analogous angles diagonalizing $`𝐘_u`$: $`\mathrm{tan}\theta _{23}^u=\sqrt{Y_c/Y_t}`$ and $`\mathrm{tan}\theta _{12}^u=\sqrt{m_u/m_c}`$. By varying the phase $`\phi `$ from $`0`$ to $`\pi `$, the value of the 23 mixing angle in the CKM matrix can vary between its minimal and maximal possible values:
$$\theta _{23}^{q()}=\theta _{23}^d\theta _{23}^u$$
(31)
Analogously, for the leptonic mixing we have
$$\theta _{23}^{l(\mp )}=\theta _{23}^e\mp \theta _{23}^\nu $$
(32)
where $`\mathrm{tan}\theta _{23}^\nu =\sqrt{m_2/m_3}`$. Thus, for the range of the neutrino masses indicated in (2) we obtain $`\theta _{23}^\nu =(11.8_{-3.0}^{+5.0})^{\circ }`$. In the case of moderate asymmetry in $`𝐘_{d,e}`$, the entries in (31) are big as compared to the experimental value of $`\theta _{23}^q`$ while each of the entries in (32) is too small for the magnitude of $`\theta _{23}^l`$ required by the AN oscillation. However, by properly tuning the phases, $`\theta _{23}^q`$ can get close to $`\theta _{23}^{q(-)}=\theta _{23}^d-\theta _{23}^u`$ while $`\theta _{23}^l`$ can approach $`\theta _{23}^{l(+)}=\theta _{23}^e+\theta _{23}^\nu `$. Therefore, even for small values $`b_{e,d}\simeq 2`$, one could achieve a proper fit of the mixing angles. In ref. an example of the realization of such a scenario, implementing the $`U(2)`$ horizontal symmetry, is illustrated.
## 3 Conclusions
We have discussed how the present pattern of the leptonic mixing angles, characterized by a maximal mixing between the second and third generation, can be linked to the CKM mixing angles in the $`SU(5)`$ grand unification thanks to the fermion multiplet structure. In particular, this has been shown assuming the fermion Yukawa matrices to have a Fritzsch-like form with an asymmetric 23-block and (essentially) symmetric 12-block.
We remark that alternative and realistic ansätze with diagonal $`𝐘_{u,\nu }`$ (accounting e.g. for CP violation) can be motivated in the context of a $`U(3)`$ horizontal symmetry .
## 4 Acknowledgements
A. R. wishes to thank Jose Valle and all organizers of the Conference for the pleasant and interesting atmosphere.
# Quantum Trajectories for Brownian Motion
## Abstract
We present the stochastic Schrödinger equation for the dynamics of a quantum particle coupled to a high temperature environment and apply it to the dynamics of a driven, damped, nonlinear quantum oscillator. Apart from an initial slip on the environmental memory time scale, in the mean, our result recovers the solution of the known non-Lindblad quantum Brownian motion master equation. A remarkable feature of our powerful stochastic approach is its localization property: individual quantum trajectories remain localized wave packets for all times, even for the classically chaotic system considered here, the localization being stronger as $`\hbar \to 0`$.
The understanding of the dynamics of open or dissipative quantum systems is of fundamental importance both from a practical and conceptual point of view. The archetype of such a system is the standard quantum Brownian motion model which describes a particle with Hamiltonian $`H(q,p)`$, coupled to an environment of harmonic oscillators $`(q_\lambda ,p_\lambda )`$ via its position $`q`$, such that the total Hamiltonian of system and environment reads
$`H_{tot}(q,p,q_\lambda ,p_\lambda )`$ $`=`$ $`H(q,p)+`$ (2)
$`{\displaystyle \underset{\lambda }{\sum }}\left\{{\displaystyle \frac{p_\lambda ^2}{2m_\lambda }}+{\displaystyle \frac{1}{2}}m_\lambda \omega _\lambda ^2\left(q_\lambda -{\displaystyle \frac{g_\lambda }{m_\lambda \omega _\lambda ^2}}q\right)^2\right\}.`$
Up to now, in order to determine the time dependent dynamics of the open ‘system’, the standard procedure was the derivation of a master equation for the reduced density operator, which, for the high temperature case considered below, is widely accepted to read
$$\hbar \dot{\rho }_t=-i[H,\rho _t]-i\frac{\gamma }{2}[q,\{p,\rho _t\}]-\frac{m\gamma kT}{\hbar }[q,[q,\rho _t]],$$
(3)
where $`\gamma `$ is the damping rate. This master equation is a Markov master equation, not, however, of Lindblad form, and indeed it turns out that it may violate the positivity of $`\rho _t`$ on very short time scales, which has led to an ongoing debate about its range of applicability . We will briefly address this issue later on in this Letter.
Our new approach to quantum Brownian motion is very different and circumvents the derivation of a master equation for $`\rho _t`$ altogether. Instead, we use a stochastic Schrödinger equation, derived straight from the microscopic model (2), for pure states $`\psi _t(z)`$ (quantum trajectories). Our construction recovers the reduced density operator as the ensemble mean $`M[\cdots ]`$ over many of these quantum trajectories, in principle without any approximation:
$$\rho _t=M\left[|\psi _t(z)\rangle \langle \psi _t(z)|\right].$$
(4)
The mean $`M[\cdots ]`$ is taken over the process $`z_t`$ which drives the stochastic Schrödinger equation. We are thus able to determine $`\rho _t`$ in a Monte Carlo sense without an explicit master equation for its time evolution.
Quantum trajectory methods have been used extensively in recent years, mainly in the quantum optics community, due to their numerical efficiency, their intimate connection to (continuous) measurement, and their illustrative power helping to gain physical insight. The master equations encountered in quantum optics are of standard Lindblad type, for which Markov quantum trajectory methods are known for some time now: there are jump processes and diffusive processes recovering the reduced density operator. Despite being maybe the best known of all master equations, the Quantum Brownian motion master equation (3), being not of Lindblad form, has so far been excluded from a treatment with these powerful methods.
Only recently the authors managed to extend the quantum trajectory concept to non-Markovian situations , more precisely, we were able to determine a stochastic Schrödinger equation for the dynamics of a quantum system coupled to a bath of harmonic oscillators as in (2), without using the concept of a master equation for $`\rho _t`$. An alternative approach to non-Markovian quantum trajectories, more emphasizing the continuous measurement point of view, has now also been established .
In its linear version , our non-Markovian quantum state diffusion (QSD) stochastic Schrödinger equation for the quantum Brownian motion model (2) takes the form
$$\hbar \dot{\psi }_t(z)=-iH^{}\psi _t(z)+qz_t\psi _t(z)-q\int _0^tds\,\alpha (t,s)\frac{\delta \psi _t(z)}{\delta z_s},$$
(5)
where we assumed a factorized total initial density operator $`\rho _{tot}=|\psi _0\rangle \langle \psi _0|\otimes \rho _T`$ with a pure system state $`|\psi _0\rangle `$ and an environmental thermal density operator $`\rho _T`$. The influence of the environment on the system is encoded in the bath correlation function $`\alpha (t,s)=\langle F(t)F(s)\rangle _{\rho _T}`$ where $`F(t)=\sum _\lambda g_\lambda q_\lambda (t)`$ is the quantum force in (2) and $`z_t`$ is thus a complex Gaussian stochastic c-number force with correlation $`M[z_t^{*}z_s]=\alpha (t,s)`$. In the usual high temperature limit $`kT\gg \hbar \mathrm{\Lambda }\gg \hbar \omega ,\hbar \gamma `$, where $`\mathrm{\Lambda }`$ is an environmental cutoff frequency and $`\omega `$, $`\gamma `$ are the typical system frequency and damping rate, respectively, one finds
$$\alpha (t,s)=2m\gamma kT\mathrm{\Delta }(t-s)+i\hbar m\gamma \dot{\mathrm{\Delta }}(t-s),$$
(6)
where $`\mathrm{\Delta }(t)`$ is a delta-like function decaying on the environmental ‘memory’ time scale $`\mathrm{\Lambda }^{-1}`$ (here we use $`\mathrm{\Delta }(t)=\frac{\mathrm{\Lambda }}{2}e^{-\mathrm{\Lambda }|t|}`$). In (5), the Hamiltonian $`H^{}=H(q,p)+\frac{1}{2}m\gamma \mathrm{\Lambda }q^2`$ contains an additional potential term that turns out to be counterbalanced by a similar term arising from the memory integral.
Eq. (5) is exact, i.e. it provides a quantum trajectory method for Brownian motion for any temperature and any distribution of environmental oscillators in the model (2), i.e. for any $`\alpha (t,s)`$. In order to compute numbers, however, we have to express the functional derivative under the memory integral in (5) in terms of elementary operators. In the high temperature limit considered here, we simply need to expand in terms of the time delay $`(t-s)`$
$$\frac{\delta \psi _t(z)}{\delta z_s}=\frac{1}{\hbar }\left(q-\frac{p}{m}(t-s)+\cdots \right)\psi _t(z),$$
(7)
where the dots denote terms of order $`(t-s)^2`$ and higher, leading to corrections of the order $`\omega /\mathrm{\Lambda }`$, $`\gamma /\mathrm{\Lambda }`$, which can therefore be neglected (see for a general theory of such ‘post-Markov’ open systems). With (7), the memory integral in (5) takes the form
$$\int _0^tds\,\alpha (t,s)\frac{\delta \psi _t(z)}{\delta z_s}=\left(g_0(t)q-g_1(t)p\right)\psi _t(z),$$
(8)
where we introduce the time dependent coefficients $`g_0(t)=\frac{1}{\hbar }\int _0^tds\,\alpha (t,s)`$ and $`g_1(t)=\frac{1}{m\hbar }\int _0^tds\,(t-s)\alpha (t,s)`$. The imaginary part of $`g_0(t)`$ will be compensated by the additional potential term in $`H^{}`$. The imaginary part of $`g_1(t)`$ gives rise to damping. The real part of $`g_0(t)`$ describes diffusion, and as the real part of $`g_1(t)`$ also gives rise to diffusion, yet smaller by a factor $`\omega /\mathrm{\Lambda }`$, the latter can be neglected compared to the former in the regime we are interested in.
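Both coefficients are elementary quadratures over the correlation function (6). A sketch in dimensionless units (parameter values chosen to match the Duffing example below), which also reproduces the asymptotic values quoted after Eq. (12):

```python
import numpy as np
from scipy.integrate import quad

hbar, m, gamma, kT, Lam = 1.0, 1.0, 0.25, 0.3, 5.0

def Delta(t):                 # memory kernel (Lam/2) * exp(-Lam * |t|)
    return 0.5 * Lam * np.exp(-Lam * abs(t))

def dDelta(t):                # its derivative for t > 0
    return -0.5 * Lam**2 * np.exp(-Lam * t)

def alpha(tau):               # Eq. (6), evaluated at tau = t - s > 0
    return 2*m*gamma*kT * Delta(tau) + 1j*hbar*m*gamma * dDelta(tau)

def g0(t):
    re = quad(lambda s: alpha(t - s).real, 0, t)[0]
    im = quad(lambda s: alpha(t - s).imag, 0, t)[0]
    return (re + 1j * im) / hbar

def g1(t):
    re = quad(lambda s: (t - s) * alpha(t - s).real, 0, t)[0]
    im = quad(lambda s: (t - s) * alpha(t - s).imag, 0, t)[0]
    return (re + 1j * im) / (m * hbar)

# For t >> 1/Lam: g0 -> m*gamma*kT/hbar - i*m*gamma*Lam/2 and Im g1 -> -gamma/2
print(g0(5.0), g1(5.0))
```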
In order to get an efficient Monte Carlo method (importance sampling ), we go over to the nonlinear version of (5), which keeps the trajectories $`\psi _t(z)`$ normalized at all times while retaining the correct ensemble mean (4), see . Using (8), the relevant stochastic Schrödinger equation for Brownian motion reads
$`\hbar \dot{\psi }_t(z)`$ $`=`$ $`-iH\psi _t(z)-i\left({\displaystyle \frac{1}{2}}m\gamma \mathrm{\Lambda }+\text{Im}\{g_0(t)\}\right)q^2\psi _t(z)`$ (12)
$`+(q-\langle q\rangle )z_t\psi _t(z)`$
$`-\text{Re}\{g_0(t)\}\left((q-\langle q\rangle )^2-\langle (q-\langle q\rangle )^2\rangle \right)\psi _t(z)`$
$`+i\text{Im}\{g_1(t)\}\left(qp-\langle q\rangle p+m\langle \dot{q}\rangle q-\langle qp\rangle \right)\psi _t(z).`$
Normalized quantum trajectories $`\psi _t(z)`$ whose ensemble mean gives the desired reduced density operator according to (4) can now be propagated using (12), where $`\langle \dot{q}\rangle =\frac{d}{dt}\langle q\rangle `$ is a quantity which has to be determined numerically along with $`\psi _t(z)`$ (very often the replacement $`m\langle \dot{q}\rangle \to \langle p\rangle `$ turns out to be a good approximation).
In (12), the time dependent coefficients quickly approach their asymptotic values $`g_0(t)\to \frac{m\gamma kT}{\hbar }-\frac{i}{2}m\gamma \mathrm{\Lambda }`$ and $`\text{Im}\{g_1(t)\}\to -\frac{\gamma }{2}`$ for times larger than the environmental memory time. After this initial slip of duration $`t\sim \mathrm{\Lambda }^{-1}`$, (12) becomes
$`\hbar \dot{\psi }_t(z)`$ $`=`$ $`-iH\psi _t(z)+(q-\langle q\rangle )z_t\psi _t(z)`$ (15)
$`-{\displaystyle \frac{m\gamma kT}{\hbar }}\left((q-\langle q\rangle )^2-\langle (q-\langle q\rangle )^2\rangle \right)\psi _t(z)`$
$`-{\displaystyle \frac{i}{2}}\gamma \left(qp-\langle q\rangle p+m\langle \dot{q}\rangle q-\langle qp\rangle \right)\psi _t(z).`$
We now highlight the power of our stochastic Schrödinger equation for Brownian motion (12) by investigating the dynamics of a driven, damped, nonlinear, noisy system, the Duffing oscillator, where $`H=\frac{1}{2}p^2+\frac{1}{4}q^4-\frac{1}{2}q^2+gq\mathrm{cos}(t)`$, here coupled to a heat bath at temperature $`T`$. This system has been studied before using the master equation (3) (see and references therein), including a straight numerical solution which requires the propagation of a huge matrix. In our new approach, one propagates pure states $`\psi _t(z)`$ according to (12), a great reduction in resources, with the need, however, to solve (12) many times in order to evaluate the mean values. For Lindblad master equations, the power of quantum trajectory methods for investigating classically chaotic dissipative systems was shown in (see also ).
We use parameters $`g=0.3`$ with a damping rate $`\gamma =0.25`$; thus the classical problem is chaotic . The environment is furthermore characterized by $`kT=0.3`$, and a cutoff frequency $`\mathrm{\Lambda }=5`$. With $`\hbar `$ of the order $`10^{-2}`$ and smaller (see the various choices of $`\hbar `$ below), the parameters are in the required regime. As initial condition we choose a standard coherent state located at $`q=0.1,p=0.1`$.
In Fig.1 we show the ensemble mean $`M[W_z(q,p,t=4)]`$ over $`1000`$, $`5000`$, and $`10000`$ Wigner functions of pure state trajectories $`\psi _t(z)`$ obtained by solving (12) numerically up to a time $`t=4`$. According to our construction, this quantity converges to the Wigner function of the reduced density operator for many realizations. Here we have chosen $`\hbar =0.01`$, a phase space area corresponding approximately to the extension of the wave packets shown in Fig.2.
FIG. 1. Contour plots of the Wigner function $`W(q,p,t=4)`$ of the reduced density operator of the thermal Duffing oscillator with $`\hbar =0.01`$ (for the phase space area corresponding to this $`\hbar `$ see Fig.2). The contour plots show the ensemble mean over $`1000`$, $`5000`$, and $`10000`$ Wigner functions $`W_z(q,p,t=4)`$ of individual quantum trajectories obtained by solving the quantum Brownian motion stochastic Schrödinger equation (12).
In Fig.2 we show contour plots of Wigner functions $`W_z(q,p,t=4)`$ of four realizations of (12), many of which add up to the Wigner function of the desired reduced density matrix shown in Fig.1. One can see clearly that these individual Wigner functions are well localized in phase space compared to the phase space spread of the ensemble, even for this classically chaotic system.
FIG. 2. Contour plots of Wigner functions $`W_z(q,p,t=4)`$ of four individual quantum trajectories obtained by solving the quantum Brownian motion stochastic Schrödinger equation (12) for the thermal Duffing oscillator. Individual trajectories remain well localized in phase space with respect to the overall spread of the ensemble mean, even for this classically chaotic system. The chosen value of $`\hbar =0.01`$ is slightly smaller than the phase space area covered by these states.
This remarkable feature of the quantum Brownian motion stochastic Schrödinger equation (12) is highlighted again in Fig.3, where we show the mean position spread $`M[\mathrm{\Delta }q]=M[\sqrt{\langle (q-\langle q\rangle )^2\rangle }]`$ and the mean uncertainty product in units of $`\hbar `$, $`M[\mathrm{\Delta }q\mathrm{\Delta }p/\hbar ]`$, of individual trajectories as a function of time for three different choices of $`\hbar `$.
FIG. 3. Localization property of the QBM stochastic Schrödinger equation. Individual runs are well localized in phase space, the localization being stronger the smaller $`\hbar `$: (a) the average position spread $`M[\mathrm{\Delta }q]=M[\sqrt{\langle (q-\langle q\rangle )^2\rangle }]`$ of solutions of the QBM stochastic Schrödinger equation for the choices $`\hbar =0.01`$ (solid line), $`\hbar =0.005`$ (dashed line), and $`\hbar =0.001`$ (dotted line). Fig. (c) shows the mean uncertainty product $`M[\mathrm{\Delta }q\mathrm{\Delta }p]/\hbar `$, which remains of order one almost independently of $`\hbar `$. Thus the quantum trajectories remain almost minimum uncertainty wave packets for all times.
The quantities shown in Fig.3 can only be given meaning within the framework of quantum trajectories; they have no counterpart at the density operator level, since they are ensemble means of expressions non-quadratic in $`\psi _t(z)`$. It is apparent from Fig.3 that individual trajectories are well localized in phase space for all times, the localization being stronger the smaller $`\hbar `$. As can be seen, our quantum trajectories remain almost ‘classical’ states, yet recover the fully quantum master equation (3). Thus, the representation (4) expresses the reduced density operator of quantum Brownian motion explicitly as a mixture of almost ‘classical’ states.
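Evaluating such trajectory-only quantities is straightforward once each $`\psi _t(z)`$ is stored on a grid. A minimal sketch, with Gaussian packets as stand-ins for actual solutions of (12):

```python
import numpy as np

hbar = 0.01
x = np.linspace(-3.0, 3.0, 2048)
dx = x[1] - x[0]
k = 2.0*np.pi*np.fft.fftfreq(x.size, d=dx)

def spreads(psi):
    """Delta_q and Delta_p of a normalized grid wavefunction."""
    rho = np.abs(psi)**2*dx
    q_mean = np.sum(x*rho)
    dq = np.sqrt(np.sum((x - q_mean)**2*rho))
    rho_k = np.abs(np.fft.fft(psi))**2
    rho_k /= rho_k.sum()
    p_mean = np.sum(hbar*k*rho_k)
    dp = np.sqrt(np.sum((hbar*k - p_mean)**2*rho_k))
    return dq, dp

# stand-in ensemble: minimum-uncertainty packets scattered in phase space
rng = np.random.default_rng(0)
ens = []
for _ in range(1000):
    q0, p0 = rng.normal(0.1, 0.3, size=2)
    psi = np.exp(-(x - q0)**2/(2.0*hbar) + 1j*p0*x/hbar)
    psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)
    ens.append(psi)

dq, dp = np.array([spreads(psi) for psi in ens]).T
M_dq = dq.mean()               # M[Delta q]
M_dqdp = (dq*dp).mean()/hbar   # M[Delta q Delta p]/hbar -> 0.5 for these packets
```

The ensemble mean M[…] is taken over trajectories; any density-operator average would first sum $`|\psi \rangle \langle \psi |`$ and thereby mix the ensemble spread into the single-state spreads.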
The observed localization property of QSD is well known in the Markov case and has been studied for instance in . Here we see that similar properties hold for the generalized non-Markovian QSD equation (12) which has now been applied to quantum dynamics beyond the class of Lindblad master equations. As in the Markov case, the localization property can be exploited to further reduce the numerical effort.
Finally, let us briefly address the connection between our approach and the widely used QBM master equation (3). Since a quantum trajectory approach strictly preserves positivity of the reduced density operator, our QBM stochastic Schrödinger equation (12) cannot be identical to (3) in the mean, as the latter is known to violate positivity on short time scales. Taking the ensemble mean $`M\left[\mathrm{}\right]`$ in (4) with (5) analytically, we were able to show in that in the regime considered in this Letter, the evolution of the ensemble mean (4) is well described by the master equation
$`\hbar \dot{\rho }`$ $`=`$ $`-i[H,\rho ]-i\left({\displaystyle \frac{1}{2}}m\gamma \mathrm{\Lambda }+\text{Im}\{g_0(t)\}\right)[q^2,\rho ]`$ (17)
$`+i\text{Im}\{g_1(t)\}[q,\{p,\rho \}]-\text{Re}\{g_0(t)\}[q,[q,\rho ]],`$
which reduces to (3) for times larger than the environmental memory time, $`t\gg \mathrm{\Lambda }^{-1}`$, due to the asymptotics of the coefficients $`g_0(t),g_1(t)`$. Thus, apart from an initial slip on the environmental memory time scale $`\mathrm{\Lambda }^{-1}`$, our approach recovers (3) in the mean. It is known in the case of the exact master equation for a damped harmonic oscillator that such time dependent coefficients may ensure the positivity of the reduced density operator for non-Lindblad master equations, a result that is here supported for a general system Hamiltonian $`H(q,p)`$.
To conclude, we have presented the stochastic Schrödinger equation for Brownian motion. It is compatible with the standard QBM master equation yet allows one to propagate pure states rather than a density matrix, a huge reduction in resources, which becomes even more relevant for QBM in more than one space dimension. Individual trajectories are well localized in phase space, the localization being stronger the smaller $`\hbar `$. Thus, in (4), the reduced density operator for Brownian motion is explicitly represented as an ensemble of almost ‘classical’ states.
We thank F Haake and IC Percival for helpful comments. WTS would like to thank the Deutsche Forschungsgemeinschaft for support through the SFB 237 “Unordnung und große Fluktuationen”. NG and TY thank the Swiss National Science Foundation.
# Effect of the pseudogap on the mean-field magnetic penetration depth of YBa2Cu3O7-δ thin films
\[
## Abstract
We report measurements of the $`ab`$-plane penetration depth, $`\lambda (T)`$, in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> films at various $`\delta `$. At optimal doping, critical fluctuation effects are absent, and $`1/\lambda ^2(T)`$ from 4 K to 0.99 $`T_C`$ is that of a clean, strong-coupling d-wave superconductor with $`\mathrm{\Delta }_0(0)/k_BT_C\simeq 3.3`$. As in crystals, underdoping reduces the superfluid density, $`n_s(0)\propto 1/\lambda ^2(0)`$, without affecting the low-$`T`$ slope of $`1/\lambda ^2(T)`$. These results, as well as electronic heat capacity data, are well described by an ad hoc model in which contributions to the superfluid and entropy are lost from regions of the Fermi surface occupied by the pseudogap.
PACS Nos. 74.25.Fy, 74.25.Nf, 74.40.+k, 74.76.Bz
\]
A large body of experimental evidence indicates the opening of a k-dependent gap, or pseudogap, at a temperature, $`T^{}`$, above the superconducting transition temperature, $`T_C`$, in underdoped cuprates . The pseudogap competes with the superconducting gap: $`T^{}`$ and the fraction of the Fermi surface (FS) occupied by the pseudogap increase with underdoping, while $`T_C`$, the superfluid density in the $`ab`$-plane, $`n_S(0)`$, and the peak value of the electronic specific heat coefficient, $`\gamma (T)`$, at $`T_C`$ decrease. A great deal of effort currently focuses on understanding the coexistence of these two gaps. Lee and coworkers propose that the fundamental physics lies in spin-charge separation, a key element being the segmentation of the FS into regions either occupied or unoccupied by the pseudogap in the normal state . With this in mind, we construct a simple model to describe our measurements of $`n_S(T)`$ and literature results for $`\gamma (T)`$, in which only portions of the FS unoccupied by the pseudogap contribute to the superfluid and entropy in the superconducting state.
We present new measurements of $`1/\lambda ^2(T)\propto n_S(T)`$ in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> (YBCO) films at various $`\delta `$. We show that $`1/\lambda ^2(T)`$ in optimally-doped films is that of a clean, strong-coupling d-wave superconductor with a full FS. Underdoped films are well described by a Fermi liquid-like model, in which electronic properties are expressed as integrals over the FS, but the integrals extend only over sections of the FS not occupied by the pseudogap in the normal state. The fraction of the FS that survives is equal to the ratio of $`n_S(0)`$ of the underdoped film to $`n_{S,opt.}(0)`$ of the same film at optimal doping.
The absence of critical fluctuations in optimally-doped YBCO films and measurements of the effect of thermal phase fluctuations on 2D films of a conventional superconductor lead us to conclude that fluctuation effects are weak in the underdoped films. However, the relative importance of thermal phase fluctuations (TPF’s) compared to single particle excitations is controversial. Carlson et al. have shown numerically that a fluctuation-driven superfluid density in Josephson junction (JJ) arrays displays features similar to some very clean YBCO crystals, namely, $`T`$-linear behavior at low-$`T`$ , $`T_C`$ roughly proportional to $`n_S(0)`$, and a wide critical region . Terahertz measurements of the sheet conductance of BSCCO films also suggest a wide fluctuation region . On the other hand, theoretical analyses of TPF’s in underdoped cuprates conclude that fluctuations are too weak at low-$`T`$ to account for the $`T`$-linear behavior of $`\lambda (T)`$ . Consistent with these results, our estimates indicate that the effects of TPF’s are minor.
It is not known why fluctuation effects near $`T_C`$ are weak in optimally-doped YBCO films and some crystals , while appearing strong in other crystals . The films are of high quality, based on their $`T`$-linear $`\lambda (T)`$ at low-$`T`$ and transition widths less than 1 K. Evidently, critical fluctuations are sensitive to structural differences which affect neither of these quality indicators. An estimate of the coupling between CuO bilayers in optimally-doped YBCO finds that for $`T`$ within 5 K of $`T_C`$, the ratio of interlayer coupling to in-plane coupling (J’/J in ref. ) is greater than unity, so fluctuations should be strongly suppressed. To us, the presence of critical fluctuations in crystals is surprising, but their absence in films is not. We speculate that a significant portion of what appear to be critical fluctuations in crystals is, in fact, due to the rapid decrease in the quasiparticle scattering rate as $`T`$ decreases below $`T_C`$, which serves to rapidly increase the conductivity, $`\sigma _1(\omega ,T)`$, for $`\omega `$ less than the gap frequency, $`\mathrm{\Delta }_0(T)/\hbar `$, thereby increasing $`n_S`$ . The decrease in scattering rate is known to be less rapid in disordered samples .
The d-wave theory which we use is an extension of the weak-coupling result for $`\lambda ^2(0)/\lambda ^2(T/T_{C0})`$ to strong coupling by increasing the ratio $`\mathrm{\Delta }_0(0)/k_BT_{C0}`$ above its weak-coupling value of 2.14, while preserving the dependence of $`\mathrm{\Delta }_0(T/T_{C0})/\mathrm{\Delta }_0(0)`$ on $`T/T_{C0}`$. $`T_{C0}`$ is the mean-field transition temperature. $`1/\lambda ^2(T)`$ and $`\gamma (T)`$ are determined from the usual FS integrals over k-space and energy.
For simplicity, we neglect possible deviations of $`\mathrm{\Delta }(𝐤,T)`$ from $`\mathrm{\Delta }_0(T)`$cos(2$`\varphi `$) which are reported in ARPES measurements on underdoped Bi<sub>2</sub>Sr<sub>2</sub>Ca<sub>1</sub>Cu<sub>2</sub>O<sub>8+δ</sub> (BSCCO) and which may or may not be present in YBCO. Possible anomalous behavior of the quasi-1D CuO chains is not included. Because such behavior is not observed in untwinned crystals, it is likely that chain specific effects are negligible in our highly twinned films. Variation of the Fermi velocity, $`v_F(𝐤)`$, with $`𝐤`$ is not included. If $`n_S(0)`$ is a factor $`F`$ smaller than in the optimally-doped film, then FS integrals are taken only over the angular interval, $`\pm F(\pi /4)`$, centered at each node in $`\mathrm{\Delta }(𝐤,T)`$.
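The model just described reduces to a short quadrature. Below is a numerical sketch, assuming the standard clean-limit FS integral for the superfluid fraction and a BCS-like tanh interpolation for $`\mathrm{\Delta }_0(T)`$ (only an approximate stand-in for the temperature dependence used here):

```python
import numpy as np

def gap(T, Tc0, ratio=3.3):
    """Delta_0(T) in temperature units; BCS-like tanh interpolation
    (an assumed stand-in for the dependence used in the text)."""
    return 0.0 if T >= Tc0 else ratio*Tc0*np.tanh(1.74*np.sqrt(Tc0/T - 1.0))

def ns_fraction(T, Tc0, F=1.0, ratio=3.3, ne=600, nphi=200):
    """n_S(T)/n_S,opt(0): clean d-wave FS integral restricted to the
    window +-F*(pi/4) around each node of Delta(k) = Delta_0*cos(2*phi)."""
    d0 = gap(T, Tc0, ratio)
    phi = np.linspace(np.pi/4.0 - F*np.pi/4.0, np.pi/4.0, nphi)  # one octant
    eps = np.linspace(0.0, 20.0*T, ne)[:, None]                  # band energies
    E = np.sqrt(eps**2 + (d0*np.cos(2.0*phi))**2)
    Y = np.trapz(0.5/(T*np.cosh(E/(2.0*T))**2), eps[:, 0], axis=0)
    return F*(1.0 - Y.mean())      # the lost FS fraction rescales n_S(0) by F

# underdoping (F < 1) lowers n_S(0); the low-T slope stays nodal-dominated
for F in (1.0, 0.7, 0.4):
    print(F, [round(ns_fraction(T, Tc0=90.0, F=F), 3) for T in (4, 45, 80)])
```

In absolute units the low-$`T`$ slope of this quadrature comes out essentially $`F`$-independent, because the node always lies inside the retained window; this is the same behavior as the unchanged slope reported for the underdoped films below.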
2D TPF’s are predicted to suppress $`n_S(T)`$ by a factor $`af_Q(T)k_BT\mu _0\lambda _{\perp }(T)/\hbar R_Q`$, where $`1/\lambda _{\perp }(T)\equiv d/\lambda ^2(T)`$ and $`d`$ = 11.7 $`\AA `$ for YBCO. The prefactor, $`a`$, is not well known theoretically. We use $`a=0.175`$, which is consistent with measurements on a-MoGe films and calculations for a hexagonal array of resistively shunted JJ’s . $`R_Q\equiv \hbar /4e^2\simeq 1027\mathrm{\Omega }`$ and $`f_Q`$ ($`0\le f_Q\le 1`$) represents quantum suppression of TPF’s. The 2D transition, $`T_{2D}`$, is expected where $`\mu _0\lambda _{\perp }(T_{2D})T_{2D}\simeq \pi \hbar R_Q/2k_B`$, i.e. $`\lambda _{\perp }(T_{2D})T_{2D}\simeq 9.8`$ mmK. Since $`T_{2D}`$ is near $`T_C`$, this condition is approximately $`\lambda _{\perp }(T_{2D})\simeq 9.8\mathrm{mmK}/T_C`$. (In the absence of quantum effects, $`T_{2D}`$ is the Kosterlitz-Thouless-Berezinskii transition temperature $`T_{KTB}`$.) As $`T`$ approaches $`T_{2D}`$, the suppression of $`n_S(T)`$ grows rapidly due to nonlinear effects, reaching 20% to 50% just below $`T_{2D}`$.
Quantum suppression of TPF’s in films has been controversial for some time. Recently, quantum effects were predicted and observed to suppress TPF’s when $`k_BT/\hbar `$ drops below the ”$`R/L`$” frequency of the film. Approximately, the sheet inductance is $`L=\mu _0\lambda _{\perp }(T)`$ and the sheet conductance is $`1/R=\sigma _1(\omega \simeq \mathrm{\Delta }_0/\hbar ,T)d`$. For optimally-doped YBCO, $`f_Q`$ should be much less than unity below $`0.8T_C`$.
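The scales involved are quick to check. The snippet below reproduces $`R_Q`$ and the 2D criterion; the values of $`\lambda _{\perp }`$ and $`T`$ are illustrative magnitudes assumed here (of the order implied by a typical $`\lambda `$ of order 100 nm and $`d`$ = 11.7 Å), not measured data.

```python
import numpy as np

hbar, e, kB, mu0 = 1.0546e-34, 1.6022e-19, 1.3807e-23, 4e-7*np.pi

R_Q = hbar/(4.0*e**2)                    # -> ~1027 Ohm, as quoted
lamT = np.pi*hbar*R_Q/(2.0*mu0*kB)       # -> ~9.8e-3 m*K, the 2D criterion

# TPF suppression factor a*f_Q*kB*T*mu0*lam_perp/(hbar*R_Q) for
# illustrative values of lam_perp and T (assumptions, not data)
a, f_Q, T, lam_perp = 0.175, 1.0, 80.0, 20e-6
suppression = a*f_Q*kB*T*mu0*lam_perp/(hbar*R_Q)
print(R_Q, lamT, suppression)            # ~1027 Ohm, ~0.0098 m*K, a few percent
```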
Data presented here for films grown by coevaporation and sputtering are representative of other high quality films. Films allow the rapid and reversible adjustment of oxygen content without altering the thickness or microstructure of the film. The coevaporated YBCO films were grown using the BaF<sub>2</sub> method with a room temperature SrTiO<sub>3</sub> substrate in an atmosphere of about $`5\times 10^{-6}`$ torr of O<sub>2</sub>, with a postanneal in wet oxygen. After careful refinement of the growth rates from measurements of film stoichiometry by Rutherford Backscattering, YBCO films grown by this method consistently display a linear low-T penetration depth. Additional films made by RF sputtering and by in-situ coevaporation of Sm, Ba, and Cu onto substrates held at 750 C to 800 C are presented. Optimally-doped film transition widths were less than 1 K, based on the peak in the real conductivity $`\sigma _1(T)`$. One of the codeposited YBCO films was deoxygenated three times to approximately O<sub>6.8</sub>, O<sub>6.7</sub>, and O<sub>6.6</sub> (determined from $`T_C`$) using low temperature anneals in argon.
The complex conductivity, $`\sigma =\sigma _1-i\sigma _2`$, is determined from the mutual inductance of coaxial coils driven at 50 kHz located on opposite sides of the film . With a well defined coil geometry and known film thickness, $`s`$, Maxwell’s equations provide $`\sigma `$ as a function of the measured mutual inductance. Great care is taken to ensure that measurements are made in the linear response regime to within 0.1 K of $`T_C`$ by taking successive measurements at increasingly smaller drive coil currents. $`\sigma _1`$ is very much smaller than $`\sigma _2`$ everywhere except very close to $`T_C`$. From $`\sigma _2`$ we define $`\lambda _{\perp }`$:
$`\lambda _{\perp }(T)\equiv {\displaystyle \frac{s/d}{\mu _0\sigma _2\omega }}.`$ (1)
While $`\lambda _{\perp }(0)`$ is determined to an accuracy of about 10%, limited by uncertainty in film thickness, relative changes induced by deoxygenation are known to better than 5%.
Figure 1 shows $`1/\lambda _{\perp }(T)`$ for an optimally-doped YBCO film. The solid curve is weak-coupling d-wave theory ($`\mathrm{\Delta }_0(0)/k_BT_{C0}=2.14`$) fitted to the low-$`T`$ data by adjusting $`T_{C0}`$. The dashed curve, which is nearly indistinguishable from the data, is strong-coupling d-wave theory with $`\mathrm{\Delta }_0(0)/k_B=3.3T_{C0}\simeq 300`$ K, consistent with tunneling measurements . The upper right inset to Figure 1 shows the excellent agreement between the measured $`1/\lambda _{\perp }(T)`$ and mean-field curves for several optimally-doped YBCO films and one film of SmBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub>. Given $`1/\lambda _{\perp }(0)`$ and taking $`v_F`$ = $`0.7\times 10^5`$ m/s , $`\gamma (T)`$ calculated with the same gap compares well with data on bulk YBCO (lower left inset to Fig. 1, with $`\gamma _0`$ = 16 mJ/mole K<sup>2</sup>). The small experimental $`\gamma (T)`$ at low-$`T`$ could be fit better if the model allowed a k-dependent $`v_F`$ and deviations of $`\mathrm{\Delta }(𝐤)`$ from cos(2$`\varphi `$).
The inset to Figure 2 shows $`1/\lambda _{\perp }(T)`$ for a YBCO film at three stages of deoxygenation. It is striking that $`dn_S(T)/dT`$ at low-$`T`$ (not necessarily the normalized shape) is essentially independent of doping despite the introduction of oxygen vacancies into the CuO chains. In our model, the unchanged slope, $`d\lambda _{\perp }^{-1}(T)/dT|_{T\to 0}`$, indicates no change in the opening of the gap near the gap nodes . The downward curvature of $`1/\lambda _{\perp }(T)`$ reflects that of $`\mathrm{\Delta }_0(T)`$ if TPF’s are negligible. We wish to extract the underlying mean-field behavior by estimating the effect of 2D TPF’s. For reference, the intersection of the dotted line with $`1/\lambda _{\perp }(T)`$ (inset to Figure 2) would locate $`T_{2D}`$ were there no interlayer coupling and no quantum suppression of phase fluctuations.
The dashed curves in the inset and main portion of Figure 2 are extrapolations of the data taken below $`T_{2D}`$. The lower solid curves are fits in which hypothetical mean-field behavior (upper solid curves) is suppressed by TPF’s. Since the influence of TPF’s is determined by $`T`$, the normal-state sheet resistance, and $`\lambda _{\perp }(T)`$, all measured quantities, the only fitting parameter is $`T_{C0}`$. The third deoxygenation step yields fits similar to the first two, but is left out for clarity. A reduction in fluctuations as a result of interlayer coupling would bring the fluctuation-corrected mean-field result closer to the data, so uncertainty in mean-field behavior is bracketed by the upper solid curves and the data. We take the simple extrapolations (dashed curves) as reasonable approximations to mean-field behavior at least for $`T<T_{2D}`$. Our conclusions are insensitive to the extrapolation and to the ”foot” seen just below $`T_C`$ which is likely due to oxygen inhomogeneity.
Figure 3 displays the normal fluid density, $`n_N(T)\propto 1/\lambda _{\perp }(0)-1/\lambda _{\perp }(T)`$, represented by the mean-field (dashed) curves of Fig. 2. Our model reproduces these when $`\mathrm{\Delta }_0(0)\simeq 300`$ K is fixed and $`T_{C0}\propto F^{2/5}`$, where $`F\equiv n_S(0)/n_{S,opt.}(0)`$, in the experimental range $`0.4\le F\le 1`$. The gap ratio, $`\mathrm{\Delta }_0(0)/k_BT_{C0}`$, increases with underdoping, but the gap ratio defined from the maximum gap on the contributing FS segments, $`\mathrm{\Delta }_{max}=\mathrm{\Delta }_0(0)\mathrm{cos}(\frac{\pi }{2}(1-F))`$, increases only slightly. The contributing FS segments and $`\mathrm{\Delta }_{max}`$ are shown pictorially in the inset to Figure 3.
The model provides a basis for interpretation of $`\gamma (T)`$ . For optimal and mildly underdoped YBCO, there are three striking features in the data. One is that $`\gamma (T)`$ for $`T\gtrsim `$ 60 K is nearly independent of doping. Another is that the peak value of $`\gamma (T)`$ at $`T_C`$ decreases much more rapidly than $`T_C`$ with underdoping. Finally, the electronic entropy just above $`T_C`$, i.e., the integral of $`\gamma (T)`$ from 0 K to $`T_C`$, decreases significantly. This implies that the hypothetical normal-state $`\gamma (T)`$ must decrease dramatically as $`T`$ decreases below $`T_C`$, even though $`\gamma `$ is nearly constant for $`T>T_C`$.
Figure 3 shows that $`\gamma (T)`$ calculated with parameters fixed by $`\lambda (T)`$ has all of the experimental features mentioned above. This is significant. Simple models which account for the reduction in $`n_S(0)`$ by an increase in the effective mass of electrons, for example, would not describe $`\gamma `$ accurately. Our model incorrectly predicts that $`\gamma (T)`$ just above $`T_C`$ should decrease with underdoping. We hypothesize that our model describes the electron degrees of freedom that condense into the superconducting state, and that there exists, in addition, an anomalous contribution to $`\gamma (T)`$ which arises from degrees of freedom not associated with the superfluid (perhaps from electron spin degrees of freedom) and which decreases rapidly in the vicinity of $`T_C`$.
For strong underdoping, $`F\lesssim 0.4`$, the model predicts that the thermodynamic critical field, $`B_C(0)`$, is proportional to $`n_S(0)^{1/2}T_{C0}`$, and the upper critical field, $`B_{C2}(0)`$, perpendicular to the $`ab`$-plane is proportional to $`T_{C0}^2`$ .
The authors would like to thank Aaron A. Pesetski and John A. Skinta for useful discussions and James E. Baumgardner II for numerical calculations and data analysis software. The SmBaCuO film was generously provided by Vladimir Matijasevic. This work was supported in part by DOE Contract No. DE-FG02-90ER45427 through the Midwest Superconductivity Consortium.
# Universality of probability distributions among two-dimensional turbulent flows
(July 1, 1999)
## Abstract
We study statistical properties of two-dimensional turbulent flows. Three systems are considered: the Navier-Stokes equation, surface quasi-geostrophic flow, and a model equation for thermal convection in the Earth’s mantle. Direct numerical simulations are used to determine 1-point fluctuation properties. Comparative study shows universality of probability density functions (PDFs) across different types of flow. Especially for the derivatives of the “advected” quantity, the shapes of the PDFs are the same for the three flows, once normalized by the average size of fluctuations. Theoretical models for the shape of PDFs are briefly discussed.
PACS numbers: 92.10.Lq, 47.27.Gs, 5.20.Lj
The central idea of classical turbulence theory is that certain statistical properties in turbulent flow are independent of the details of the flow, like its boundaries, dissipation mechanism, and the kind of forcing, as long as the Reynolds number is sufficiently high. In this sense turbulent flow would be universal. Here we shall investigate not independence of boundary conditions, forcing, etc., but universality across equations. This is not a crazy idea at all and has long been demonstrated for other classes of partial differential equations. Many partial differential equations with wave-like behavior, possessing common symmetries, exhibit identical fluctuation properties, once normalized by their standard deviations. (The Schrödinger equation belongs to this class, as formulated in the well-known Bohigas-Giannoni-Schmit conjecture.) In the present paper it is demonstrated that three different hyperbolic partial differential equations (which all possess the same symmetries) exhibit, under analogous conditions, the same statistics for their fluctuations, once normalized by their standard deviations.
The three flows are described by advection-diffusion equations
$$\frac{\partial \theta }{\partial t}+\stackrel{}{v}\cdot \mathrm{\nabla }\theta =D\mathrm{\nabla }^2\theta +f.$$
(1a)
The scalar quantity advected is $`\theta `$. The forcing $`f`$ supplies the energy dissipated via a dissipation constant $`D`$. The velocity $`\stackrel{}{v}`$ is a function of $`\theta `$, best written in Fourier space,
$$\widehat{\stackrel{}{v}}(\stackrel{}{k})=i\frac{\stackrel{}{k}\times \widehat{\theta }(\stackrel{}{k})}{|\stackrel{}{k}|^\alpha }.$$
(1b)
The two-dimensional cross product $`\stackrel{}{k}\times \widehat{\theta }`$ is to be understood as a vector of length $`|\stackrel{}{k}||\widehat{\theta }|`$ and direction perpendicular to $`\stackrel{}{k}`$. It follows that $`\mathrm{\nabla }\cdot \stackrel{}{v}=0`$.
Different values of $`\alpha `$ correspond to different flows . The two-dimensional Navier-Stokes equation is $`\alpha =2`$ and $`\theta `$ corresponds to the vorticity $`\mathrm{\nabla }\times \stackrel{}{v}`$. The surface quasi-geostrophic equation, $`\alpha =1`$, is a special case of the important quasi-geostrophic equation that describes flow of a shallow layer on a rotating sphere, as relevant for planetary atmospheres and oceans . In this case, $`\theta `$ is physically interpreted as temperature, which drives the flow through its buoyancy effect. The third equation considered is $`\alpha =3`$, which also appears in a geophysical context as a limiting case of a shallow flow on a rotating sphere with uniform internal heating . Also here, $`\theta `$ is a temperature.
Other values of $`\alpha `$, integer or not, could be considered, but this is not done here. Finite-time singularities develop for $`\alpha <1`$, but not for $`\alpha >1`$. For the border case of $`\alpha =1`$ no corresponding analytical proof has been achieved, but numerical evidence suggests that no finite-time singularities occur .
Multiplying eqn. (1a) with $`\theta `$ and averaging over space with periodic boundary conditions yields
$$\frac{1}{2}\frac{\partial }{\partial t}\langle \theta ^2\rangle =-D\langle |\mathrm{\nabla }\theta |^2\rangle +\langle f\theta \rangle .$$
(2)
Consequently the left hand side of eqn. (1a) conserves $`\langle \theta ^2\rangle `$ for all $`\alpha `$. For $`\alpha =2`$ also $`\langle \stackrel{}{v}^2\rangle `$ is a conserved quantity, while $`\langle \stackrel{}{v}^2\rangle =\langle \theta ^2\rangle `$ for $`\alpha =1`$, and $`\langle \stackrel{}{v}^2\rangle `$ is not conserved for $`\alpha =3`$. Equation (1) is invariant under $`r\to -r`$ as well as the set of simultaneous transformations $`r\to \lambda r`$, $`t\to \lambda ^2t`$, $`\theta \to \theta /\lambda ^\alpha `$, and $`f\to f/\lambda ^{\alpha +2}`$ (this is essentially the Reynolds number invariance). The family of flows with different $`\alpha `$ has been named $`\alpha `$-turbulence , although this term appears in the literature also for other kinds of flow.
The flow is simulated in a doubly periodic square box. Forcing acts on large scales, $`4|k|<6`$, with constant amplitude but random phases renewed at each time step. (The time step is constant). Two-dimensional turbulent flows produce vortices that merge and grow ever larger. These vortices must be destroyed in order to reach an equilibrium state. This is done by adding a large scale dissipation $`\gamma \theta `$, with $`\gamma `$ small, to the right-hand side of eqn. (1a), restricted to $`0<|k|3`$. All these conditions are designed to produce isotropic and homogeneous flow. The simulations are carried out with a pseudo-spectral method over long time periods using 4-th order Runge-Kutta integration. A mild spectral filter has been used, without complete dealiasing, since it is not clear whether complete dealiasing improves or worsens the quality of simulations.
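A stripped-down version of such a solver fits in a few lines. The sketch below advances eqns. (1a)-(1b) pseudo-spectrally on a small grid; for brevity it uses a plain Euler step and omits the spectral filter, whereas the runs described above use fourth-order Runge-Kutta.

```python
import numpy as np

N, alpha, D, gamma = 128, 1.0, 1e-4, 0.1   # alpha = 1: surface quasi-geostrophy
k = np.fft.fftfreq(N, d=1.0/N)             # integer wavenumbers in a 2*pi box
kx, ky = np.meshgrid(k, k, indexing='ij')
k2 = kx**2 + ky**2
kmod = np.sqrt(k2); kmod[0, 0] = 1.0       # avoid dividing by zero at k = 0

def velocity(th):
    # eqn (1b): v_hat = i (k x theta_hat)/|k|^alpha = i(-ky, kx)*theta_hat/|k|^alpha
    vx = np.fft.ifft2(-1j*ky*th/kmod**alpha).real
    vy = np.fft.ifft2( 1j*kx*th/kmod**alpha).real
    return vx, vy

def rhs(th, rng):
    vx, vy = velocity(th)
    thx = np.fft.ifft2(1j*kx*th).real
    thy = np.fft.ifft2(1j*ky*th).real
    adv = np.fft.fft2(vx*thx + vy*thy)
    # forcing: constant amplitude, fresh random phases, band 4 <= |k| < 6
    f = np.where((kmod >= 4) & (kmod < 6),
                 np.exp(2j*np.pi*rng.random((N, N))), 0.0)
    # large-scale drain on 0 < |k| <= 3 removes the growing large vortices
    drain = np.where((k2 > 0) & (kmod <= 3), gamma*th, 0.0)
    return -adv - D*k2*th + f - drain

rng = np.random.default_rng(1)
th = np.zeros((N, N), dtype=complex)       # theta_hat, spectral representation
dt = 1e-3
for _ in range(1000):
    th = th + dt*rhs(th, rng)
theta = np.fft.ifft2(th).real              # physical-space scalar field
```

Taking the real part after each transform implicitly projects onto real fields, standing in for the Hermitian symmetry that a careful implementation would enforce on the forcing.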
The aforementioned invariance naturally defines a Reynolds number for flow of any $`\alpha `$ as $`Re=UL/D`$, where $`U`$ and $`L`$ are a velocity and length scale respectively. We choose $`U=\sqrt{\langle \stackrel{}{v}^2\rangle }`$ and $`L=1`$ for a large-scale Reynolds number. With this definition the maximum Reynolds numbers achieved are on the order of several thousand on a 1024x1024 grid for each of the three flows.
In this paper only 1-point probability density functions (PDFs) are studied. First, the Navier-Stokes equation ($`\alpha =2`$) is treated, which is important by itself and also exemplifies the variations and dependencies in the PDFs within one equation. In the later part equations with different values of $`\alpha `$ are compared with each other, which is the central concern of this paper.
The PDFs are obtained from spatial snapshots of the flow field, averaging over 8-24 such ensembles. Some of the PDFs shown are scaled by their average fluctuation, defined as
$$\sigma =\int dx\,|x|P(x).$$
(3)
The integral is over all $`x`$. Instead of the first moment the standard deviation could have been used equally well. All PDFs of the Navier-Stokes equation presented here agree with the ones reported from recent simulations by Takahashi and Gotoh at higher Reynolds numbers.
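Concretely, the rescaling of eqn. (3) amounts to dividing each sample set by its mean absolute fluctuation before histogramming. A short sketch, with Gaussian noise as a stand-in for an actual snapshot of $`\partial _x\theta `$:

```python
import numpy as np

def normalized_pdf(field, bins=200, span=10.0):
    """Histogram PDF of a fluctuating field, rescaled by the average
    fluctuation sigma = int dx |x| P(x) of eqn. (3)."""
    x = np.asarray(field, dtype=float).ravel()
    x -= x.mean()
    x /= np.abs(x).mean()            # discrete estimate of eqn. (3)
    pdf, edges = np.histogram(x, bins=bins, range=(-span, span), density=True)
    return 0.5*(edges[:-1] + edges[1:]), pdf

# PDFs from snapshots of different flows are overplotted after this
# rescaling; Gaussian noise here only exercises the machinery
rng = np.random.default_rng(2)
centers, pdf = normalized_pdf(rng.normal(size=10**6))
```

Averaging the histograms of the 8-24 snapshots, as done here, then gives the ensemble PDFs compared in the figures.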
Fig. 1 shows PDFs for different Reynolds numbers. In Fig. 1a we see similar but not at all identical shapes for the PDFs, a behavior representative of the PDFs of other quantities as well.
According to Fig. 2a, velocity components are Gaussian distributed. The PDFs for $`v_x`$ and $`v_y`$ are almost identical, as must be true for isotropic turbulence. The tails show the same behavior as seen in decaying two-dimensional turbulence. For a detailed discussion of this issue see , where the tails are explained from the influence of large vortices, which cause the largest velocity gradients. If the two velocity components are statistically independent of each other, then the PDF of the absolute value of $`\stackrel{}{v}`$ should be a two-dimensional Maxwell distribution
$$P(x)=\frac{x}{s^2}\mathrm{exp}\left(-\frac{x^2}{2s^2}\right).$$
The parameter $`s`$ is thereby the standard deviation of the Gaussian distribution for $`v_x`$. The Maxwell distribution, plotted as the dotted curve in Fig. 2b, hence contains no free parameter. It is a good first-order approximation.
For reasons of space not all PDFs can be presented here. The scalar (vorticity) is Gaussian in the center. The velocity derivatives are also Gaussian. This is particularly striking, since velocity derivatives of decaying turbulence are not Gaussian . Their core behaves much more like a Cauchy distribution
$$P(x)=\frac{1}{\pi }\frac{c}{c^2+x^2},$$
which has an inflection point even on a logarithmic plot. A Cauchy distribution follows theoretically from a “dilute gas” of point vortices of equal strength that move randomly, see . For forced two-dimensional turbulence $`\partial _x\theta `$ (and $`\mathrm{\nabla }^2\theta `$) could be interpreted as Cauchy distributed, but point vortices cannot make any sensible predictions on vorticity derivatives. We shall return to velocity derivatives later.
PDFs in $`\alpha `$-turbulence have been previously reported in (Fig. 7), where a “remarkable similarity” of the PDFs for $`\theta `$ has been pointed out.
Fig. 3 shows PDFs for different types of flow. In each figure the PDFs of all three flows are shown simultaneously, and the different figures show scalar derivatives $`\partial _x\theta `$, scalar dissipation $`D|\mathrm{\nabla }\theta |^2`$, and velocity components $`v_x`$. Apparently the PDFs for the different flows are the same. The agreement holds for small as well as large fluctuations, up to several standard deviations. The very largest fluctuations are inevitably undersampled, which accounts for deviations in the tails, which are likely to fall within measurement errors.
PDFs by themselves are known not to be particularly robust or universal, and it is hence important to compare flows under analogous conditions. The Reynolds numbers in the simulations for $`\alpha =1,2,3`$ were $`Re=`$ 3900, 4500, 4200 respectively. The differences do not appear significant. Comparisons at a lower Reynolds number (around 1300) yield the same universalities.
Also $`\mathrm{\nabla }^2\theta `$ (not shown) overlaps with the same accuracy as $`\partial _x\theta `$ and $`|\mathrm{\nabla }\theta |^2`$. But not all PDFs overlap as accurately as the derivatives of $`\theta `$. The deviations in Fig. 3c are somewhat larger. Other PDFs show even larger deviations, but in none of the PDFs investigated was there any drastic difference. These deviations could lie within measurement errors. It turns out that quantities with larger deviations also show greater variability within the same equation for different Reynolds numbers. It appears that some PDFs simply require longer averaging than others. It is consistent with the data that all 1-point PDFs are identical, but it is also consistent that some are not. At the very least, PDFs for $`\partial _x\theta `$, $`|\mathrm{\nabla }\theta |^2`$, $`\mathrm{\nabla }^2\theta `$, $`v_x`$ (and $`\partial _y\theta `$, $`v_y`$) agree within several standard deviations.
We note (from Figures 1a and 3a) that the variation in the PDFs for different $`\alpha `$ is smaller than the variation with Reynolds number. It is astonishing that universality across equations is more robust than within the same equation at different Reynolds numbers.
The three equations describe physically quite different flows, and their mathematical properties are also diverse. For example, the velocity is a local function of the scalar for only one of the equations, and the kinetic energy is conserved in only two of the three flows. In spite of these vast differences, their fluctuation properties differ only in absolute size.
PDFs of velocities on the ocean surface have recently been measured . Although none of the equations described here is directly applicable to this situation, the velocity components and their derivatives (the latter are not shown here) apparently show the same behavior as forced $`\alpha `$-turbulence. Universality across equations might explain the observed PDFs.
Several, if not all, probability density functions of three different flows, described by hyperbolic partial differential equations with reflection symmetry and Reynolds number invariance, have identical shapes. This clearly demonstrates that fluctuation properties can be explained in terms of statistical considerations only, and do not (or only very marginally) involve the dynamics of the flow.
Acknowledgments: It is a pleasure to thank Alexander Bershadskii, Jeff Chasnov, Emily Ching, and Toshiyuki Gotoh for very helpful discussions. Frank Ng and George Yuen from the High Performance Computing System at the Chinese University of Hong Kong provided indispensable technical help. This work was supported by a postdoctoral fellowship from the Chinese University of Hong Kong and a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (RGC Ref. No. CUHK4119/98P).
# Solar Models and NACRE thermonuclear reaction rates
## 1 Introduction
Precise solar models have been constructed over the past three and a half decades (see, e.g. Bahcall et al. b63 (1963); Bahcall b93 (1993), Bahcall et al. 1998b ). The refinements have accelerated in the past decade (see, e.g. Bahcall and Ulrich bu88 (1988), Bahcall & Pinsonneault 1992b ). For the past ten years many stellar and solar models have been computed using the thermonuclear reaction rates of Caughlan & Fowler (cf88 (1988), hereafter C88), a popular compilation but one not optimized for solar conditions. More recently some authors (e.g. Bahcall et al. bp95 (1995); Reiter et al. rww95 (1995); Chaboyer et al. cdp95 (1995); Berthomieu et al. bpm95 (1995); Christensen-Dalsgaard et al. c96 (1996)) employed the improved thermonuclear reaction rates adopted by Bahcall & Pinsonneault (1992a ) for the calculation of accurate solar models. Meanwhile several groups of nuclear physicists have undertaken other compilations of updated thermonuclear reaction rates of astrophysical interest. A year ago the compilation of Adelberger et al. (a98 (1998), hereafter A98) was published. The original motivation of this compilation is to assess the state of the nuclear physics important to the solar neutrino problem. The effects on solar models of these new reaction rates have been analyzed by several groups (see e.g. Bahcall et al. 1998b ; Brun et al. btm98 (1998); Morel et al. mpb98 (1998)). More recently the European Nuclear Astrophysics Compilation of REaction rates (Angulo et al. a99 (1999), NACRE, hereafter N99) has been completed and opened to free access. The driving motivation of this last work, coordinated by the Institut d’Astronomie et d’Astrophysique of the Université Libre de Bruxelles, is the build-up of well documented and evaluated sets of experimental data or theoretical predictions of astrophysical interest. To give a direct idea of the degree of reliability of any reaction rate, the authors have published either a very convenient plot of the available measurements of cross section S-factors with respect to energy, or a table showing the range of the various parameters needed for cross section evaluation, e.g. the resonance parameters. Moreover, the accuracy of each analytical fit is indicated. This new compilation gives, besides the adopted reaction rate $`R`$, its lower and upper limits $`R_\mathrm{l}`$ and $`R_\mathrm{u}`$. A solar neutrino analysis based on preliminary NACRE data for the PP reactions has been done by Castellani et al. (c97 (1997)). Recently Arnould et al. (agj99 (1999)) used the N99 reaction rates to compute abundance predictions in non-explosive hydrogen and helium burning. They convincingly show that large spreads in the abundance predictions for several nuclides may result not only from a change in temperature, but also from nuclear physics uncertainties.
We are now in the fortunate position of having two precise and independent determinations of the best nuclear fusion data, namely A98 and N99. In order to illustrate the effects of nuclear fusion rates on the standard solar model and on various astronomical quantities, including neutrino fluxes and helioseismology frequencies, we compare the model results calculated with the best current data from A98 and N99 with results obtained using the early estimates of fusion rates of C88. The differences are not very large. Nevertheless they modify the energy balance, the stratification, the chemical composition and the neutrino generation in the core.
Let us first recall the main constraints known today on solar models. The helioseismological constraints relevant to the core are the small p-mode frequency differences $`\delta \nu _{02}`$ and $`\delta \nu _{13}`$ and the not yet observed spectrum of gravity modes. Other signatures of changes of thermonuclear reaction rates are the sound velocity profile, which is known from inversion of helioseismic data between $`R\simeq 0.1R_{}`$ and $`R\simeq 0.9R_{}`$, and also the radius of the base of the solar convection zone, which is precisely located. The observed photospheric depletions of lithium and beryllium, often ascribed to transport phenomena beneath the convection zone, are also sensitive to changes of the thermonuclear reaction rates in their low energy regime. Another constraint also connected to nuclear reaction rates is the isotopic ratio $`{}_{}{}^{3}\mathrm{He}_{}^{}`$ / $`{}_{}{}^{4}\mathrm{He}_{}^{}`$ measured at present day at the solar surface, which is sensitive to the pre-main sequence deuteron burning and to the initial isotopic ratios $`{}_{}{}^{2}\mathrm{H}_{}^{}`$ / $`{}_{}{}^{1}\mathrm{H}_{}^{}`$ and $`{}_{}{}^{3}\mathrm{He}_{}^{}`$ / $`{}_{}{}^{4}\mathrm{He}_{}^{}`$ of cosmological interest.
The paper is organized as follows: in Sec. 2, for the nuclear reactions of interest for solar modeling in the low energy range, we summarize the main differences between N99 and A98 with respect to C88. The physics used in the models is described in Sec. 3. In Sec. 4 we report results of comparisons between calibrated solar models computed with N99, A98 and C88; finally we conclude in Sec. 5.
## 2 Comparison of thermonuclear reaction rates from N99, A98 and C88 compilations
The most important nuclear reactions relevant to solar modeling are, for the PP chains (Clayton c68 (1968); Bahcall bb89 (1989) Tables 3.1 and 3.3):
$`{}_{}{}^{1}\mathrm{H}_{}^{}(p,\beta ^+\nu _{\mathrm{pp}}){}_{}{}^{2}\mathrm{H}_{}^{}`$, $`{}_{}{}^{2}\mathrm{H}_{}^{}(p,\gamma ){}_{}{}^{3}\mathrm{He}_{}^{}`$, $`{}_{}{}^{3}\mathrm{He}_{}^{}({}_{}{}^{3}\mathrm{He}_{}^{},2p){}_{}{}^{4}\mathrm{He}_{}^{}`$,
$`{}_{}{}^{3}\mathrm{He}_{}^{}(\alpha ,\gamma ){}_{}{}^{7}\mathrm{Be}_{}^{}`$, $`{}_{}{}^{7}\mathrm{Be}_{}^{}(e^{},\nu _{{}_{}{}^{7}\mathrm{Be}_{}^{}}\gamma ){}_{}{}^{7}\mathrm{Li}_{}^{}`$, $`{}_{}{}^{7}\mathrm{Li}_{}^{}(p,\alpha ){}_{}{}^{4}\mathrm{He}_{}^{}`$,
$`{}_{}{}^{7}\mathrm{Be}_{}^{}(p,\gamma ){}_{}{}^{}{}_{}{}^{8}\mathrm{B}_{}^{}(\beta ^+\nu _{{}_{}{}^{8}\mathrm{B}_{}^{}}){}_{}{}^{8}\mathrm{Be}_{}^{}(\alpha ){}_{}{}^{4}\mathrm{He}_{}^{}`$,
and for the CNO bi-cycle:
$`{}_{}{}^{12}\mathrm{C}_{}^{}(p,\gamma ){}_{}{}^{13}\mathrm{N}_{}^{}(\beta ^+\nu _{{}_{}{}^{13}\mathrm{N}_{}^{}}){}_{}{}^{13}\mathrm{C}_{}^{}`$, $`{}_{}{}^{13}\mathrm{C}_{}^{}(p,\gamma ){}_{}{}^{14}\mathrm{N}_{}^{}`$,
$`{}_{}{}^{14}\mathrm{N}_{}^{}(p,\gamma ){}_{}{}^{15}\mathrm{O}_{}^{}(\beta ^+\nu _{{}_{}{}^{15}\mathrm{O}_{}^{}}){}_{}{}^{15}\mathrm{N}_{}^{}`$,$`{}_{}{}^{15}\mathrm{N}_{}^{}(p,\gamma ){}_{}{}^{16}\mathrm{O}_{}^{}`$, $`{}_{}{}^{15}\mathrm{N}_{}^{}(p,\alpha ){}_{}{}^{12}\mathrm{C}_{}^{}`$,
$`{}_{}{}^{16}\mathrm{O}_{}^{}(p,\gamma ){}_{}{}^{17}\mathrm{F}_{}^{}(\beta ^+\nu _{{}_{}{}^{17}\mathrm{F}_{}^{}}){}_{}{}^{17}\mathrm{O}_{}^{}`$, $`{}_{}{}^{17}\mathrm{O}_{}^{}(p,\alpha ){}_{}{}^{14}\mathrm{N}_{}^{}`$.
Owing to their low termination and small contribution to energetics and nucleosynthesis, despite their interest for neutrino generation, we do not explicitly take into account in the nuclear network $`{}_{}{}^{1}\mathrm{H}_{}^{}(pe^{},\nu _{\mathrm{pep}}){}_{}{}^{2}\mathrm{H}_{}^{}`$ and $`{}_{}{}^{3}\mathrm{He}_{}^{}(p,e^+\nu _{\mathrm{hep}}){}_{}{}^{4}\mathrm{He}_{}^{}`$, the so-called $`pep`$ and $`hep`$ reactions. Nevertheless we compute the number of $`\nu _{\mathrm{pep}}`$ neutrinos generated using equation (3.17) of Bahcall’s (bb89 (1989)) reference text book.
The changes between the reaction rates of N99, A98 and C88 are extensively commented on in Adelberger et al. (a98 (1998)) and Angulo et al. (a99 (1999)). As a matter of illustration, for the three compilations and for each PP and CNO reaction – except for the electronic capture $`{}_{}{}^{7}\mathrm{Be}_{}^{}(e^{},\nu _{{}_{}{}^{7}\mathrm{Be}_{}^{}}\gamma ){}_{}{}^{7}\mathrm{Li}_{}^{}`$ – Table 1 gives the S-factors at zero energy and the underlying global uncertainty on the rate $`\mathrm{\Delta }`$:
$$\mathrm{\Delta }=\sqrt{\frac{R_\mathrm{u}}{R_\mathrm{l}}}-1,$$
(1)
estimated for $`T_6=15`$; $`T_6`$ is the temperature in MK, and $`R_\mathrm{l}`$ ($`resp.`$ $`R_\mathrm{u}`$) stands for the lower ($`resp.`$ upper) limit of the N99 updated reactions. For our thermonuclear reaction network the contributions of resonances to the astrophysical reaction rates are negligible in the solar range of temperatures; therefore the values of S(0) and, if any, S’(0) presented here are pertinent. For the sake of brevity we do not reproduce the known S”(0) values. Figure 1 ($`resp.`$ Fig. 2) compares the relative differences between the adopted rates of N99 ($`resp.`$ A98) and C88 for the temperature range $`0.5\le T_6\le 19`$.
We next briefly recall the main changes in the rates of A98 and N99 with respect to those of C88, which is the oldest and, to date, the most widely used and complete.
$`{}_{}{}^{2}\mathrm{H}_{}^{}(p,\gamma ){}_{}{}^{3}\mathrm{He}_{}^{}`$: Among all the reactions of the PP chains and CNO bi-cycle, it is the rate of this PPI reaction which is the most poorly known. The reaction is so fast that it is only involved in the pre-main sequence deuteron burning. Owing to the lower value adopted for the S-factor at zero energy, the rate of the reaction which synthesizes $`{}_{}{}^{3}\mathrm{He}_{}^{}`$ is about $`24\%`$ lower in N99 than in C88. This reaction is not updated in A98; for the calculations with A98 we used the value adopted in C88.
$`{}_{}{}^{3}\mathrm{He}_{}^{}({}_{}{}^{3}\mathrm{He}_{}^{},2p){}_{}{}^{4}\mathrm{He}_{}^{}`$: For the most energetic reaction of the PP chains, N99 ($`resp.`$ A98) adopts S-factor values smaller by about $`6\%`$ ($`resp.`$ $`2\%`$) than C88. As a consequence of the calibration process, the models using either N99 or A98 must burn more $`{}_{}{}^{1}\mathrm{H}_{}^{}`$ nuclear fuel in order to reach, at present day, the observed luminosity and effective temperature. Therefore these models will have cores with larger temperature, helium content, density and sound velocity than models computed with C88; thus, at first sight, their predicted total neutrino fluxes are expected to be larger. This effect will be enhanced for the models computed with N99, since the rates of the two reactions $`{}_{}{}^{1}\mathrm{H}_{}^{}(p,\beta ^+\nu _{\mathrm{pp}}){}_{}{}^{2}\mathrm{H}_{}^{}`$ and $`{}_{}{}^{2}\mathrm{H}_{}^{}(p,\gamma ){}_{}{}^{3}\mathrm{He}_{}^{}`$ are smaller in N99 than in C88.
The figure on p. 26 of Angulo et al. (a99 (1999)) gives the impression that, at low energy, the recent measurements of Junker et al. (j98 (1998)) are missed by the adopted interpolation formulae (see also Fig. 2 of Adelberger et al. a98 (1998)). These recent data should lead to an increase of S(0), hence to an enhancement of the efficiency of the reaction and then, owing to the calibration process, to a decrease of the predicted solar neutrino fluxes.
$`{}_{}{}^{7}\mathrm{Li}_{}^{}(p,\alpha ){}_{}{}^{4}\mathrm{He}_{}^{}`$: The S(0) value adopted by N99 differs from C88 by between $`+15\%`$ and $`+7\%`$. In the core the exact rate is irrelevant owing, first, to the shortness of the $`{}_{}{}^{7}\mathrm{Li}_{}^{}`$ lifetime ($`10^{-5}`$ year, e.g. Bahcall bb89 (1989) Table 3.2) and, second, to the tiny mass fraction of $`{}_{}{}^{7}\mathrm{Li}_{}^{}\simeq \mathrm{2\hspace{0.17em}10}^{-15}`$. Beneath the convection zone the burning of $`{}_{}{}^{7}\mathrm{Li}_{}^{}`$ will be more efficient with N99 than with C88, leading to an increase of the lithium depletion at the solar surface at present day. As suggested in C88, the $`{}_{}{}^{7}\mathrm{Li}_{}^{}`$ burning is slightly enhanced, by a few percent, by the neighbor reaction $`{}_{}{}^{7}\mathrm{Li}_{}^{}(p,\gamma ){}_{}{}^{8}\mathrm{Be}_{}^{}(\alpha ){}_{}{}^{4}\mathrm{He}_{}^{}`$, which has been added to our nuclear network. This reaction is not updated in A98; for the calculations with A98 we used the value adopted in C88.
$`{}_{}{}^{7}\mathrm{Be}_{}^{}(e^{},\nu _{{}_{}{}^{7}\mathrm{Be}_{}^{}}\gamma ){}_{}{}^{7}\mathrm{Li}_{}^{}`$: N99 deals only with charged particle induced reactions involving nuclei, and therefore the $`{}_{}{}^{7}\mathrm{Be}_{}^{}`$ electron capture rate is not updated. In the calculations with N99 we used the value given by A98. Beneath $`T_6=1`$ only an upper limit is given in C88 for the $`{}_{}{}^{7}\mathrm{Be}_{}^{}`$ electron capture. The adopted rate of A98 differs from the rate of C88 by more than $`+50\%`$ at low temperature; for $`T_6\simeq 15`$, i.e. in the solar core, the rates of C88 and A98 are of the same order.
$`{}_{}{}^{7}\mathrm{Be}_{}^{}(p,\gamma ){}_{}{}^{}{}_{}{}^{8}\mathrm{B}_{}^{}`$: This reaction controls the efficiency of the important source of $`\nu _{{}_{}{}^{8}\mathrm{B}_{}^{}}`$, the so-called boron solar neutrinos. The adopted values for the S-factors at zero energy are slightly larger in N99 than in A98, but still smaller than in C88. With respect to C88, all else being equal, one can expect that the neutrino flux from boron will be reduced for the solar models computed with A98 and N99.
$`{}_{}{}^{13}\mathrm{C}_{}^{}(p,\gamma ){}_{}{}^{14}\mathrm{N}_{}^{}`$: The values of the S-factors at zero energy adopted by N99 and A98 are magnified by a factor of about two with respect to their previous values in C88; as a result the rates are increased by $`+30\%`$ and $`+15\%`$ respectively. These large differences will not have any noticeable effect on the global structure of the core, since the energy generated by the CNO bi-cycle is only $`\simeq 2\%`$ of the total nuclear energy.
$`{}_{}{}^{14}\mathrm{N}_{}^{}(p,\gamma ){}_{}{}^{15}\mathrm{O}_{}^{}`$: The rate of the most important reaction for the computation of energy generation and neutrino fluxes created by the CNO bi-cycle is known with a large uncertainty. The three compilations adopt about the same values for the S-factors at zero energy. Figure 2 shows small differences between the rates. This is due to different interpolation formulae, which slightly differ since there is no measurement at low energy (see the convincing figure on p. 58 of Angulo et al. (a99 (1999))).
$`{}_{}{}^{15}\mathrm{N}_{}^{}(p,\gamma ){}_{}{}^{16}\mathrm{O}_{}^{}`$: For the reaction which governs the efficiency of the NO-part of the CNO bi-cycle, N99, A98 and C88 all adopt the S-factors at zero energy obtained by Rolfs & Rodney (rr74 (1974)). Due to differences in the interpolation formulae, Fig. 2 reveals rates enhanced by $`+15\%`$ in N99 with respect to C88 or A98.
$`{}_{}{}^{16}\mathrm{O}_{}^{}(p,\gamma ){}_{}{}^{17}\mathrm{F}_{}^{}`$: At low energy the reaction which controls the generation of $`\nu _{{}_{}{}^{17}\mathrm{F}_{}^{}}`$, the so-called fluorine solar neutrinos, is based on data with large experimental errors. The adopted rate has the largest uncertainty among the CNO reactions. Though Table 1 gives about the same values of the S-factors for the three compilations, Fig. 2 shows large differences between the rates, resulting from different analytical formulations. Beyond $`T_6\simeq 10`$, N99 and C88 are close (Angulo et al. a99 (1999)). The difference of $`50\%`$ between A98 and C88 results from the use of the standard formulation of the non-resonant reaction rate with S-factors (Fowler et al. fgz67 (1967)).
$`{}_{}{}^{17}\mathrm{O}_{}^{}(p,\alpha ){}_{}{}^{14}\mathrm{N}_{}^{}`$: N99 and A98 use different analytical fits based on the measurements of Landré et al. (l89 (1989)). They differ by $`30\%`$. With the discovery of a resonance at low energy (Landré et al. $`loc.cit.`$), the analytical fit of C88 became in error by more than two orders of magnitude. For the models computed with C88 we have used the rates derived from the analytical fit of Landré et al., as recommended by A98.
#### Summary.
With respect to C88, many reaction rates, principally $`{}_{}{}^{3}\mathrm{He}_{}^{}({}_{}{}^{3}\mathrm{He}_{}^{},2p){}_{}{}^{4}\mathrm{He}_{}^{}`$, are lowered in N99 and also, though to a lesser extent, in A98. One can expect that this will lead to calibrated solar models with hotter central cores. For the reactions of the PP chains, with respect to C88, the other important changes connected to the observable neutrino fluxes are the rate of the electronic capture $`{}_{}{}^{7}\mathrm{Be}_{}^{}(e^{},\nu _{{}_{}{}^{7}\mathrm{Be}_{}^{}}\gamma ){}_{}{}^{7}\mathrm{Li}_{}^{}`$, which is significantly diminished for $`T_6\lesssim 7`$ in A98, and, for N99, the decrease of the rate of $`{}_{}{}^{7}\mathrm{Be}_{}^{}(p,\gamma ){}_{}{}^{}{}_{}{}^{8}\mathrm{B}_{}^{}`$. With respect to C88, the changes in N99 and A98 of the reaction rates of the CNO bi-cycle are not large enough to modify the solar model significantly.
## 3 The solar models
Basically the physics of the models is the same as in Morel et al. (mpb97 (1997)).
#### Calibration of models.
Each evolution is initialized with a homogeneous zero-age pre-main-sequence model in quasi-static gravitational contraction with the temperature at center $`T_\mathrm{c}\simeq 0.5`$ MK, i.e. close to the onset of the deuteron burning. The models are calibrated within a relative accuracy better than $`10^{-4}`$ by adjusting the ratio $`l/H_\mathrm{p}`$ of the mixing-length to the pressure scale height, the initial mass fraction $`X_\mathrm{i}`$ of hydrogen and the initial mass fraction $`(Z/X)_\mathrm{i}`$ of heavy element to hydrogen, in order that, at present day, the solar models have the luminosity $`L_{}=\mathrm{3.846\hspace{0.17em}10}^{33}`$ erg s<sup>-1</sup> (Guenther et al. gdkp92 (1992)), the radius $`R_{}=\mathrm{6.9599\hspace{0.17em}10}^{10}`$ cm (Guenther et al. $`loc.cit.`$) and the mass fraction of heavy element to hydrogen $`(Z/X)_{}=0.0245`$ (Grevesse & Noels gn93 (1993)). We used a time of evolution $`t_{\mathrm{ev}}=4600`$ My, an intermediate value between the meteoritic age of the Sun, $`t_\mathrm{m}=4530\pm 40`$ My (Guenther g89 (1989); here $`t_\mathrm{m}`$ is referenced with respect to the ZAMS, which occurs $`36\pm 10`$ My (Guenther $`loc.cit.`$) after the formation of meteorites $`4566\pm 5`$ My from now, Bahcall et al. bp95 (1995)), and its helioseismic value $`t_\mathrm{h}=4660\pm 100`$ My derived by Dziembowski et al. (dfrs98 (1998)). The zero age main-sequence (ZAMS) is defined as the time where nuclear reactions dominate gravitation as the primary energy source by more than 50% (Guenther et al. $`loc.cit.`$). The mass of the Sun is assumed to be $`M_{}=\mathrm{1.9891\hspace{0.17em}10}^{33}`$ g (Cohen & Taylor ct86 (1986)).
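The calibration is, in effect, a three-parameter root find. Below is a schematic sketch of the loop; the function `evolve` is a placeholder for a full evolutionary sequence, and none of the names belong to the CESAM code itself.

```python
import numpy as np

# present-day targets (cgs units and the surface metal-to-hydrogen ratio)
targets = np.array([3.846e33, 6.9599e10, 0.0245])   # L_sun, R_sun, (Z/X)_sun

def evolve(params):
    """Placeholder: run one evolution to t_ev = 4600 My with
    params = (l/Hp, X_i, (Z/X)_i) and return (L, R, (Z/X)_surface)."""
    raise NotImplementedError

def calibrate(p0, tol=1e-4, h=1e-3):
    """Newton iteration with a finite-difference Jacobian, stopping when
    the relative mismatch on all three targets is below tol."""
    p = np.array(p0, dtype=float)
    for _ in range(20):
        f = np.array(evolve(p))/targets - 1.0
        if np.max(np.abs(f)) < tol:
            return p
        J = np.empty((3, 3))
        for j in range(3):
            dp = np.zeros(3)
            dp[j] = h*p[j]
            J[:, j] = (np.array(evolve(p + dp))/targets - 1.0 - f)/dp[j]
        p = p - np.linalg.solve(J, f)
    raise RuntimeError("calibration did not converge")

# a typical starting guess would be (l/Hp, X_i, (Z/X)_i) ~ (1.8, 0.70, 0.027)
```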
#### Nuclear and diffusion network.
The general nuclear network we used contains the following species: $`{}_{}{}^{1}\mathrm{H}_{}^{}`$, $`{}_{}{}^{2}\mathrm{H}_{}^{}`$, $`{}_{}{}^{3}\mathrm{He}_{}^{}`$, $`{}_{}{}^{4}\mathrm{He}_{}^{}`$, $`{}_{}{}^{7}\mathrm{Li}_{}^{}`$, $`{}_{}{}^{7}\mathrm{Be}_{}^{}`$, $`{}_{}{}^{9}\mathrm{Be}_{}^{}`$, $`{}_{}{}^{12}\mathrm{C}_{}^{}`$, $`{}_{}{}^{13}\mathrm{C}_{}^{}`$, $`{}_{}{}^{14}\mathrm{N}_{}^{}`$, $`{}_{}{}^{15}\mathrm{N}_{}^{}`$, $`{}_{}{}^{16}\mathrm{O}_{}^{}`$, $`{}_{}{}^{17}\mathrm{O}_{}^{}`$ and $`{}_{}{}^{}\mathrm{Ex}_{}^{}`$; $`{}_{}{}^{}\mathrm{Ex}_{}^{}`$ is an “Extra” fictitious mean non-CNO heavy element with atomic mass 28 and charge 13 ($`{}_{}{}^{}\mathrm{Ex}_{}^{}\simeq {}_{}{}^{28}\mathrm{Al}_{}^{}`$) which complements the mixture, i.e., $`X_{{}_{}{}^{}\mathrm{Ex}_{}^{}}=1-\sum _{i={}_{}{}^{1}\mathrm{H}_{}^{}}^{{}_{}{}^{17}\mathrm{O}_{}^{}}X_i`$, with $`X_i`$ the mass fraction of the species labeled $`i={}_{}{}^{1}\mathrm{H}_{}^{},\mathrm{},{}_{}{}^{17}\mathrm{O}_{}^{}`$. With respect to time, due to microscopic diffusion processes, the abundances of heavy elements are enhanced toward the center; $`{}_{}{}^{}\mathrm{Ex}_{}^{}`$ mimics that enhancement for the non-CNO metals, which contribute to changes of $`Z`$, and thus to opacity variations, but neither to nuclear energy generation nor to nucleosynthesis. To compute the depletion of $`{}_{}{}^{9}\mathrm{Be}_{}^{}`$, we have added, to the nuclear network given in Sec. 2, the most efficient reactions of $`{}_{}{}^{9}\mathrm{Be}_{}^{}`$ burning: $`{}_{}{}^{9}\mathrm{Be}_{}^{}(p,d)2{}_{}{}^{4}\mathrm{He}_{}^{}`$ and $`{}_{}{}^{9}\mathrm{Be}_{}^{}(\alpha ,n){}_{}{}^{12}\mathrm{C}_{}^{}`$. The lifetime of the neutron, namely 888 s (Barnett et al. b96 (1996)), is smaller by more than thirteen orders of magnitude than the time scale of the Sun’s main-sequence $`pp`$ reaction. Therefore, for the calculations, the last reaction is rewritten $`{}_{}{}^{9}\mathrm{Be}_{}^{}(\alpha ,e^{}p\overline{\nu }_{{}_{}{}^{9}\mathrm{Be}_{}^{}}){}_{}{}^{12}\mathrm{C}_{}^{}`$. The weak screening of Salpeter (s54 (1954)) is used; it is a very good approximation to the exact solution of the Schrödinger equation for the fundamental $`pp`$ reaction (Bahcall et al. 1998a ).
The protosolar initial isotopic ratios (in number) for hydrogen and helium are respectively taken as $`{}_{}{}^{2}\mathrm{H}_{}^{}/{}_{}{}^{1}\mathrm{H}_{}^{}=\mathrm{3.01\hspace{0.17em}10}^{-5}`$ and $`{}_{}{}^{3}\mathrm{He}_{}^{}`$/$`{}_{}{}^{4}\mathrm{He}_{}^{}`$$`=\mathrm{1.1\hspace{0.17em}10}^{-4}`$ (Gautier & Morel gm97 (1997)). The initial ratios between the heavy elements within $`Z`$ are set to their photospheric present day values, namely (in number) C: 0.24551, N: 0.06458 and O: 0.51295 (Grevesse & Noels gn93 (1993)), and then, for the complement, Ex: 0.17696. The initial isotopic ratios are derived from the abundances of nuclides (Anders & Grevesse ag89 (1989)): $`{}_{}{}^{13}\mathrm{C}_{}^{}`$/$`{}_{}{}^{12}\mathrm{C}_{}^{}`$$`=\mathrm{1.11\hspace{0.17em}10}^{-2}`$, $`{}_{}{}^{15}\mathrm{N}_{}^{}`$/$`{}_{}{}^{14}\mathrm{N}_{}^{}`$$`=\mathrm{4.25\hspace{0.17em}10}^{-3}`$, $`{}_{}{}^{17}\mathrm{O}_{}^{}`$/$`{}_{}{}^{16}\mathrm{O}_{}^{}`$$`=\mathrm{3.81\hspace{0.17em}10}^{-4}`$. We have used the meteoritic values (Grevesse & Sauval gs98 (1998)) for the initial abundances, in dex ($`{}_{}{}^{}\mathrm{H}_{}^{}\equiv 12`$), of $`{}_{}{}^{}\mathrm{Li}_{}^{}`$ and $`{}_{}{}^{}\mathrm{Be}_{}^{}`$:
$$\left[\frac{{}_{}{}^{}\mathrm{Li}_{}^{}}{{}_{}{}^{}\mathrm{H}_{}^{}}\right]=3.31\pm 0.04,\left[\frac{{}_{}{}^{}\mathrm{Be}_{}^{}}{{}_{}{}^{}\mathrm{H}_{}^{}}\right]=1.42\pm 0.04.$$
For the calculations of depletions, lithium is assumed to be entirely in its most abundant isotope $`{}_{}{}^{7}\mathrm{Li}_{}^{}`$, and likewise beryllium is assumed to be $`{}_{}{}^{9}\mathrm{Be}_{}^{}`$. Neither the meteoritic abundance nor the nuclide isotopic ratio of $`{}_{}{}^{7}\mathrm{Be}_{}^{}`$ is known; due to numerical constraints the protosolar abundance of $`{}_{}{}^{7}\mathrm{Be}_{}^{}`$ was somewhat arbitrarily set to a very low, but non-zero, value, namely $`[{}_{}{}^{7}\mathrm{Be}_{}^{}/{}_{}{}^{1}\mathrm{H}_{}^{}]=-3.58`$ dex. The initial abundance of each isotope is derived from the isotopic ratios and the initial values of $`X\equiv {}_{}{}^{1}\mathrm{H}_{}^{}+{}_{}{}^{2}\mathrm{H}_{}^{}`$, $`Y\equiv {}_{}{}^{3}\mathrm{He}_{}^{}+{}_{}{}^{4}\mathrm{He}_{}^{}`$ and $`Z/X`$ as inferred by the calibration process, in order to fulfill the basic relationship $`X+Y+Z\equiv 1`$.
Microscopic diffusion is described by the simplified formalism of Michaud & Proffitt (mp93 (1993)) with each of the heavy elements as a trace element.
#### Equation of state, opacities, convection and atmosphere.
We have used the OPAL equation of state (Rogers et al. rsi96 (1996)) and opacities (Iglesias & Rogers ir96 (1996)) for the solar mixture of Grevesse & Noels (gn93 (1993)) complemented, at low temperatures, respectively by the MHD equation of state (Däppen d96 (1996)) and Alexander & Ferguson (af94 (1994)) opacities. The interpolations of opacities are made with the v9 birational spline package of G. Houdek (Houdek & Rogl hr96 (1996); Houdek h98 (1998)).
In the convection zones the temperature gradient is computed according to the standard mixing-length theory. The mixing-length is defined as $`l\equiv \alpha H_\mathrm{p}`$, where $`H_\mathrm{p}`$ is the pressure scale height. The convection zones are mixed via a strong turbulent diffusion coefficient, which produces a homogeneous composition.
The atmosphere is restored using a $`T(\tau )`$ law derived from an atmosphere model of the Sun computed by van’t Veer (v98 (1998)) with Kurucz’s (k91 (1991)) ATLAS12 package. The connection with the envelope is made at the Rosseland optical depth $`\tau _\mathrm{b}=20`$ (Morel et al. m94 (1994)), where the diffusion approximation for radiative transfer becomes valid. A smooth connection of the gradients is ensured between the uppermost layers of the envelope and the optically thick convective part of the atmosphere. The radius $`R_{}`$ of any model is taken at the optical depth $`\tau _{}\simeq 0.54`$ where $`T(\tau _{})=T_{\mathrm{eff}}`$; the mass of the star, $`M_{}`$, is defined as the mass enclosed in the sphere of radius $`R_{}`$. The external boundary is located at the optical depth $`\tau _{\mathrm{ext}}=10^{-4}`$, where the density is fixed to its value in the atmosphere model, $`\rho (\tau _{\mathrm{ext}})=\mathrm{3.55\hspace{0.17em}10}^{-9}`$ g cm<sup>-3</sup>, which corresponds approximately to the temperature minimum in the solar chromosphere.
#### Numerics.
The models have been computed using the CESAM code (Morel m97 (1997)). The numerical schemes are fully implicit and their accuracy is first order in time and third order in space. For numerical performance and algorithmic reasons, the analytical expressions of the reaction rates are tabulated with respect to temperature for the range $`0.5\le T_6\le 20`$ and interpolated with a relative accuracy better than $`10^{-5}`$. Each evolution needs about 90 models. Typically 600 mass shells are used along the evolution, increasing up to 2100 for the models used in the seismological analysis.
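The tabulate-then-interpolate step is simple to emulate. In the sketch below a generic non-resonant rate stands in for an actual NACRE fit (the constants are illustrative only); interpolating the logarithm of the rate on a log-spaced grid keeps the relative error comfortably below $`10^{-5}`$.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def rate(T6):
    """Stand-in for an analytical reaction-rate fit: non-resonant form
    ~ T9^(-2/3) * exp(-b/T9^(1/3)), with illustrative constants."""
    T9 = 1e-3*T6
    return 4.0e4*T9**(-2.0/3.0)*np.exp(-3.38/T9**(1.0/3.0))

grid = np.geomspace(0.5, 20.0, 600)      # tabulation over 0.5 <= T6 <= 20;
spline = CubicSpline(grid, np.log(rate(grid)))   # log spacing resolves low T6

T6 = np.linspace(0.51, 19.9, 5000)
rel_err = np.abs(np.exp(spline(T6))/rate(T6) - 1.0)
print(rel_err.max())                     # well below the 1e-5 requirement
```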
#### p-mode and g-mode oscillation calculations.
The frequencies of linear, adiabatic, global acoustic modes of the solar models have been computed for degrees $`\ell =0`$ to $`\ell =150`$ and have been compared to the observations. The characteristic low degree p-mode frequency differences $`\mathrm{\Delta }\nu _{n,\ell }=\nu _{n,\ell }-\nu _{n-1,\ell +2}`$ for $`\ell =0`$ and $`\ell =1`$, which provide information on the properties of the solar core, have been fitted by linear regressions with respect to $`n`$:
$$\mathrm{\Delta }\nu _{n,\ell }=\delta \nu _{n,\ell }+S_{\ell }(n-n_0),\qquad n_0=21,\quad \ell =0,1,$$
both for the observations and for the theoretical frequencies. For the gravity modes, which have not yet been observed, we give the characteristic asymptotic spacing period $`P_0`$ according to Provost & Berthomieu (pb86 (1986)).
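A minimal sketch of such a regression is given below; the function and variable names, and the synthetic numbers, are ours and purely illustrative.

```python
import numpy as np

def fit_small_separations(n, dnu, n0=21):
    """Least-squares fit of Delta-nu_{n,l} = delta_nu + S_l*(n - n0),
    applied identically to observed and theoretical frequencies."""
    A = np.vstack([np.ones_like(n, dtype=float), n - n0]).T
    (delta_nu, S_l), *_ = np.linalg.lstsq(A, dnu, rcond=None)
    return delta_nu, S_l

# Synthetic illustration: delta_nu = 9 muHz, slope S_l = -0.05 muHz
rng = np.random.default_rng(1)
n = np.arange(15, 28)
dnu = 9.0 - 0.05 * (n - 21) + 0.1 * rng.standard_normal(n.size)
print(fit_small_separations(n, dnu))
```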
## 4 Comparison of models
Table 2 gives the global properties of the models, and Fig. 3 exhibits the profiles with respect to radius of the most important variables for the internal structure, namely density, temperature, opacity, and helium and heavy element contents.
### 4.1 Chemical composition
The changes in chemical composition result directly from changes of the thermonuclear reaction rates but also, in a more intricate way, from changes in the microscopic diffusion coefficients, which are sensitive to temperature and density, and to the gradients of chemical composition, pressure, temperature and density.
#### Changes at the surface and in the envelope.
For the three models N99, A98 and C88, Table 2 shows that the expected photospheric abundances of helium are slightly reduced and remain compatible with the range of observed values. As is known (Basu 1997b), the amount of photospheric “observed” helium derived from inversion of helioseismic data is more sensitive to the equation of state than the amount of photospheric “predicted” helium derived from calibrated solar models. Indeed, we have calibrated a solar model<sup>2</sup><sup>2</sup>2Not analyzed here for the sake of brevity. using the C88 thermonuclear reaction rates and the MHD (Däppen d96 (1996)) equation of state instead of OPAL, and obtained a photospheric helium content $`Y_\mathrm{s}=0.246`$, which is the value derived from inversion using the MHD equation of state (Basu & Antia ba95 (1995)).
Though the $`{}_{}{}^{7}\mathrm{Li}_{}^{}`$ surface depletion is increased by the use of the enhanced rate of $`{}_{}{}^{7}\mathrm{Li}_{}^{}(p,\alpha ){}_{}{}^{4}\mathrm{He}_{}^{}`$ adopted by N99, the predicted abundance is still very far from the observed value. The differences are likely a consequence of the lack, in standard solar models, of the mixing generated by the shear at the level of the tachocline (see e.g. Gough et al. g96 (1996), Brun et al. btz99 (1999)). Such mixing smoothes the chemical composition gradients and reduces the microscopic diffusion efficiency (Basu 1997a) immediately beneath the convection zone. According to the new observations (Grevesse & Sauval gs98 (1998)), the predicted photospheric depletion of beryllium is tiny. The predictions for the surface isotopic ratio $`({}_{}{}^{3}\mathrm{He}_{}^{}/{}_{}{}^{4}\mathrm{He}_{}^{})_\mathrm{s}`$ by the three models are all within the interval of accuracy given by the observations.
#### Changes in the core.
The solar core is the innermost part, where nuclear energy generation is efficient. It extends from the center to about $`R_\mathrm{c}\approx 0.4R_{\odot }`$, slightly beyond the $`{}_{}{}^{3}\mathrm{He}_{}^{}`$ peak located around $`0.3R_{\odot }`$. Owing to the lower efficiency of the PP reactions (see Sec. 2), the temperature, the density, the amount of helium and the sound velocity at the center of the calibrated models N99 and A98 are larger than in C88. As expected, Fig. 4 shows almost symmetrical difference profiles for $`{}_{}{}^{1}\mathrm{H}_{}^{}`$ and $`{}_{}{}^{4}\mathrm{He}_{}^{}`$. Owing to the larger efficiency of the $`pp`$ reaction in C88, larger values are obtained for the relative difference C88 minus N99 than for A98 minus N99. The typical features of the relative abundance differences of $`{}_{}{}^{3}\mathrm{He}_{}^{}`$ are consequences of the smaller N99 rate for $`{}_{}{}^{3}\mathrm{He}_{}^{}({}_{}{}^{3}\mathrm{He}_{}^{},2p){}_{}{}^{4}\mathrm{He}_{}^{}`$ with respect to A98 and C88. Beneath the $`{}_{}{}^{3}\mathrm{He}_{}^{}`$ peak, owing to the increase of temperature, the amount of $`{}_{}{}^{3}\mathrm{He}_{}^{}`$ is smaller in C88 and A98 than in N99; it is the reverse beyond the peak. Though the same rate is used in N99 and A98 for the $`{}_{}{}^{7}\mathrm{Be}_{}^{}`$ electron capture, there is a non-zero relative difference between the $`{}_{}{}^{7}\mathrm{Be}_{}^{}`$ profiles of models N99 and A98, resulting from differences between the rates adopted for $`{}_{}{}^{3}\mathrm{He}_{}^{}(\alpha ,\gamma ){}_{}{}^{7}\mathrm{Be}_{}^{}`$.
Figure 5 exhibits large differences in the abundances of $`{}_{}{}^{16}\mathrm{O}_{}^{}`$ for A98 and C88 with respect to N99, despite the fact that the rates of the $`{}_{}{}^{16}\mathrm{O}_{}^{}`$ burning reactions are close. In fact, these differences result from changes of the rates of $`{}_{}{}^{15}\mathrm{N}_{}^{}`$ burning, which creates $`{}_{}{}^{16}\mathrm{O}_{}^{}`$. For $`{}_{}{}^{12}\mathrm{C}_{}^{}`$ and $`{}_{}{}^{14}\mathrm{N}_{}^{}`$, around $`0.18R_{\odot }`$, the effects of changes of nuclear reaction rates are magnified by the large gradients of those species. There, Fig. 3 reveals small bumps on the $`Z`$ profiles, due to the magnification, by these large gradients, of the variations in chemical composition caused by the changes of thermonuclear reaction rates.
### 4.2 Thickness of the convection zone.
For radius $`R\ge 0.4R_{\odot }`$, i.e. in the envelope, Fig. 3 shows that the opacity profiles of models N99 and A98 agree to within $`\pm 0.4\%`$. The thickness of the convection zone is about the same in N99 and A98 and close, within the error bars (see Table 2), to the observed value. It is slightly larger for model C88. That difference is due to the increase of the radiative temperature gradient resulting from the higher value of the opacity. The differences of temperature between N99 and A98 being small, the changes in opacity are mainly due to the variations of density. The relative opacity differences amount to $`\pm 1\%`$ between C88 and N99.
### 4.3 Neutrinos
Table 3 gives the predicted neutrino fluxes at earth level and the expected fluxes for the three neutrino experiments, namely chlorine (e.g. Davis da94 (1994)), gallium (e.g. Hampel et al. ha99 (1999)) and Kamiokande (e.g. Fukuda et al. fu96 (1996)), computed according to Berthomieu et al. (bpml93 (1993)). The gallium and chlorine absorption cross sections have been taken respectively from Bahcall (b97 (1997)) and Bahcall et al. (bla96 (1996)). The $`hep`$ flux, which may be important (e.g. Bahcall & Krastev 1998c , Fiorentini et al. fb98 (1998)) in the neutrino spectrum measurements by the SuperKamiokande, SNO and Icarus experiments, is not listed. With respect to C88, due to the hotter core, the $`\nu _{{}_{}{}^{7}\mathrm{Be}_{}^{}}`$ and CNO neutrino fluxes are enhanced in A98 and N99 and, as expected, $`\nu _{\mathrm{pp}}`$ is slightly reduced.
Despite the larger core temperatures, we obtain for models A98 and N99, with respect to model C88, the expected decrease of the $`\nu _{{}_{}{}^{8}\mathrm{B}_{}^{}}`$ boron neutrino fluxes, owing to their reduced rate of the reaction $`{}_{}{}^{7}\mathrm{Be}_{}^{}(p,\gamma ){}_{}{}^{}{}_{}{}^{8}\mathrm{B}_{}^{}`$. The introduction of the N99 reaction rates relative to A98 induces an increase of $`+10\%`$ in $`\nu _{{}_{}{}^{8}\mathrm{B}_{}^{}}`$. The effect is significant for the flux measured by the chlorine and Kamiokande experiments. Note that Table 3 reveals that the neutrino fluxes at earth level for A98 are very similar to those given in Table 1 of Bahcall et al. (1998b ). They differ only by a few per cent for $`\nu _{{}_{}{}^{17}\mathrm{F}_{}^{}}`$, owing to the large abundance of $`{}_{}{}^{16}\mathrm{O}_{}^{}`$ resulting from the great efficiency of $`{}_{}{}^{15}\mathrm{N}_{}^{}`$ burning, which roughly doubles the fraction of termination of the NO part of the CNO bi-cycle. Despite the fact that the two stellar evolution programs are entirely independent of each other, when the same nuclear reaction rates are used (A98) the most important neutrino fluxes $`\nu _{\mathrm{pp}}`$, $`\nu _{\mathrm{pep}}`$, $`\nu _{{}_{}{}^{7}\mathrm{Be}_{}^{}}`$, $`\nu _{{}_{}{}^{8}\mathrm{B}_{}^{}}`$, $`\nu _{{}_{}{}^{13}\mathrm{N}_{}^{}}`$, and $`\nu _{{}_{}{}^{15}\mathrm{O}_{}^{}}`$ all agree to better than 1%.
### 4.4 Seismological comparison
The seismic properties of the solar model are mainly related to the profile of the sound speed (resp. Brunt–Väisälä frequency) as far as p-modes (resp. g-modes) are concerned. Figure 7 shows that the models N99 and, to a lesser extent, A98, compared to C88, have a sound speed larger by $`+0.2\%`$ (resp. $`+0.1\%`$) in the central core below 0.3 solar radius, and smaller by $`0.1\%`$ (resp. $`0.05\%`$) just below the convection zone. Table 2 shows a small increase of the small low degree differences $`\delta \nu _{02}`$ and $`\delta \nu _{13}`$, as defined in Sec. 3, in relation to the difference of sound speed in the solar core. There, the relative differences in the Brunt–Väisälä frequency between models N99 and C88 are larger, of a few per cent, i.e. one order of magnitude larger than the sound speed differences. The increase of the Brunt–Väisälä frequency leads to a $`1\%`$ smaller value of $`P_0`$, the characteristic spacing period of g-modes. The differences between A98 and C88 are three times smaller. This change in the Brunt–Väisälä frequency strongly influences the low frequency modes below 1 mHz, i.e. the low radial order p-modes and the f- and g-modes. Consequently, the frequency differences of the low degree p-modes between models N99 and C88 vary from $`-0.1\mu `$Hz to $`0.25\mu `$Hz when the frequency increases from 1 mHz to 5 mHz, with a minimum value of $`-0.5\mu `$Hz around 2 mHz. The normalized frequency differences for p-modes of degree $`\ell =3`$ to $`\ell =150`$ are negative and change by less than $`1\mu `$Hz in the observed range. As expected, the changes of nuclear reaction rates do not modify the frequencies of oscillations of degree larger than 70.
Figure 8 shows the frequency differences in the low frequency range between N99 and A98, and Table 4 gives the frequencies of g- and p-modes for $`\ell =0`$ to $`\ell =2`$ in the same frequency range. In the low frequency range $`400\mu `$Hz – 1 mHz, it appears that the p-mode frequencies are changed by less than $`0.5\mu `$Hz between N99 and C88, i.e. by less than $`+0.1\%`$, with a larger effect at lower frequency. Below $`200\mu `$Hz, the oscillations are gravity modes with an asymptotic behavior, and the relative period differences are almost proportional to those of $`P_0`$ given in Table 4 (see Provost et al. pbm98 (1998) for details). Between 200 and $`400\mu `$Hz, the oscillations are gravity modes, or f- and p1-modes, and they are more influenced by the change of the Brunt–Väisälä frequency in the solar core induced by the changes in nuclear reactions, except for the p1-modes with $`\ell =0`$ and $`\ell =1`$. The frequency shifts are much larger when comparing models N99 and C88, of the order of 1 to $`1.5\mu `$Hz, i.e. about $`1\%`$, as the frequency varies from $`200\mu `$Hz to $`400\mu `$Hz.
In the range $`0.1R_{\odot }\le R\le 0.9R_{\odot }`$ where the inversions of the helioseismic data are reliable, the sound speed of the three models has been compared with the experimental seismic sound speed results of Turck-Chièze et al. (t97 (1997)). Figure 9 shows that the relative differences are below a few $`10^{-3}`$. The discrepancy between the Sun and the models is larger for model N99, with a sound speed too small just below the convection zone and too large in the core. Table 2 shows that the same holds for the quantities $`\delta \nu _{02}`$ and $`\delta \nu _{13}`$ of the models compared to the corresponding observed values $`\delta \nu _{n,\ell }`$ derived from GOLF (Grec et al. gr97 (1997)) and VIRGO/LOI (Fröhlich et al. f97 (1997)) observations on SoHO.
## 5 Discussion and conclusions
We have compared the structure, the neutrino fluxes, the chemical composition profiles and the helioseismological properties of calibrated standard solar models computed with the adopted nuclear reaction rates of the European compilation NACRE (Angulo et al. a99 (1999)) with those of calibrated solar models computed with the nuclear reaction rates of Caughlan & Fowler (cf88 (1988)) and Adelberger et al. (a98 (1998)).
Roughly speaking, the thermonuclear reaction rates of the PP chains adopted by NACRE and, to a lesser extent, by Adelberger et al., are slightly less efficient than those adopted by Caughlan & Fowler. The calibration then generates models with cores of larger temperature, density, helium content and sound speed, with a concomitant increase of the neutrino fluxes, except for $`\nu _{\mathrm{pp}}`$ and $`\nu _{{}_{}{}^{8}\mathrm{B}_{}^{}}`$; for the latter, the decrease is due to the smaller rate of the reaction $`{}_{}{}^{7}\mathrm{Be}_{}^{}(p,\gamma ){}_{}{}^{8}\mathrm{B}_{}^{}{}_{}{}^{}`$. Thus the predicted neutrino fluxes are reduced for the chlorine and Kamiokande experiments, but almost unchanged for gallium. For Kamiokande and chlorine, N99 predicts values intermediate between A98 and C88.
The introduction of the NACRE thermonuclear reaction rates increases the discrepancy between the predicted and observed sound velocity profiles, both below the convection zone and in the solar core. These relative differences, though at the level of a few thousandths, are smaller for the model computed with the reaction rates of Caughlan & Fowler: the increase is $`+0.5\%`$ for A98 and $`+1\%`$ for N99. The radius at the base of the solar convection zone is in good agreement with the observed value for all models.
Though NACRE adopts an enhanced rate for the reaction of lithium burning $`{}_{}{}^{7}\mathrm{Li}_{}^{}(p,\alpha ){}_{}{}^{4}\mathrm{He}_{}^{}`$, the predicted depletion of photospheric lithium remains too small to fit the observed value.
The differences between calibrated solar models computed with the adopted thermonuclear reaction rates of the two new compilations are rather small; it is not really possible to choose between them. From an increase in the accuracy of the observed p-mode frequencies, and from a hoped-for detection of g-modes, one can expect to improve our knowledge of the stratification of the solar core, with the goal of validating, in the low energy regime, the thermonuclear reaction rates and their concomitant neutrino generation.
The NACRE compilation also provides estimates of the uncertainties on the adopted rates. These new features, now available to the user, are important to constrain the solar model; we are investigating this point in a work in progress.
###### Acknowledgements.
It is a pleasure to thank the referee Pr. J.N. Bahcall for bringing several references to our attention, helping us to clarify several points and making several constructive suggestions which have improved the paper. This work has been performed using the computing facilities provided by the OCA program “Simulations Interactives et Visualisation en Astronomie et Mécanique (SIVAM)”. W. Däppen is acknowledged for kindly providing the MHD package of equation of state.
# First Constraints on Iron Abundance versus Reflection Fraction from the Seyfert 1 Galaxy MCG–6-30-15
## 1 INTRODUCTION
The current paradigm for active galactic nuclei (AGN) is a central engine consisting of an accretion disk surrounding a supermassive black hole (e.g. see review by Rees 1984). The main source of energy is gravitational potential energy as material falls in and is heated to high temperatures in some sort of dissipative disk.
The accretion disk is assumed to consist of cold optically thick material. Cold in this context means that iron is more neutral than Fe XVII (oxygen is not fully ionized, although H and He may be ionized). Depending on the geometry, this material may be subjected to irradiation. Careful study of the X-ray reprocessing mechanisms can give much information about the immediate environment of the accreting black hole. The effects of reprocessing can often be observed in the form of emission and absorption features in the X-ray spectra of AGNs. In Seyfert 1 nuclei, approximately half of the X-rays are ‘reflected’ off the inner regions of the accretion disk (Guilbert & Rees 1988; Lightman & White 1988), and superposed on the direct (power-law) primary X-ray emission. The general consensus is that the power-law component is emitted in the corona above the disk: as photons pass through the corona, some fraction is upscattered to X-ray energies, and multiple Compton scatterings tend to produce a power-law X-ray spectrum. The principal observables of the reflection spectrum are a fluorescent iron K$`\alpha `$ line and the Compton backscattered continuum, which hardens the observed spectrum above ∼10 keV.
The iron line together with the reflection component are important diagnostics of the X-ray continuum source. The strength of the emission line relative to the reflection hump between 20–30 keV is largely dependent on the abundance of iron relative to hydrogen in the disk (George & Fabian 1991; hereafter GF91), as well as the normalization of the reflection spectrum relative to the direct spectrum. The relative normalization of the reflection spectrum probably depends primarily on the geometry (i.e. the solid angle subtended by the reflection parts of the disk as seen by the X-ray source). However, strong light bending effects (e.g. Martocchia & Matt 1996) or special-relativistic beaming effects (e.g. Reynolds & Fabian 1997, Beloborodov 1999) can also enhance the amount of reflection.
MCG–6-30-15 is a bright nearby ($`z=0.0078`$) Seyfert 1 galaxy that has been extensively studied by every major X-ray observatory since its identification. An extended EXOSAT observation provided the first evidence for fluorescent iron line emission (Nandra et al. 1989), which was attributed to X-ray reflection. Confirmation of these iron features by Ginga, as well as the discovery of the associated Compton reflected continuum, supported the reflection picture (Nandra, Pounds & Stewart 1990; Pounds et al. 1990; Matsuoka et al. 1990), while ASCA data have shown the iron line to be broad, skewed, and variable (e.g. Tanaka et al. 1995; Iwasawa et al. 1997). The high energy and broad-band coverage afforded by the Rossi X-ray Timing Explorer (RXTE) show convincingly the simultaneous presence of both the broad iron line and the reflection component for the first time in this object (Lee et al. 1998). Data on MCG–6-30-15 from BeppoSAX also confirm the presence of a broad skewed iron line and reflection continuum (Guainazzi et al. 1998).
We present in this paper a long look at MCG–6-30-15 simultaneously by RXTE and ASCA spanning a time interval of $`400\mathrm{ks}`$. (The RXTE on-source time was $``$ 400 ks and ASCA on-source time was $`200\mathrm{ks}`$). Our observations clearly confirm the presence of a redshifted broad iron K$`\alpha `$ line at $``$ 6.3 keV and reflection hump between 20–30 keV. The high energy instrument HEXTE on RXTE coupled with ASCA’s sensitivity at the lower energies allow us to constrain for the first time the relationship between reflection fraction and abundance values at the 99 per cent confidence level. We explore these features of reflection and iron emission in detail. An in-depth investigation of variability is beyond the scope of this paper and will be addressed in a later paper.
## 2 Observations
MCG–6-30-15 was observed by RXTE spanning ∼400 ks over the period from 4 Aug 1997 to 12 Aug 1997 by both the Proportional Counter Array (PCA) and High-Energy X-ray Timing Experiment (HEXTE) instruments. (The final useable integration times were 304 ks for the PCA and 114 ks for HEXTE.) It was simultaneously observed for $`200\mathrm{ks}`$ (with a useable integration time of 197 ks) by the ASCA Solid-state Imaging Spectrometers (SIS) over the period 1997 August 3 to 1997 August 10, with a half-day gap part-way through the observation (PI : H. Inoue). The SIS was operated in Faint mode throughout the observation, using the standard CCD chips (S0C1 and S1C3). We concentrate primarily on the RXTE spectra in this paper.
### 2.1 RXTE and ASCA Instruments
The Rossi X-Ray Timing Explorer (RXTE) consists of three instruments. The two pointed instruments are the Proportional Counter Array (PCA) that covers the lower energy range and the High Energy X-ray Timing Experiment (HEXTE) that covers the higher energies. Together, the two instruments cover the energy band between 2 and 200 keV. The PCA consists of 5 Xenon Proportional Counter Units (PCUs) sensitive to X-ray energies between 2–60 keV with $`18`$ per cent energy resolution at 6 keV. The total collecting area is 6500 $`\mathrm{cm}^2`$ ($``$ 3900 $`\mathrm{cm}^2`$ for 3 PCUs) with a $`1^{}`$ FWHM field of view. The HEXTE instrument is coaligned with the PCA and covers an energy range between 20–200 keV. For a more thorough review on these instruments, we refer the reader to Jahoda et al. (1996) and Rothschild et al. (1998).
The Advanced Satellite for Cosmology and Astrophysics (ASCA) is a Japanese X-ray observatory that was launched on 1993 February 20, designed and constructed as a joint endeavor with the United States. It consists of four identical grazing-incidence X-ray telescopes, each terminating with a fixed detector. The focal plane detectors are two CCD cameras (Solid state Imaging Spectrometer, or SIS) and two gas scintillation imaging proportional counters (Gas Imaging Spectrometer, or GIS). All four detectors are operated simultaneously all the time. The ASCA SIS is sensitive in the energy range between 0.4 and 10 keV, with an energy resolution of 2 per cent at 5.9 keV. Its field of view is 22 $`\mathrm{arcmin}^2`$. The primary goal of the SIS is spectroscopy in the 0.4-10 keV energy band. Its PSF is completely determined by the telescope rather than the detector response. For a more in-depth discussion, we defer to Tanaka, Inoue & Holt (1994).
### 2.2 Data Analysis
#### 2.2.1 RXTE Reduction
We extract PCA light curves and spectra from only the top Xenon layer using the newly released Ftools 4.1 software. Data from PCUs 0, 1, and 2 are combined to improve signal-to-noise at the expense of slightly blurring the spectral resolution. Data from the remaining PCUs (PCU 3 and 4) are excluded because these instruments are known to periodically suffer discharge and are hence sometimes turned off.
Good time intervals were selected to exclude Earth occultations and South Atlantic Anomaly (SAA) passages, and to ensure stable pointing. We also filter out electron contamination events.
We generate background data using pcabackest v2.0c in order to estimate the internal background caused by interactions between the radiation/particles and the detector/spacecraft at the time of observation. This is done by matching the conditions of observations with those in various model files. The model files used are the L7-240 background models which are intended to be specialized for application to faint sources with count rate less than 100 cts/sec.
The PCA response matrix for the RXTE data set was created using pcarsp v2.36. In the course of performing the spectral fitting described in this section, we discovered a bug in the pcarmf package. This resulted in the 1998-Aug-29 memo (http://legacy.gsfc.nasa.gov/docs/xte/whatsnew/calibration.html#cals41b) from NASA-GSFC detailing the circumstances under which pcarmf does not properly account for the temporal changes in the response matrices. All spectral fitting presented here has been corrected for this software bug. Background models and response matrices are representative of the most up-to-date PCA calibrations.
The net HEXTE spectra were generated by subtracting spectra of the off-source positions from the on-source data. Time intervals were chosen to exclude 32 seconds prior to and 320 seconds following SAA passages. This conservative approach avoids periods when the internal background is changing rapidly. Data in which the satellite elevation is less than 10 degrees above the Earth’s limb are also excluded. We use the standard 1997 March 20 HEXTE response matrices provided by the RXTE Guest Observer Facility (GOF) at Goddard Space Flight Center. The relative normalizations of the PCA and the two HEXTE clusters are allowed to vary, due to uncertainties ($`<`$ about 5 per cent) in the HEXTE deadtime measurement.
#### 2.2.2 ASCA Reduction
ASCA data reduction was carried out using FTOOLS versions 4.0 and 4.1 with the standard calibration provided by the ASCA GOF. Detected SIS events with a grade of 0, 2, 3 or 4 are used for the analysis. One of the standard data selection criteria, br earth (the elevation angle of the source from the bright Earth rim), does not materially affect the hard ASCA data, while it does contribute to the soft X-ray data from the SIS at some level. We use SIS data of approximately 231 ks from each detector for the spectral analysis. The source counts are collected from a region centred at the X-ray peak, within ∼4 arcmin from the SIS and 5 arcmin from the GIS. The background data are taken from a (nearly) source-free region in the same detector with the same observing time.
Table 1 details the average count rate and fluxes for specified energy bands as detected by the ASCA S0 and S1 instruments, and the RXTE PCA and HEXTE Cluster A and Cluster B detectors.
Fig. 1 shows the ASCA S0 160-2700 pha-channel (∼0.6–10 keV) and the RXTE PCA 1–129 pha-channel (∼2–60 keV) background subtracted light curves. There is a gap of ∼60 ks in the ASCA light curve, in which the satellite observed IC4329A while MCG–6-30-15 underwent a large flare observed by RXTE. Significant variability can be seen in both light curves on short and long timescales, which will be investigated in detail in a later paper. Flare and minimum events are seen to correlate temporally in the two light curves.
## 3 Spectral Fitting
We fit the data in a number of ways in order to investigate the known features of fluorescent iron emission (e.g. Fabian et al. 1994, Tanaka et al. 1995) and Compton reflection (e.g. Lee et al. 1998; Pounds et al. 1990, Matsuoka et al. 1990; Nandra & Pounds 1994) in this object.
We restrict the ASCA and PCA data analysis to be respectively between 3 and 10 keV, and 3 and 20 keV (Fig. 2 shows that the PCA background dominates above 25 keV). The lower energy restriction at 3 keV is selected in order to remove the need to model the photoelectric absorption due to Galactic ISM material, or the warm absorber that is known to be present in this object. It also allows us to bypass recent problems with residual dark currents in the ASCA 0.5–0.8 keV energy band, and RXTE calibration problems that may still exist at energies below 2 keV. Despite the fact that these absorption features are only important below ∼2 keV, we nevertheless check their significance in two ways : (1) we fix the column density at the value of $`4.09`$ $`\times `$ $`10^{20}`$ $`\mathrm{cm}^{-2}`$ to account for Galactic ISM absorption along the line of sight to this object, and (2) we allow this parameter to be free. In both cases, we find that the difference in best fit values between a model that includes and one that excludes this absorption effect is negligible. Additionally, for the latter case we find that the model is unable to place any tight constraints on the column density in the chosen energy range for the RXTE data. Accordingly, we neglect this parameter in our fits. We have also checked that the standard background-subtraction methods described in Section 2.2.1 adequately account for the PCA background in the energies of interest. As added checks on the quality of the reduction and background subtraction, we extract spectra from 81 ks of Earth-occulted data (elv $`<`$ 0), and find for the occultation data that the normalized flux per keV is zero over the selected energy range. HEXTE data are restricted to be between 16 and 40 keV in order that we may adequately model the reflection hump; HEXTE response matrices are inadequate below 16 keV (William Heindl, 1998 private communication). Systematics of 0.5 per cent were added to the PCA spectra.
### 3.1 Spectral Features
A nominal fit to all three data sets (i.e. ASCA, RXTE PCA and HEXTE) demonstrates the clear existence of a redshifted broad iron K$`\alpha `$ line at $``$ 6.3 keV and reflection hump between 20 and 30 keV as shown in Lee et al. 1998. Fig. 3 further demonstrates the good agreement between ASCA and RXTE at energies below 10 keV, and in particular, at the energies surrounding the iron line.
As added assurance of the existence of the reflection component and of the good agreement between ASCA and RXTE, a multicomponent model fit that includes the reflected spectrum, applied to all three data sets, shows that the residuals are essentially flat (fig. 4); a gaussian component is used to account for the iron line. The underlying continuum is fit with the model pexrav, which is a power law with an exponential cutoff at high energies, reflected by an optically thick slab of neutral material \[Magdziarz & Zdziarski 1995\]. We fix the inclination angle of the reflector at $`30^{\circ }`$ so as to agree with the disk inclination one obtains when fitting accretion disk models to the iron line profile as seen by ASCA (Tanaka et al. 1995). The high energy cutoff is fixed at 100 keV, consistent with thermal corona models. We perform fits in which the high energy cutoff is allowed to be a free parameter and find that RXTE is incapable of placing any constraints on this value for MCG–6-30-15 (the best fit values can vary anywhere from ∼30 keV to a few hundred keV). For this reason, we rely on the values determined by BeppoSAX. (The Phoswitch Detector System PDS on board BeppoSAX has a better spectral sensitivity than the RXTE HEXTE at the higher energies.) For robustness, we further test the sensitivity of RXTE to the high energy cutoff by performing fits in which this parameter is allowed to vary within the 100–400 keV 90% confidence region determined by BeppoSAX (Guainazzi et al. 1998). The preferred high energy cutoff is 100 keV, with an unbounded upper error limit. We also test for the significance of the high energy cutoff value in the determination of fit parameters; this is done by comparing fit results for 100 keV and 400 keV cutoff energies; the two results do not differ with any statistical significance. We fit the RXTE data in the 3–40 keV energy range with a multiple component model consisting of a power law, reflection, and a redshifted Gaussian component. Errors are quoted at the 90 per cent confidence level ($`\mathrm{\Delta }\chi ^2`$ = 2.71, Bevington & Robinson 1994).
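The analytic pieces of this model are easy to sketch: the snippet below encodes only the cutoff power law and the redshifted Gaussian line (the Compton-reflected continuum computed by pexrav involves transfer integrals we do not reproduce). Parameter values are of the same order as the fits reported below, for illustration only.

```python
import numpy as np

def cutoff_powerlaw(E, A, Gamma, E_cut=100.0):
    """Direct continuum: A * E**-Gamma * exp(-E/E_cut), in
    ph/cm^2/s/keV, normalised at 1 keV."""
    return A * E ** (-Gamma) * np.exp(-E / E_cut)

def gaussian_line(E, I, E_line, sigma):
    """Redshifted iron K-alpha line of total intensity I (ph/cm^2/s)."""
    norm = I / (sigma * np.sqrt(2.0 * np.pi))
    return norm * np.exp(-0.5 * ((E - E_line) / sigma) ** 2)

E = np.linspace(3.0, 40.0, 512)        # keV, the fitted band
model = (cutoff_powerlaw(E, A=2.0e-2, Gamma=2.0)
         + gaussian_line(E, I=1.6e-4, E_line=6.0, sigma=0.6))
# The reflection hump (20-30 keV) would be added on top of this.
```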
The best fit values for the 100 keV cutoff energy case are detailed in Table 2. (A double gaussian fit using the ASCA parameters shows a negligible improvement of $`\mathrm{\Delta }\chi ^2`$ = 3 for one extra parameter.) For comparison, the best fit values for which this cutoff energy was fixed at 400 keV are : power-law slope $`\mathrm{\Gamma }`$ = $`2.07_{-0.04}^{+0.06}`$ with a power law flux at 1 keV $`A=(1.95_{-0.10}^{+0.13})\times 10^{-2}`$ $`\mathrm{ph}`$ $`\mathrm{cm}^{-2}`$ $`\mathrm{s}^{-1}`$. The reflection fraction is $`0.81_{-0.15}^{+0.16}`$ for lower elemental abundances set equal to that of iron of $`0.72_{-0.17}^{+0.26}`$ solar abundances. The redshifted line energy is $`5.98\pm 0.08\mathrm{keV}`$ with line width $`\sigma `$ = $`0.58_{-0.10}^{+0.11}\mathrm{keV}`$ and intensity of the iron line I = $`1.62_{-0.19}^{+0.26}\times 10^{-4}`$ $`\mathrm{ph}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. The equivalent width $`W_{K\alpha }`$ = $`300_{-35}^{+48}\mathrm{eV}`$ and $`\chi ^2`$ is 39 for 47 degrees of freedom. While the fit values between the two cutoff energy scenarios differ by little, it is clear that a high energy cutoff of 100 keV is preferred ($`\mathrm{\Delta }\chi ^2\approx 4`$ for no additional parameters). Accordingly, the results that follow are based on this fixed value for the high energy cutoff. We note that the $`\chi ^2`$ values are likely to be lower than expected due to the tendency of the reduction software to overestimate the errors.
### 3.2 Doppler and Gravitational effects on the Reflected Spectrum
We next investigate the degree to which the gravitational and Doppler effects which determine the line profile also affect the shape of the reflection continuum (the previous section does not account for this effect). We use a convolution model (rdblur) that assumes the same characteristics as the disk-line model for a Schwarzschild geometry by Fabian et al. (1989), applied to the reflected spectrum. (The reflected continuum is convolved with the same kernel as the diskline model.) For these fits, we assume a cold accretion disk inclined at $`i=30^{\circ }`$ (Tanaka et al. 1995). The radial emissivity index $`\alpha `$, assuming a power-law-type emissivity function ($`R^{-\alpha }`$) for the line, is left as a free parameter. The innermost radius of stable orbit $`R_{in}`$ for a Schwarzschild geometry is set to the Iwasawa et al. (1999) best fit ASCA value of $`6.7r_g`$; the outermost radius $`R_{out}`$ is left as a free parameter for this object ($`r_g\equiv GM/c^2`$ is the gravitational radius of the black hole). A comparison between the two models is shown in fig. 5.
We test for the effects of gravity and Doppler shifts by comparing the quality of the fits using (1) a model that accounts for the effects of relativistically smeared reflection and (2) a model that does not account for this effect. For the latter, the diskline model is used for the iron emission and pexrav for the continuum : pexrav + diskline. A similar model is used for the former, with the addition of a multiplicative component (rdblur) that convolves the continuum with the same kernel as the diskline model : rdblur$`\otimes `$(pexrav) + diskline. We have investigated fits in which the lower-Z and iron abundances were tied together and left as free parameters, and fits in which the former was fixed at 0.5 solar abundances and the latter fixed at twice solar abundances, appropriate for the $`W_{K\alpha }`$ value seen in this object. (A more detailed investigation of the effects of abundances on $`W_{K\alpha }`$ and the reflection fraction is presented in subsequent sections.) The results for the latter (fixed abundances) vacillate between statistically insignificant preferences for relativistically smeared reflection from the RXTE data ($`\mathrm{\Delta }\chi ^2=1.5`$, corresponding to 1 extra ‘hidden’ parameter, for 49 degrees of freedom), and the contrary from the ASCA data ($`\mathrm{\Delta }\chi ^2=1.5`$ for 681 degrees of freedom). We have also searched for this effect in the 1994 ASCA data and find that $`\mathrm{\Delta }\chi ^2=0.2`$ for 681 degrees of freedom. When the low-Z and iron abundances are untied and left as free parameters, the RXTE data prefer smeared reflection with $`\mathrm{\Delta }\chi ^2=2.7`$ for 47 degrees of freedom. Additionally, while the derived iron abundance remains at twice solar, there is a preference for slightly higher low-Z abundances in the case of smeared reflection. The $`\mathrm{\Delta }\chi ^2`$ values from ASCA remain similar to those previously mentioned when abundances were fixed; there is a clear indication that ASCA is insensitive to abundance measurements. The reflection fraction is fixed at unity.
While we suspect that relativistic effects are indeed present in the reflected spectrum, it appears that the sensitivity of the present data is insufficient for detecting the effects of relativistically smeared reflection with large statistical significance. We suspect that any deviation in $`\chi ^2`$ is largely due to the modeling of the iron edge at ∼7 keV, as illustrated in Fig. 5. The contrast between the models is remarkably noticeable at that energy : the standard pexrav model that we use for our fits invokes a sharp edge at those energies, which is seen in neither the time-averaged ASCA nor the RXTE data. The sharpness and depth of this feature are diminished when we invoke Doppler and gravitational effects. However, we are trying to detect a 10 per cent effect, which is also at the level of the ASCA error bars above ∼7 keV. Differences in $`\chi ^2`$ can additionally come from the smearing of the reflection hump, and this may be partly the reason that RXTE, with its higher energy coverage, prefers smeared reflection.
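The smearing itself is just a convolution on a logarithmic energy grid, where Doppler and gravitational shifts act as fixed offsets. The sketch below applies a toy red-skewed kernel (a crude stand-in for the Fabian et al. 1989 disk-line profile, which we do not reproduce) to a pexrav-like sharp edge; all names and numbers here are illustrative assumptions.

```python
import numpy as np

loge = np.linspace(np.log(3.0), np.log(40.0), 2048)  # log-energy grid
dle = loge[1] - loge[0]

def rdblur_like(spectrum, kernel):
    """Smear a reflection spectrum with a disk-line kernel,
    conserving photon number."""
    return np.convolve(spectrum, kernel / kernel.sum(), mode="same")

# Toy red-skewed kernel, sampled on the same log-energy spacing:
x = np.arange(-240, 81) * dle
kernel = np.exp(-((x + 0.05) / 0.06) ** 2) * (1.0 + np.tanh(-20.0 * x))

edge = 1.0 - 0.3 * (loge > np.log(7.1))  # sharp Fe K edge, pexrav-like
smeared = rdblur_like(edge, kernel)      # the edge is now broadened
```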
We have checked our findings against the level of present calibration uncertainties in the PCA response matrix. An absorbed power-law plus narrow Gaussian fit to 47 ks of archived data of the quasar 3C 273, from the same gain epoch (epoch 3) as our observations, gives residuals of less than 1 per cent for the energies of interest. Furthermore, the best fit 3C 273 values are comparable to those obtained with ASCA and BeppoSAX observations of this object (e.g. Haardt et al. 1998; Orr et al. 1998; Cappi et al. 1998).
However, because the fit results from a model that includes the effects of relativistically smeared reflection and one that does not differ little, we also present results for the former whenever appropriate in this paper. Due to the insensitivity of RXTE to the line profile, the results in which relativistically smeared reflection is considered, presented in Table 2 and subsequently, correspond to the model rdblur$`\otimes `$(pexrav + narrow gaussian) with $`\alpha `$, $`R_{in}`$ and $`R_{out}`$ frozen respectively at 2, $`6r_g`$, and $`30r_g`$. The best fit results are similar to those derived from the diskline models mentioned previously.
### 3.3 Constraints on Reflection Fraction and Iron Abundance
Having established that the reflection component exists, we next investigate the relationship between the iron abundance and reflection fraction for the reflection scenarios described in sections 3.1 and 3.2. In order to obtain a better understanding of physical models of AGN central regions, we need to disentangle the abundance from the absolute normalization of the reflection component. Due to the lack of good broad spectral coverage in previous observations, the fit parameters of reflection fraction, elemental abundances, and power-law index were strongly coupled. With the high energy coverage of HEXTE and ASCA’s good spectral resolution at energies between 0.6 and 10 keV, we can decouple these parameters for the first time and study the relationship each has with respect to the others. Fig. 6 shows the confidence contours for abundance versus reflection fraction as expected from a corona+disk model. The best fit values for abundance and reflection fraction for the standard pexrav case are respectively $`0.92_{-0.21}^{+0.31}`$ solar abundances and $`1.09_{-0.19}^{+0.26}`$; the corresponding values for smeared reflection are $`1.13_{-0.21}^{+0.31}`$ solar abundances and $`1.16_{-0.26}^{+0.40}`$. Both results are consistent with the scenario in which the primary X-ray source is above the accretion disk subtending an angle of 2$`\pi `$ sr (i.e. corresponding to a reflection fraction of $`\frac{\mathrm{\Omega }}{2\pi }=1`$).
We wish to stress the uniqueness of this data set for both its good statistics and its energy coverage. In the RXTE PCA 2-20 keV spectrum alone, we estimate that ∼4 million photons make up the spectrum for our 400 ks exposure. (The 2–20 keV flux is ∼$`8.6\times 10^{-3}`$ photons $`\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ for a detector effective area at those energies of ∼$`1000\mathrm{cm}^2`$, for 3 PCUs.) Its uniqueness is underlined by our ability to place combined constraints on both the line and the reflection fraction. However, it should be noted that pexrav does not model the iron emission feature, which for the case of the RXTE data is achieved using a Gaussian line profile. Accordingly, the consistency of the strength of the line with pexrav predictions for the resulting absorption is investigated via Monte Carlo simulations in the next section. GF91 have shown that the strength of the iron K-shell absorption features can be increased by varying the elemental abundances, or by ionizing the lower atomic weight elements. The enhanced iron abundance causes more iron K-shell absorption of the reflection continuum, thereby weakening the reflection continuum as the abundance rises.
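As a quick consistency check of the photon-count estimate quoted above (the flux, effective area and exposure are the round figures from the text):

```python
flux = 8.6e-3      # 2-20 keV photon flux, ph / cm^2 / s
area = 1.0e3       # PCA effective area for 3 PCUs, cm^2
exposure = 4.0e5   # s, i.e. ~400 ks
print(flux * area * exposure)   # ~3.4e6 photons, i.e. ~4 million
```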
For completeness, we also consider the significance of the inclination angle for our determination of the reflection fraction and abundance results. (We test this only for the non-smeared reflection case, since the determination of the inclination is largely due to the iron line peak rather than the reflected continuum.) This test is also relevant for assessing the accuracy of the value we obtain for $`W_{K\alpha }`$, which is dependent on the inclination and $`\mathrm{\Gamma }`$; ASCA’s determination of the inclination from this observation is $`31^{\circ }\pm 2^{\circ }`$ (Iwasawa et al. 1999, submitted), consistent with previous observations (Tanaka et al. 1995). For exaggerated inclinations of $`20^{\circ }`$ and $`40^{\circ }`$, we obtain reflection fraction and abundance values of respectively $`1.02_{-0.18}^{+0.25}`$ and $`0.91_{-0.21}^{+0.32}`$, and $`1.18_{-0.21}^{+0.26}`$ and $`0.95_{-0.22}^{+0.33}`$, which are within the errors of the values obtained from fits in which we fix the inclination at $`30^{\circ }`$. The apparent insensitivity of RXTE to the inclination value is due to the inability of the satellite to resolve the blue peak of the iron line. The inclination is determined from the energy where the blue peak of the line abruptly drops off; the spectral resolution of RXTE is inadequate for resolving this feature in much detail. $`\mathrm{\Gamma }`$ remains relatively unchanged.
### 3.4 Iron Abundance and Strength of the Reflected Continuum
Up until this point, we have treated the iron emission and absorption (which has a direct bearing on the derived reflection fraction) as separate additive components of a multi-component model. This is largely because pexrav models only the reflection continuum, which is imprinted with the absorption feature. However, we need to assess the consistency of the line intensity with the pexrav model predictions of this absorption. This is now discussed in the context of Monte Carlo simulations.
Previous workers (i.e. GF91; Matt, Perola & Piro 1991; Reynolds, Fabian, & Inoue 1995) have investigated the effect of abundance values on the equivalent width of the line and the associated reflected spectrum. This was done via Monte Carlo simulations in which incident photons are assigned a random initial energy from a power-law distribution function with $`\mathrm{\Gamma }\sim 1.9`$, and a random incident angle (corresponding to an isotropic source). The probabilities for a photon to be either Compton scattered or photoelectrically absorbed at a given energy are tracked.
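A heavily simplified sketch of such a simulation is given below: it tracks, per photon, only the competition between escape, scattering and photoabsorption, with a toy $`E^{-3}`$ bound-free cross section plus an iron K edge standing in for the full GF91 cross sections and angular redistribution. Every name and number here is a placeholder, not a quantity from the papers cited.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_powerlaw(n, gamma=1.9, e_min=2.0, e_max=100.0):
    """Sample photon energies from N(E) ~ E**-gamma by inversion."""
    g = 1.0 - gamma
    u = rng.random(n)
    return (e_min**g + u * (e_max**g - e_min**g)) ** (1.0 / g)

def p_absorb(E, fe_abund=1.0):
    """Toy photoabsorption probability per interaction: an E^-3
    bound-free term plus an iron K edge at 7.1 keV, competing with
    a constant (Thomson) scattering term."""
    sigma_bf = 8.0 * E**-3.0 + fe_abund * 0.02 * (E > 7.1)
    return sigma_bf / (sigma_bf + 1.0)

def albedo(n=20_000, fe_abund=1.0, p_escape=0.5):
    """Fraction of incident photons reflected rather than destroyed."""
    E = draw_powerlaw(n)
    reflected = 0
    for e in E:
        while True:
            if rng.random() < p_escape:      # photon leaves the slab
                reflected += 1
                break
            if rng.random() < p_absorb(e, fe_abund):
                break                        # photon is photoabsorbed
    return reflected / n

# Raising the iron abundance strengthens the K-shell absorption
# and so weakens the reflected continuum:
print(albedo(fe_abund=1.0), albedo(fe_abund=2.0))
```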
The equivalent width of the iron line in MCG–6-30-15 is $`266_{-33}^{+46}\mathrm{eV}`$ and $`331\pm 25\mathrm{eV}`$ for non-smeared and smeared reflection respectively, when the reflector has ∼solar abundance and a reflection fraction close to unity. Both are inconsistent with the predicted $`W_{K\alpha }`$ of $`\sim 150\mathrm{eV}`$ from GF91 for a slab of cold material subtending $`2\pi \mathrm{sr}`$ at the X-ray source, rotating around a static black hole. We show in Fig. 6 the locus of points in the abundance – reflection plane which has an equivalent width of $`250\pm 100\mathrm{eV}`$. It is clear that the highest reflection fraction part of the diagram leads to a much stronger line than is observed. The best-fitting solution lies within the region defined by the reflection-fitting contours and the lines of observed equivalent width.
Some fraction of an iron line can be due to reflection from outer material (the putative torus say). This fraction is however likely to be small in MCG–6-30-15 since the part of the line consistent with a zero-velocity narrow core sometimes disappears at some phases of the ASCA data (Iwasawa et al 1996).
The equivalent width of the line does not depend however on the iron abundance alone, but also upon the abundance of the lower-Z elements such as oxygen which can absorb the line before it emerges from the reflector. We have therefore investigated the behaviour of the solution in the multi-dimensional, iron abundance, lower-Z abundance, reflection fraction space. To make the problem tractable we adopt unit reflection fraction. This is indeed the preferred solution when the iron and lower-Z abundances are separated. The contours in the abundance plane for the (non-smeared) results are shown in Fig. 7, showing that separate abundances are indeed preferred. We then use a Monte-Carlo code to track the predicted equivalent width along the major locus of the abundance contours. The equivalent widths are shown in Fig. 8 and reveal that a value of 300 eV occurs when the iron abundance is about twice the solar value and the lower-Z abundances are about half the solar value. Details of actual fit values using the pexrav and relativistically smeared reflection models can be found in Table 2.
## 4 Discussion
This simultaneous long observation, coupled with the combined strengths of ASCA and RXTE, has enabled us to constrain for the first time the relationship between iron abundance and reflection fraction at the 99 per cent confidence level, as well as to confirm the presence of a broad skewed iron line in MCG–6-30-15. With the additional high energy coverage of HEXTE, we establish that the features of reflection are present and that the results are consistent with the scenario in which cold material subtends $`2\pi `$ sr at the X-ray source. We further investigate the effects of gravity and Doppler shifts on the reflection component, but find that both RXTE and ASCA are insensitive to these. Additionally, we verify that the effects of the cutoff energy do not compromise our results: by fitting the data with the 100 keV lower limit for the cutoff energy and comparing that to the 400 keV cutoff energy fit results, we find that they are consistent with each other within their errors. The preferred cutoff energy, however, is 100 keV.
Monte Carlo simulations further reveal that an overabundance of iron by a factor of ∼2 is needed to reconcile the large value of the equivalent width that we observe, for both the standard and relativistically smeared reflection scenarios; the equivalent width is even more dramatically enhanced when relativistic effects are invoked. By considering non-standard abundances, a consistent picture can be made in which both the iron line and the reflection continuum originate from the same material / structure such as, e.g., an accretion disk. We find also that the factor of two to three iron overabundance predicted by our data holds consistently even in comparisons with models of FeII emission, known to contribute strongly to the optical and UV continuum of many active objects (Wills, Netzer, & Wills 1985). It is also noteworthy to consider the importance of abundance determinations for assessing the chemical history of the host galaxy. For example, an iron rich environment with depleted amounts of lighter elements (as suggested by our data) may provide evidence that Type Ia supernova events were likely to have occurred in high proportions during the history of the galaxy.
## 5 ACKNOWLEDGEMENTS
We thank all the members of the RXTE GOF for answering our inquiries in such a timely manner, with special thanks to William Heindl and the HEXTE team for help with HEXTE data reduction. We also thank Keith Jahoda for explanations of PCA calibration issues, and Roderick Johnstone and Keith Arnaud for their time and help with software. JCL thanks the Isaac Newton Trust, the Overseas Research Studentship programme (ORS) and the Cambridge Commonwealth Trust for support. ACF thanks the Royal Society for support. WNB thanks the NASA RXTE grant NAG5-6852, and the NASA Long Term Space Astrophysics (LTSA) grant NAG5-8107 for support. CSR thanks the National Science Foundation for support under grant AST9529175, and NASA for support under the LTSA grant NASA-NAG-6337. KI thanks PPARC.
# Andreev scattering in nanoscopic junctions at high magnetic fields.
## Abstract
We report on the measurement of multiple Andreev resonances at atomic size point contacts between two superconducting nanostructures of Pb under magnetic fields higher than the bulk critical field, where superconductivity is restricted to a mesoscopic region near the contact. The small number of conduction channels in this type of contacts permits a quantitative comparison with theory through the whole field range. We discuss in detail the physical properties of our structure, in which the normal bulk electrodes induce a proximity effect into the mesoscopic superconducting part.
PACS numbers: 61.16.Ch, 62.20.Fe, 73.40.Cg
It is well known that atomic size contacts between metallic electrodes can be fabricated in a controlled way by means of the mechanically controllable break junction technique or the scanning tunneling microscope (STM). Indeed, by repeatedly indenting the tip into the sample one can achieve a stationary state in which a connecting neck between the electrodes is formed. This neck elongates and contracts during the repeated indentation, following a well defined pattern of elastic and plastic steps, which has been neatly measured in a combined STM-AFM experiment where conductance and forces could be recorded simultaneously.
The properties of a given neck can be probed by measuring the current-voltage characteristic within the same experiment, so that the STM serves at the same time as a fabrication tool and as an experimental probe of a very singular atomic size nanostructure. A reasonable knowledge of the geometry of the neck, which can be varied in a well controlled way, is obtained through a simultaneous measurement of the conductance during the fabrication process. The final form of these successfully fabricated structures is a long connecting neck joined at its ends to the bulk electrodes, whose radius decreases smoothly towards a central constriction that can be of atomic size.
In this experiment, the control of the morphology extends over two length scales: first, the overall form of the neck can be varied at mesoscopic length scales (hundreds or thousands of Å) by the repeated indentation process, and second, the smallest cross section can be varied at atomic scales (tens of Å) by applying small voltage variations to the z-piezotube. Recently, the new possibilities offered by atomic size contacts have led to progress in the understanding of some phenomena occurring at the nanoscopic level. It has been shown during the last years that lead (Pb) is a good material for creating this kind of small-dimension system, with the additional advantage of being a superconductor below $`T_c=7.16K`$. Indeed, the transport of current between two weakly linked superconductors brings noteworthy information about the contact, through e.g. the Josephson current or the multiple Andreev reflection mechanism. For the case of a single atom link between two electrodes, the authors of Ref. proposed that the effect of the multiple Andreev resonances on the I-V characteristics is a measure of the number and transparency of the conduction channels through a single atom.
In this work we focus on the magnetic field dependence of the I-V characteristics of single atom point contacts. Indeed, it is well known that superconductors of reduced dimensions, such as thin films or granular samples, remain superconducting well above $`H_c`$. As the magnetic field penetration depth of lead is about 390Å for a bulk sample, it is feasible to build connecting necks with smaller lateral dimensions using the repeated indentation procedure. We find indeed that sufficiently long and narrow necks show superconducting features up to fields as large as 20 times the bulk critical field of Pb (which is 0.05T at 1.5K). The structure of subgap resonances due to multiple Andreev reflections remains under field, and our analysis shows in detail how the pair breaking effect of the magnetic field, together with the N-S proximity effect from the bulk electrodes, smears the subgap resonances.
We use a stable STM setup with a tip and a sample of the same material (Pb), which is brought from the tunneling into the contact regime by cutting the feedback loop. The I-V curves were taken at 1.5K using a standard four wire technique. Great care was taken to shield the whole setup electrically, as RF noise is known to smear the subgap resonances in small contacts. The experiment is done by gently changing the smallest cross section of the neck so as to make a large number of atomic size contacts at each magnetic field without varying the overall form of the neck. Indeed, while our setup is sufficiently stable to maintain the same neck over a complete magnetic field sweep, we cannot maintain the morphology of the neck on the atomic level over a large field variation. Nevertheless, we could perform small field sweeps of several hundred Gauss with a given atomic arrangement, and we find the same result, so the measurement procedure does not change the results presented here. The maximal elongation of the piezotube, which is 1600 Å, limits the overall length of the necks. Here we discuss one typical case of a neck having a critical field of about 20 times the bulk critical field of lead, with the magnetic field always applied parallel to the long axis.
Fig.1 shows a representative choice of measured I-V curves of several last contacts before breaking, at zero field and under field. Each I-V curve is different for each contact and can be well fitted at zero field by the conduction channels model of Ref. (straight lines, upper panel of Fig.1). Accordingly, the experimental I-V curves of a last contact show a large variety of behaviors, which is slightly changed by varying the morphology of the contact at atomic length scales. This is compatible with measurements that record the conductance at a fixed voltage in a large number of contacts and show that the conductance of Pb last contacts exhibits steps comparable to the quantum unit of conductance, but with large fluctuations of the average value. The model of Ref. uses (and verifies) the theoretical prediction that the I-V curve between two superconductors which are weakly linked through a small number of conduction channels is highly non linear and varies strongly depending on the transparency $`T`$ of the junction ($`0<T<1`$; tunnel to contact regimes). It turns out that one conduction channel with a given $`T`$ is not sufficient to fit the I-V curves shown in Fig.1, but that it is necessary to add a number $`N`$ of theoretical curves, each one with a given $`T_n`$ between 0 and 1. This was related to the number $`N`$ and transparencies $`T_n`$ of the channels in each single atom contact, where $`N`$ and the average values of the $`T_n`$’s depend on the element studied. In the case of Pb, this gives $`N=3`$ with $`T_1`$ rather open (most frequently close to $`1`$) and $`T_{2,3}`$ more closed (smaller than $`1`$). In the uppermost part of Fig.1 the numbers show the experimentally measured $`T_n`$’s for each contact. We will not go into more details about this model, which is extensively discussed in Refs. In the following, we discuss how to explain the data under field.
We first analyze the influence of the magnetic field by introducing the pair breaking effect in the standard procedure, as formulated in a wavefunction representation. It was shown in Ref. that pair breaking effects can be incorporated by modifying the Andreev reflection amplitude, $`a(\omega )=u(\omega )-\sqrt{u^2(\omega )-1}`$, where $`u(\omega )`$ satisfies:
$$\frac{\omega }{\mathrm{\Delta }}=u\left(1-\frac{\mathrm{\Gamma }}{\sqrt{1-u^2}}\right)$$
(1)
where $`\mathrm{\Gamma }=1/(\mathrm{\Delta }\tau _{pb})`$, $`\tau _{pb}`$ is the pair breaking time and $`\mathrm{\Delta }`$ is the self-consistent superconducting gap including the pair breaking effects. This expression is generally valid, irrespective of the origin of the pair breaking mechanism. The value of $`\mathrm{\Gamma }`$ used in the fittings was assumed to be the same for all channels and all I-V curves at a given applied field.
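A minimal numerical sketch of this step is given below: it solves eq. (1) for the complex $`u(\omega )`$ by a damped fixed-point iteration (adequate for the moderate $`\mathrm{\Gamma }`$ relevant here; a careful treatment would track the physical branch explicitly) and then forms the modified Andreev amplitude. The small broadening eta and all names are our own choices, not taken from the references.

```python
import numpy as np

def u_of_omega(omega, Delta, Gamma, eta=1e-4, n_iter=3000):
    """Solve omega/Delta = u * (1 - Gamma/sqrt(1 - u^2)) by damped
    fixed-point iteration; eta is a small imaginary part selecting
    the retarded branch. Returns the complex u(omega)."""
    w = (np.asarray(omega, dtype=complex) + 1j * eta) / Delta
    u = w.copy()
    for _ in range(n_iter):
        u_new = w / (1.0 - Gamma / np.sqrt(1.0 - u * u))
        u = 0.5 * u + 0.5 * u_new        # damping for stability
    return u

def andreev_amplitude(u):
    """a = u - sqrt(u^2 - 1); the two roots multiply to 1, so we
    keep the branch with |a| <= 1."""
    a = u - np.sqrt(u * u - 1.0)
    return np.where(np.abs(a) <= 1.0, a, 1.0 / a)

omega = np.linspace(0.0, 2.0, 400)       # in units of the gap Delta
a = andreev_amplitude(u_of_omega(omega, Delta=1.0, Gamma=0.1))
# |a| ~ 1 inside the (smeared) gap and decays above the gap edge.
```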
The straight lines in Fig.1 show the fittings, which are as good as those obtained at zero field, provided that the pair breaking parameter is introduced. The number $`N`$ and the characteristic values of the transparency of each channel $`T_n`$ do not vary up to the largest fields. $`\mathrm{\Gamma }`$ is determined with a precision of about 20%.
The values of $\mathrm{\Gamma }$ explain the magnetic field dependence of the gap in the tunneling regime, which we have measured by completely breaking the contact. We could therefore follow and fit precisely the predicted influence of pair breaking on the structures associated with multiple Andreev reflections. This was not possible previously, as other realizations of multiple Andreev reflections (e.g. large point contacts, tunnel junctions with microbridges ) involve experimental setups which are much more complex than a single atom contact with a small number of conduction channels and cannot be modelled precisely.
We can gain more insight into the physics of this system if we consider the pair breaking parameter of a uniform superconducting cylinder in a magnetic field, which is given by:
$$\frac{\hbar }{\tau _{pb}}=\frac{e^2DR^2H^2}{6\hbar ^2}$$
(2)
where $R$ is the radius of the cylinder, $D=v_Fl/3$ is the diffusion coefficient, $l$ is the mean free path, and $H$ is the applied field. Following this model, in order to explain the values of $\mathrm{\Gamma }$ used in fig., we need a cylinder of a rather large radius ($R\approx 450$ Å, taking $\xi \approx R$; note that a smaller value of $\xi $ leads to even larger values of $R$) as compared to the usual width estimations for the neck presented here or other necks fabricated with the same method . Clearly, a model based on a simple cylinder does not explain the observed behavior; we need to take into account that the radius varies as a function of $z$.
We analyze in the following the order parameter, density of states and pair breaking parameter in a neck of varying radius. Indeed, a better agreement is obtained if we consider that at a given field, the superconducting region is in good contact with the part of the neck with larger radius which already became normal, so that pair breaking effects arise from the proximity effect of this normal region. Assuming that the electronic mean free path is smaller than the coherence length, we can describe the superconducting properties by the Usadel equations. We parametrize the Green's functions in terms of an angle parameter, $\theta (\stackrel{}{r},E)$, where $E$ is the energy measured from the chemical potential. Setting $\hbar =1$, they can be written as:
$$\frac{D}{2}\nabla ^2\theta +iE\mathrm{sin}(\theta )+|\mathrm{\Delta }|\mathrm{cos}(\theta )-2e^2D|\stackrel{}{A}|^2\mathrm{cos}(\theta )\mathrm{sin}(\theta )=0$$
(3)
where $\stackrel{}{A}=(Hr\stackrel{}{u}_\varphi )/2$ is the vector potential. We neglect the influence of other spin flip and inelastic processes. At the boundary of the contact, we have $\theta |_R=0$, and $R(z)$ determines the geometry of the neck (we neglect any radial dependencies, and take $R(z)\lesssim \xi $). We also assume that the magnetic field is unscreened within the neck. Then, $A$ can be replaced by its average, $A^2(z)=\frac{H^2R(z)^2}{12}$. Within this approximation, the vector potential enters in eq. (3) as giving rise to an effective, position dependent, pair breaking time. If we apply eq. (3) to a uniform wire, this pair breaking time reduces to that in eq. (2).
Figure 2 shows the superconducting order parameter, for different fields, as a function of position for a typical neck modelled by two truncated cones of $L=800$ Å length attached to bulk electrodes, with an opening angle of $\alpha =35^{\circ }$. We also take $L=3\xi $, so that $\xi \approx 260$ Å. There is a smooth transition to the normal state as the radius of the neck increases. This is further illustrated in Fig. 3, where the density of states is shown at different positions for $H=0.2$ T. For this field, the influence of the normal region is felt throughout the entire neck. Even at the central region, the gap is significantly rounded. This is also observed in the calculated density of states at the center shown in Fig. 4.
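A rough numerical rendering of this picture (our own illustration, reusing `solve_u` from the sketch above; it evaluates the local, gradient-free limit without self-consistency in $\mathrm{\Delta }$, and the apex radius `r0` and apex pair-breaking value `gamma0` are assumed numbers) is:

```python
import numpy as np

def neck_radius(z, alpha_deg=35.0, r0=5.0):
    """Two truncated cones joined at z = 0 (lengths in angstrom);
    r0 is an assumed apex radius, alpha_deg the opening angle."""
    return r0 + np.abs(z) * np.tan(np.radians(alpha_deg))

# Since A^2(z) = H^2 R(z)^2 / 12, the effective pair-breaking parameter
# grows as R(z)^2; normalize to an assumed value gamma0 at the apex.
z = np.linspace(-800.0, 800.0, 401)
gamma0 = 0.02                                    # assumed; scales as H^2
gamma_z = gamma0 * (neck_radius(z) / neck_radius(0.0)) ** 2

# Local (gradient-free, non-self-consistent) estimate of the smearing:
u_z = solve_u(omega=0.5, delta=1.0, gamma=gamma_z)
dos_z = np.real(u_z / np.sqrt(u_z * u_z - 1.0))  # N(E=0.5*Delta, z)/N_0
```

This local estimate already shows the smooth crossover to the normal state with increasing $|z|$; the full solution of eq. (3) adds the proximity coupling between neighboring slices through the gradient term.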
From the solution of the Usadel equations we can infer the amplitude for Andreev reflections at the contact surface, which is given by $i\mathrm{tan}[\theta (z=0,E)/2]$. This function is slightly different from the standard expression used to incorporate pair breaking effects in a point contact (see eq. (1) and Ref.). We have checked that there are no appreciable differences in the quality of the fits to the experimental data, shown in fig., with a reasonable value for $\xi \approx 260$–$300$ Å.
In conclusion, we have measured and analyzed the multiple Andreev scattering resonances of atomic sized Pb contacts in the presence of a magnetic field greater than the bulk critical field. In this regime, superconductivity is restricted to a small neck of mesoscopic dimensions. With our STM we are able to build and control in situ structures which are a unique example of weak links with dimensions tunable from atomic to mesoscopic length scales, opening a new field of studies in nanophysics. We present a quantitative comparison of experiment and theory of pair breaking effects on multiple Andreev resonances.
We would like to thank A. Izquierdo, G. Rubio and N. Agraït for discussions and help. One of us (E. B.) is thankful to the Universität Karlsruhe for its hospitality. Financial support from the TMR program of the European Commission under contract ERBFMBICT972499, the CICyT (Spain) through grant PB96-0875, the CAM (Madrid) through grants 07N/0045/98 and FPI, and the Spanish DGIGyT under contract PB97-0068 is gratefully acknowledged.
# hep-ph/9907315
## 1 Introduction
The cross section of inclusive $`J/\psi `$ hadroproduction measured in $`p\overline{p}`$ collisions at the Fermilab Tevatron turned out to be more than one order of magnitude in excess of what used to be the best theoretical prediction, based on the colour-singlet model (CSM). As a solution to this puzzle, Bodwin, Braaten, and Lepage proposed the existence of so-called colour-octet processes to fill the gap. The central idea is that $`c\overline{c}`$ pairs are produced at short distances in colour-octet states and subsequently evolve into physical (colour-singlet) charmonia by the nonperturbative emission of soft gluons. The underlying theoretical framework is provided by nonrelativistic QCD (NRQCD) endowed with a particular factorization hypothesis, which implies a separation of short-distance coefficients, which are amenable to perturbative QCD, from long-distance matrix elements, which must be extracted from experiment. This formalism involves a double expansion in the strong coupling constant $`\alpha _s`$ and the relative velocity $`v`$ of the bound charm quarks, and takes the complete structure of the charmonium Fock space into account.
In the case of inelastic $J/\psi $ photoproduction, NRQCD with colour-octet matrix elements tuned to fit the Tevatron data predicts at leading order (LO) a distinct rise in cross section as $z\to 1$, where $z$ is the fraction of the photon energy transferred to the $J/\psi $ meson in the proton rest frame, which is not observed by the H1 and ZEUS collaborations at HERA. This colour-octet charmonium anomaly has cast doubts on the validity of the NRQCD factorization hypothesis , which seems so indispensable to interpret the Tevatron data in a meaningful way.
Here, we report on an attempt to rescue the NRQCD approach by approximately taking into account dominant higher-order (HO) QCD effects. The basic idea is as follows. The predicted excess over the HERA data at $z$ close to unity is chiefly generated by colour-octet $c\overline{c}$ pairs in the states ${}^{1}S_{0}$, ${}^{3}P_{0}$, and ${}^{3}P_{2}$ , where we use the spectroscopic notation ${}^{2S+1}L_{J}$ to indicate the spin $S$, the orbital angular momentum $L$, and the total angular momentum $J$. On the other hand, in hadroproduction at the Tevatron, the contributions from the colour-octet ${}^{1}S_{0}$ and ${}^{3}P_{J}$ states fall off much more strongly with increasing transverse momentum ($p_T$) than the one due to the colour-octet ${}^{3}S_{1}$ state , which is greatly suppressed in the quasi-elastic limit of photoproduction . Consequently, the nonperturbative matrix elements which are responsible for the colour-octet charmonium crisis are essentially fixed by the Tevatron data in the low-$p_T$ regime. This is precisely where the LO approximation used in Ref. is expected to become unreliable due to multiple-gluon radiation from the initial and final states. In Ref. , this phenomenon was carefully analyzed in a Monte Carlo framework and found to significantly increase the LO cross section. In Ref. , fits to the latest prompt $J/\psi $ data taken by the CDF collaboration at the Tevatron were performed incorporating this information on the dominant HO QCD effects. The resulting HO-improved NRQCD predictions for inelastic $J/\psi $ photoproduction at HERA do not overshoot the H1 and ZEUS data any more.
## 2 Theoretical input
The underlying theoretical framework is explained in Ref. . If $`p_T`$ is of order $`M_{J/\psi }`$ or below, we adopt the fusion picture, where the $`c\overline{c}`$ bound state is formed within the primary hard-scattering process. In the high-$`p_T`$ regime, we work in the fragmentation picture, where the $`c\overline{c}`$ bound state is created from a single high-energy gluon, charm quark or antiquark which is close to its mass shell. We take the renormalization scale $`\mu `$ and the common factorization scale $`M_f`$ to be $`\mu =M_f=m_T`$, where $`m_T=\sqrt{4m_c^2+p_T^2}`$ is the $`J/\psi `$ transverse mass. We define the starting scale $`\mu _0`$ of the fragmentation functions (FF’s) as $`\mu _0=2m_c=M_{J/\psi }`$. For our LO analysis, we choose CTEQ4L and GRV-LO as the proton and photon PDF’s, respectively, and evaluate $`\alpha _s`$ from the one-loop formula with $`\mathrm{\Lambda }^{(4)}=236`$ MeV . Whenever we include higher orders, we adopt the $`\overline{\mathrm{MS}}`$ renormalization and factorization scheme and employ CTEQ4M , GRV-HO , and the two-loop formula for $`\alpha _s`$ with $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(4)}=296`$ MeV .
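For reference, the running couplings quoted above follow from the standard one- and two-loop formulas; a small Python sketch (the example scale built from $m_c=1.5$ GeV and $p_T=5$ GeV is our own illustrative choice, not a value fixed by the text):

```python
import numpy as np

def alpha_s(mu, lam, nf=4, loops=2):
    """Standard running coupling; mu and Lambda^(nf) in GeV.

    One loop: 4*pi/(b0*L); two loops add the usual MS-bar correction
    -b1*ln(L)/(b0^2*L), with L = ln(mu^2/Lambda^2).
    """
    b0 = 11.0 - 2.0 * nf / 3.0
    b1 = 102.0 - 38.0 * nf / 3.0
    L = np.log(mu ** 2 / lam ** 2)
    a = 4.0 * np.pi / (b0 * L)
    if loops == 2:
        a *= 1.0 - b1 * np.log(L) / (b0 ** 2 * L)
    return a

m_c, p_T = 1.5, 5.0                        # GeV (illustrative)
m_T = np.sqrt(4.0 * m_c ** 2 + p_T ** 2)   # J/psi transverse-mass scale
print(alpha_s(m_T, lam=0.236, loops=1))    # LO setup of the text
print(alpha_s(m_T, lam=0.296, loops=2))    # NLO-style setup
```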
Unfortunately, not all ingredients which would be necessary for a fully consistent NLO analysis are yet available. In the case of fusion, the NLO corrections to the partonic cross sections are only known for direct photoproduction in the CSM . Furthermore, in the case of fragmentation, the NLO corrections to the FF's at the initial scale $\mu _0$ are still unknown. In the case of direct $J/\psi $ photoproduction under typical HERA conditions, the QCD correction factor $K$ to the inclusive cross section in the CSM was found to be as low as 1.2 in the inelastic regime $z\lesssim 0.9$. It is plausible that the $K$ factors for the colour-octet and resolved-photon processes should be modest, too. However, the situation should be very different for inclusive $J/\psi $ hadroproduction at the Tevatron, especially in the low-$p_T$ range, where one expects substantial HO QCD effects due to multiple-gluon radiation. Such effects were estimated for the fusion mechanism in Ref. by means of the Monte Carlo event generator PYTHIA after implementing therein the relevant colour-octet processes, and they were indeed found to be very sizeable. The impact of these effects on the fit to the latest CDF data is demonstrated in Table 1.
## 3 Predictions for charmonium photoproduction
We now explore the phenomenological consequences of this HO improvement for inclusive $J/\psi $ photoproduction in $ep$ collisions at HERA, with beam energies $E_e=27.5$ GeV and $E_p=820$ GeV in the laboratory frame, assuming the maximum photon virtuality to be $Q_{\mathrm{max}}^2=4$ GeV$^2$. As in the H1 and ZEUS publications, we convert the $ep$ cross sections to averaged $\gamma p$ cross sections by dividing out the photon-flux factor. The contribution due to $\psi ^{\prime }$ mesons with subsequent decay into $J/\psi $ mesons is approximately taken into account by multiplying the theoretical predictions by an overall factor of 1.15. The data are mostly concentrated in the low-$p_T$ range, where the fusion picture should be valid.
In Fig. 1, we compare our LO and HO-improved predictions for the $p_T^2$, $z$, and $y$ distributions with the H1 and ZEUS data. Here, $y$ is the $J/\psi $ rapidity in the laboratory frame, which is taken to be positive in the proton flight direction. The circumstance that $\mathcal{O}^{J/\psi }[\underline{8},{}^{1}S_{0}]$ and $\mathcal{O}^{J/\psi }[\underline{8},{}^{3}P_{0}]$ are not separately fixed by the fit to the CDF data induces some uncertainty in the colour-octet contributions to the cross sections of direct and resolved photoproduction and thus also in the total cross section. This uncertainty is encompassed by the results for $\mathcal{O}^{J/\psi }[\underline{8},{}^{1}S_{0}]=M_r^{J/\psi }$ and $\mathcal{O}^{J/\psi }[\underline{8},{}^{3}P_{0}]=0$ and those for $\mathcal{O}^{J/\psi }[\underline{8},{}^{1}S_{0}]=0$ and $\mathcal{O}^{J/\psi }[\underline{8},{}^{3}P_{0}]=(m_c^2/r)M_r^{J/\psi }$, which are actually shown in Fig. 1. We observe that, at LO, the colour-octet contribution of direct photoproduction is dominant for $z\gtrsim 0.5$. Thus, it also makes up the bulk of the $p_T^2$ and $y$ distributions, which are integrated over $0.4<z<0.9$. This contribution is responsible for the significant excess of the LO predictions over the experimental results for $d\sigma /dp_T^2$ and $d\sigma /dz$ at low $p_T$ and high $z$, respectively. On the other hand, the HO-improved predictions tend to undershoot the data, leaving room for a substantial $K$ factor due to the missing NLO corrections to the partonic cross sections. Now, the colour-singlet contribution of direct photoproduction, which is well under theoretical control , is by far dominant, except in the corners of phase space, at $z\lesssim 0.15$ and $z\gtrsim 0.85$, where the colour-octet contributions of resolved and direct photoproduction, respectively, take over. Of course, we should also bear in mind that the predictions shown in Fig. 1 still suffer from considerable theoretical uncertainties related to the choice of the scales $\mu $ and $M_f$, the PDF's, and other input parameters such as $m_c$ and $\mathrm{\Lambda }^{(4)}$ . From these observations, we conclude that it is premature at this point to speak about a discrepancy between the Tevatron and HERA data of inclusive $J/\psi $ production within the framework of NRQCD .
## 4 Conclusions
We determined the $J/\psi $ colour-octet matrix elements which appear in the NRQCD expansion at leading order in $v$ by fitting the latest Tevatron data of prompt $J/\psi $ hadroproduction . We found that the result for the linear combination $M_r^{J/\psi }$ of $\mathcal{O}^{J/\psi }[\underline{8},{}^{1}S_{0}]$ and $\mathcal{O}^{J/\psi }[\underline{8},{}^{3}P_{0}]$ is substantially reduced if the HO QCD effects due to the multiple emission of gluons, which had been estimated by Monte Carlo techniques , are taken into account. As an important consequence, the intriguing excess of the LO NRQCD prediction for inelastic $J/\psi $ photoproduction at $z$ close to unity over the HERA measurements disappears. We assess this finding as an indication that it is premature to proclaim an experimental falsification of the NRQCD framework on the basis of the HERA data. Although we believe that our analysis captures the main trend of the HO improvement, we stress that it is still at an exploratory level, since a number of ingredients which would be necessary for a fully consistent NLO treatment of inclusive $J/\psi $ hadroproduction and photoproduction are still missing.
Acknowledgements. The author thanks Gustav Kramer for his collaboration on this work.
# astro-ph/9907441
## 1 The paradigm
Two anisotropic effects give rise to the unified view of quasars and radio galaxies: relativistically-beamed twin jets feed the double lobes of powerful radio sources, and the black-hole/accretion disk system is shrouded in a dusty torus whose axis is aligned with the radio axis. A radio galaxy with double lobes is seen when the system is viewed side-on. As lines-of-sight approach the axis, the torus opening reveals the light of the nuclear black-hole/accretion-disk system which comes to dominate the galaxy light to produce a quasi-stellar object. When lines-of-sight coincide closely with the axis, Doppler enhancement of the relativistically-approaching radio jet leads to its compact flat-spectrum radio emission dominating the extended emission. Such 'core-dominated' quasars show superluminal motions in the jet structures as revealed by repeated VLBI observations (e.g. ).
Recognition of these two mechanisms has given rise to the two current paradigms of radio-source unification, based on FRI and FRII radio galaxies as the two parent populations. The FRI radio galaxies show the two regions of highest surface brightness in radio emission along the jets feeding the double radio lobes; they are generally less powerful than the FRIIs, and do not show strong optical/UV emission lines. The core-dominated counterparts are BL Lac objects . The powerful FRII galaxies show the brightest regions at the extremities of the double lobes and have strong emission lines; their projected counterparts are steep-spectrum quasars (at angles to the line-of-sight permitting a view of the nucleus), and core-dominated quasars when the line-of-sight coincides closely with the radio axis.
The first step in our analysis (described in detail in ) is to estimate space densities as a function of epoch for the two isotropically-radiating parent populations, the FRI and FRII radio galaxies. As these objects dominate low-frequency radio surveys, we use counts from the 3C and 6C (151 MHz) surveys and the 3CRR luminosity distribution ( and R. Laing, private communication; see Figure 1) to derive space densities following the procedure of Wall et al. . A parametric representation of luminosity-dependent density evolution (Figure 1) is chosen to mimic the evolution found by Shaver et al. , and we determine the best-fit parameters through a downhill simplex minimization process.
The second step was to 'beam' these parent populations to determine the contribution they make to the beamed flat-spectrum populations found in higher-frequency ($\nu \gtrsim 1$ GHz) surveys. We adopt the simplest possible beaming models, characterised for each of the two populations by two parameters: a Lorentz factor describing the speed of ejection, and the rest-frame ratio of beamed core emission to total emission. Using Monte Carlo runs randomly orienting the parent sources, source-count predictions together with the proportion of beamed objects involved can be made at all frequencies. We use the minimization procedure to determine the beaming parameters providing the best prediction of the source counts at 5 GHz (Figure 2).
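A stripped-down version of such an orientation Monte Carlo is sketched below (our own illustration; the rest-frame core fraction `f_core` and the boosting index `p = 2` are assumptions for the example, not the fitted values of the analysis):

```python
import numpy as np

def observed_core_fraction(gamma, f_core, p=2.0, n=100_000, seed=0):
    """Randomly orient two-sided jets and Doppler-boost the core flux.

    delta = 1/(gamma*(1 - beta*cos(theta))); boosted core ~ delta**p.
    Returns the observed core-to-total flux ratio for each source.
    """
    rng = np.random.default_rng(seed)
    beta = np.sqrt(1.0 - 1.0 / gamma ** 2)
    cos_t = rng.uniform(0.0, 1.0, n)               # isotropic axes
    d_app = 1.0 / (gamma * (1.0 - beta * cos_t))   # approaching jet
    d_rec = 1.0 / (gamma * (1.0 + beta * cos_t))   # receding jet
    core = 0.5 * f_core * (d_app ** p + d_rec ** p)
    return core / (core + (1.0 - f_core))

frii = observed_core_fraction(gamma=8.5, f_core=0.003)   # quasar parents
fri = observed_core_fraction(gamma=15.0, f_core=0.003)   # BL Lac parents
```

Sources whose observed core fraction exceeds some cut would be classed as core-dominated; sweeping the two free parameters against the 5 GHz counts mimics the minimization described above.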
## 2 Results and discussion
For the parent populations, the new analysis of space densities now based on their completely-defined luminosity distributions finds three essential features. (1) Powerful evolution is required for the FRII population; in our parametric representation of evolution as $\mathrm{exp}(M\tau )$ with $\tau $ as look-back-time, $M_{\mathrm{max}}=10.9$ for the most radio-luminous FRII galaxies. (2) This density enhancement peaks at $z_c/2$ and tapers off to a redshift cutoff $z_c$ (Figure 1). We obtain $z_c=5.6$; the fit with this value is superior to the fit with no redshift cutoff at the 99.9% level of significance. (3) The FRI population shows little or no evolution ($M\approx 0$), in agreement with the relative uniformity of space density found for BL Lac objects ( and references therein).
As for the beaming models, we find an optimum value of Lorentz factor $`\gamma `$ = 8.5 for the radio quasars which are the beamed products of the FRII parents; the BL Lac objects which are the beamed products of the FRI parents show $`\gamma `$ = 15.0. These values are not dissimilar from those determined from VLBI observations of superluminal sources .
The analysis accounts for major features of the source statistics from a population definition which is physically meaningful. The increasingly broad ‘evolution bump’ in the source counts as survey frequency is raised comes about naturally through the increasing intrusion of beamed (flat-spectrum) objects. Despite the simplicity of assumptions, the limited data defining the luminosity distribution and the small number of parameters, the data are well described, as shown in Figure 2. Other tests are successful , including source-count prediction at different frequencies and the proportion of compact objects and broad-line objects as a function of flux density.
The success of the model demonstrates that essentially all radio AGN detected in sky surveys above 1 mJy may be encompassed by the unification hypothesis: quasars and BL Lac objects are double-lobed radio galaxies seen end-on. At smaller flux densities, the population of AGN declines and is replaced by the emergent population of starburst galaxies (Figure 2).
There are deficiencies. The model over-predicts counts at faint levels; better estimates of the local luminosity function and of the starburst-galaxy evolution (e.g. ) should be incorporated. Moreover it is now known that the redshift cutoff found for core-dominated objects is a function of radio luminosity; the cutoff moves to lower redshifts as radio luminosity decreases . At the highest radio luminosities the space density profile with redshift resembles the behaviour of the star-formation rate with epoch as determined by Steidel et al. . The simplistic ‘opera house’ models of figure 1 require modification accordingly. Finally VLBA/VLBI surveys of core-dominated quasars show that there is a range in jet speed with median values lower than those found here. How these are related to the ‘apparent’ jet speeds found here requires further consideration. Such modifications are unlikely to destroy the basic tenet of the analysis, that essentially all AGN found in radio surveys above 1 mJy can be described by unified (orientation-dependent) schemes. The modifications will refine the definition of the AGN space-density profile, and at such time the implications for associations between the AGN phemomena and starburst activity may emerge with greater clarity.
# Origin of the Native Driving Force for Protein Folding
Preprint NCU/CCS-1998-0920; NSC-CTS-981002
## Abstract
We derive an expression with four adjustable parameters that reproduces well the $`20\times 20`$ Miyazawa-Jernigan potential matrix extracted from known protein structures. The numerical values of the parameters can be approximately computed from the surface tension of water, water-screened dipole interactions between residues and water and among residues, and average exposures of residues in folded proteins.
Protein structure and design is a very important topic in life science, where physics and mathematics are indispensable to its understanding. Recently Li et al. pointed out some highly interesting and unexpected properties of Miyazawa and Jernigan's $20\times 20$ potential matrix ($M$) for protein structure. This matrix, whose elements are statistically deduced pair-wise interaction potential energies among the twenty types of amino acids in proteins of known structure, has been widely applied to protein design and folding simulations. Li et al. noticed that $M$ has a highly accurate leading principal-component representation: variations of the elements of $M$ from their mean can be expressed in terms of only the two leading eigenvalues of $M$ and the eigenvector $\stackrel{}{q}$ of the leading eigenvalue such that
$$M_{ij}\approx c_2q_iq_j+c_1(q_i+q_j)+c_0,$$
(1)
where $`i`$ and $`j`$ label the 20 amino acids, and $`c_0=1.38`$, $`c_1=5.03`$ and $`c_2=7.40`$, in units of $`RT`$, the gas constant times (room) temperature.
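This leading-component representation is easy to reproduce numerically. The following Python sketch is our own illustration, not code from the references: the matrix is synthetic (the actual MJ matrix must be taken from the literature) and the recovered eigenvector is defined only up to sign.

```python
import numpy as np

def leading_component_fit(M):
    """Fit M_ij ~ c2*q_i*q_j + c1*(q_i + q_j) + c0 by least squares,
    with q the eigenvector of the leading eigenvalue of symmetric M."""
    w, v = np.linalg.eigh(M)
    q = v[:, np.argmax(np.abs(w))]     # leading eigenvector (sign free)
    qs = np.add.outer(q, q).ravel()    # q_i + q_j
    qp = np.outer(q, q).ravel()        # q_i * q_j
    A = np.column_stack([qp, qs, np.ones_like(qs)])
    (c2, c1, c0), *_ = np.linalg.lstsq(A, M.ravel(), rcond=None)
    return q, c2, c1, c0

# Synthetic test: a matrix with this structure plus noise is fit almost
# perfectly, illustrating the quality of the rank-style representation.
rng = np.random.default_rng(1)
q_true = 0.2 + rng.normal(scale=0.1, size=20)
M = 7.4 * np.outer(q_true, q_true) + 5.03 * np.add.outer(q_true, q_true) + 1.38
M += 0.02 * rng.standard_normal(M.shape)
M = (M + M.T) / 2.0
q, c2, c1, c0 = leading_component_fit(M)
fit = c2 * np.outer(q, q) + c1 * np.add.outer(q, q) + c0
print(np.corrcoef(M.ravel(), fit.ravel())[0, 1])   # close to 1
```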
Two features of the right-hand-side of Eq. (1) stand out: 1) Not all residue-dependent terms are genuine two-body interactions; the $c_1$ terms represent one-body, mean-field potential energies. 2) Both the two-body $c_2$ terms and the one-body $c_1$ terms depend on the same set of $q$'s. Numerically, because the magnitudes of the $q$'s are small, the $c_1$ terms dominate over the $c_2$ term. This is consistent with the widely held notion that the earliest and fastest part of a protein folding process is by and large controlled by the hydrophobicity of the residues. Tables I and II show that indeed $q$ is moderately correlated with the hydrophobicities ($\mathrm{\Delta }G$). The pairwise product form of the two-body terms reminds one of dipole-dipole interaction, and this in turn would imply a connection between the one-body terms and the dipole moments of the residues. Tables I and II also show a noticeable correlation between $q$ and the dipole moments ($Q$) of the side-chains of the residues.
Dipole-dipole interaction. The interaction in vacuum between two electric dipoles $\stackrel{}{Q}_i$ and $\stackrel{}{Q}_j$ separated by $\stackrel{}{R}_{ij}=\widehat{n}R_{ij}$ is $V_{ij}=(\stackrel{}{Q}_i\cdot \stackrel{}{Q}_j-3(\widehat{n}\cdot \stackrel{}{Q}_i)(\widehat{n}\cdot \stackrel{}{Q}_j))/(4\pi ϵ_0R_{ij}^3)$. If the carriers of the dipoles are relatively unconstrained we expect attraction, with $-|\mu _r||Q_i||Q_j|\le V_{ij}\le 0$, where $|\mu _r|=D^2/2\pi ϵ_0R_{ij}^3$. In what follows, $Q_i$, $i=1,\mathrm{},20$, is the dipole moment of the $i$th side-chain, and $Q_w$ is the dipole moment of a water molecule. For residue-residue interaction, taking the inter-side-chain distance to be $R_{ij}\approx R_0\approx 6.5$ Å , and recalling that an electron-positron pair separated by one Å has a dipole moment of 4.8 $D$, we have $|\mu _r|\approx 0.172$ $(RT)$, which may be viewed as a maximum value for the coupling since in a real setting it is expected to be weakened owing to the presence of water molecules.
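The quoted scale is a one-line numerical check (SI constants; $RT$ taken as $4.2\times 10^{-21}$ J per molecule, as above):

```python
import numpy as np

DEBYE = 3.33564e-30        # C*m, one Debye
EPS0 = 8.85419e-12         # F/m, vacuum permittivity
RT = 4.2e-21               # J per molecule at room temperature
R0 = 6.5e-10               # m, inter-side-chain distance

mu_r = DEBYE ** 2 / (2.0 * np.pi * EPS0 * R0 ** 3)
print(mu_r / RT)           # ~0.17, matching the quoted 0.172 RT per Debye^2
```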
One-body terms. Let $E_0$ be the average bare surface-dependent solvation energy of a residue in water when the residue-water dipole interaction is not taken into account; $N_w$ be the average number of water molecules in contact with a residue; $\mu _w$ be the average effective dipole-dipole coupling between the $i$th residue and a water molecule. Then, with residue-water interaction energy included and possible dependence of $E_0$, $\mu _w$ and $N_w$ on $i$ ignored, the residue-water interaction energy is $E_i=\mu _wQ_iQ_wN_w+E_0\equiv \mu _wQ_i^{\prime }Q_wN_w$, where for convenience we write $Q_i^{\prime }\equiv Q_i+Q_0$ and $Q_0=E_0/(\mu _wQ_wN_w)$. A hydrophobic (hydrophilic) residue would have $E_i>0$ ($E_i<0$). If $N_i$ is the number of residues of type $i$ in a peptide, then the energy of an unfolded peptide in water is $U=\sum _iN_iE_i$. Suppose that after folding $\mathrm{\Delta }N_i$ fewer $i$th residues are exposed to water. Then the binding energy of the folded relative to the unfolded state is $\mathrm{\Delta }U=-\sum _i\mathrm{\Delta }N_iE_i$. The negative sign means that in folding, the peptide will maximize (minimize) those $\mathrm{\Delta }N_i$ whose $E_i$ are the most positive (negative), subject to the constraint of the polymeric nature of the peptide.
Relation between $`q`$ and $`Q`$. Equating $`\mathrm{\Delta }U`$ with the binding energy obtained from Eq. (1) by summing the one-body terms over all pairs we have
$$\mathrm{\Delta }U\approx c_1N_c\sum _iN_iq_i^{\prime }=-\mu _wQ_wN_w\sum _iQ_i^{\prime }\mathrm{\Delta }N_i$$
(2)
where $q_i^{\prime }\equiv q_i-q_0$, $q_0$ is a constant and $N_c$ is the average number of contacts a residue has in a folded state. Matching the $i$-dependent terms we have
$$c_1q_i^{\prime }\approx \xi _iQ_i^{\prime },\qquad \xi _i=-\mu _w(\mathrm{\Delta }N_i/N_i)(N_wQ_w/N_c).$$
(3)
Because in a folded protein proportionally more hydrophobic ($h$) residues than polar ($p$) residues will be hidden from water, one expects $\mathrm{\Delta }N_i/N_i$, and hence $\xi _i$, to have a strong residue dependence. To minimize the number of parameters we allow $\xi _i$ to take only two values, $\xi _h$ and $\xi _p$, determined by separate linear fits to the $q$'s belonging to hydrophobic and hydrophilic residues, respectively. Excluded from the fits are residues whose hydrophobicities are ambivalent - Tyr, Ala, Gly, Thr, Ser and Pro. Demanding that the two fits have the same intercepts we obtain
$$q_0=0.055,\quad Q_0=-2.9;\qquad \xi _h=0.56,\quad \xi _p=0.14$$
(4)
The linear correlation between $q$ and $\xi Q^{\prime }$ over the complete set of 20 residues - following Refs. , the first eight amino acids in Table I are taken to be hydrophobic - is 0.949, which is dramatically better than the correlation between $q$ and $Q$; see Fig. 1(a) and Table II.
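One way to implement the common-intercept constraint of the fits above is a single joint least-squares problem; a hedged sketch (the $Q$, $q$ values and hydrophobic/polar labels come from Tables I and II, which are not reproduced here, so the example numbers are made up; ambivalent residues would be dropped beforehand):

```python
import numpy as np

def shared_intercept_fit(Q, q, hydrophobic):
    """Fit q = b + xi_h*Q on hydrophobic residues and q = b + xi_p*Q on
    polar ones, with a common intercept b, via ordinary least squares."""
    h = hydrophobic.astype(float)
    A = np.column_stack([np.ones_like(Q), Q * h, Q * (1.0 - h)])
    (b, xi_h, xi_p), *_ = np.linalg.lstsq(A, q, rcond=None)
    return b, xi_h, xi_p

# Example with made-up numbers standing in for Table I entries:
Q = np.array([0.0, 0.5, 3.6, 9.5, 10.3, 4.9])
q = np.array([-0.30, -0.28, 0.05, 0.18, 0.21, 0.08])
h = np.array([True, True, False, False, False, False])
print(shared_intercept_fit(Q, q, h))
```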
The burial factor. Since on average the numbers of hydrophobic and polar residues in a protein are approximately equal and about half of all residues are buried in the core, we have $N_h\approx N_p$, $\mathrm{\Delta }(N_h+N_p)/(N_h+N_p)\approx 1/2$ and hence $\mathrm{\Delta }N_p/N_p\approx 1-\mathrm{\Delta }N_h/N_h$. From the ratios of the two $\xi $'s we thus deduce the burial factors for hydrophobic and polar residues, respectively, to be
$$\mathrm{\Delta }N_h/N_h\approx 0.80,\qquad \mathrm{\Delta }N_p/N_p\approx 0.20.$$ (5)
That is, our analysis of the MJ matrix suggests that on average four times as many hydrophobic residues are buried in the core as polar residues.
Two-body terms. We define the true two-body part of the MJ matrix to be the matrix minus the one-body and constant parts of Eq. (1): $M_{ij}-c_0-c_1(q_i+q_j)$. This two-body part is again well approximated by $c_2^{\prime }q_iq_j$, $c_2^{\prime }=-10.7$, with which it has a linear correlation of 0.832. When $c_2^{\prime }q_iq_j$ is re-expressed in terms of $Q^{\prime }$ using Eq. (3), the shift $q_0$ induces an additional one-body term such that
$$M_{ij}\approx C_2\xi _i\xi _jQ_i^{\prime }Q_j^{\prime }+c_1^{\prime }(q_i+q_j)+\mathrm{const}.$$
(6)
where $C_2=c_2^{\prime }/c_1^2=-0.423$ and $c_1^{\prime }=c_1-c_2^{\prime }q_0=5.62$. The linear correlation between $M_{ij}-c_1^{\prime }(q_i+q_j)$ and $\xi _i\xi _jQ_i^{\prime }Q_j^{\prime }$ is 0.681, see Fig. 1(b). Given that the dipole moments and $\xi _h$ and $\xi _p$ are predetermined, the first term on the right-hand-side of Eq. (6) is a one-free-parameter ($C_2$) fit to 210 pieces of "noise" in the MJ matrix. The mediocre quality of the correlation nevertheless suggests that the two-body term cannot be explained by dipole interactions alone; interactions depending on charge and polarizability may need to be included. The inclusion of such terms may cause the two-body term to deviate from the simple $qq$ form suggested in Eq. (1). Owing to its relatively small magnitude such a deviation should be tolerable to the original MJ matrix.
MJ matrix in terms of $Q^{\prime }$. Re-expressing the one-body term in Eq. (6) in terms of $Q^{\prime }$ and rationalizing notations by writing $\mu _{ij}=C_2\xi _i\xi _j$ and $\xi _i^{\prime }=\xi _ic_1^{\prime }/c_1$ we finally have
$$M_{ij}\approx \mu _{ij}Q_i^{\prime }Q_j^{\prime }+\xi _j^{\prime }Q_j^{\prime }+\xi _i^{\prime }Q_i^{\prime }+\text{const.}$$
(7)
where $\mu _{hh}=-0.13$, $\mu _{hp}=-0.032$, $\mu _{pp}=-0.0078$, $\xi _h^{\prime }=0.63$ and $\xi _p^{\prime }=0.15$. The two sides of the equation have a linear correlation of 0.922, see Fig. 1(c). Since $Q_i$ is either zero or positive, the negative values of $\mu _{ij}$ imply that the dipoles mostly succeed in causing the residues to lower their energies. That is, even in a folded state the residues appear to be sufficiently unrestricted to find optimum orientations. To the extent that the dipole moments of the side-chains are not free parameters, the expression on the right-hand-side is a four-parameter fit - $C_2$, $E_0$, $\mathrm{\Delta }N_h/N_h$ and $\mu _w$ (see below) - to the complete MJ matrix.
Residue-residue dipole coupling. By definition $\mu _{ij}\propto (\mathrm{\Delta }N_i/N_i)(\mathrm{\Delta }N_j/N_j)$. With $\mathrm{\Delta }N_i/N_i$ describing the percentage of buried residues in a folded protein, the inequalities $|\mu _{pp}|<|\mu _{hp}|<|\mu _{hh}|<|\mu _r|$ correctly take into account the dielectric property of water: the coupling between residues shielded from water is stronger than that between residues that are not. The magnitude of the weighted average of the residue-residue coupling, $\overline{\mu }_{ij}=(7\mu _{pp}+6\mu _{hp}+7\mu _{hh})/20=-0.041$, is about four times less than the bare coupling strength of $|\mu _r|=0.172$.
Water-residue coupling. We can obtain the effective water-residue coupling from the relation $\xi _i=-\mu _w(\mathrm{\Delta }N_i/N_i)(N_wQ_w/N_c)$ given earlier. Using the value 6.5 Å for the average effective diameter of a residue and the value 2 Å for the diameter of a water molecule, we estimate that a residue may have a maximum of 12 residue contacts and 57 water molecule contacts. In practice the number of contacts is encumbered by the presence of the peptide backbone and geometric constraints, such that in fact $N_c\approx 7$ [3]. We therefore scale $N_w$ down to $\approx 35$. With $Q_w=1.85D$, we deduce from Eqs. (5) and (7) that $\mu _w\approx -0.076(RT)$. The negative sign of $\mu _w$ is consistent with the notion that the presence of a dipole in a residue reduces its hydrophobicity. Taking the average water-residue distance to be 4.25 Å we expect the bare water-residue coupling to be $(6.5/4.25)^3=3.5$ times stronger than the bare residue-residue coupling. However, in an unfolded state the residues are completely exposed to water. We therefore expect the approximate relations $|\mu _{pp}|<|\mu _w|/3.5\lesssim |\mu _{hp}|\lesssim |\overline{\mu }_{ij}|<|\mu _r|$, which are satisfied.
Solvation energy, surface tension and hydrophobicity. With $\mu _w$ and $Q_0$ extracted from the data we now find the bare solvation energy to be $E_0=\mu _wQ_0Q_wN_w=14.6RT$. Although hydration is an exceedingly complex process and is not fully understood, the effective surface tension of water, or surface free energy cost to water forced to sit against a hydrophobic surface, has been estimated to be $\sigma =40$ erg/cm$^2$ . For a residue of diameter $R_0$ the free energy cost is $W=4\pi (R_0/2)^2\sigma =13RT$, which is reasonably close to the value of $E_0$. The fact that a good fit to the MJ matrix demands that $E_0$ enter $\mathrm{\Delta }U$ in Eq. (2) multiplied by $\mathrm{\Delta }N_i$ is an indication that $E_0$ needs to be a surface energy. When the water-residue dipole interaction energy is included, the total solvation energies $E_i$ of the residues then separate into groups with distinct hydrophobicities, with the seven most hydrophobic (hydrophilic) having an average solvation energy of $13.2RT$ ($-9.3RT$).
Very recently Keskin et al. re-analyzed the MJ matrix and derived the approximation (for ease of discussion the $W_i^{\prime }$ used here has an additional negative sign relative to that in ): $M_{ij}\approx \mathrm{\Delta }W_{ij}^{\prime }+W_i^{\prime }+W_j^{\prime }+\mathrm{const}.$, where the one-body term $W^{\prime }$ is essentially defined as the mean-field of $M_{ij}$ and $\mathrm{\Delta }W_{ij}^{\prime }$ is a four-parameter fit to $M_{ij}$ minus its mean-field. The analysis confirms the dominance of the one-body term in the MJ matrix. The overall fit to the MJ matrix, with a correlation of 0.99, is excellent and the fit to the two-body part is about the same as that given by the dipole picture: the correlation between $M_{ij}-W_i^{\prime }-W_j^{\prime }$ and $\mathrm{\Delta }W_{ij}^{\prime }$ is 0.67. Not surprisingly $W^{\prime }$ and $q$ are closely related. The expression $\eta c_1q+1.16$, with scale factor $\eta =1.17$, reproduces $W^{\prime }$ with a linear correlation of 0.997. The value of $\eta $ is mostly explained by the fact that the mean-field calculated from the right-hand-side of Eq. (1) is $1.22c_1(q_i+q_j)$. Incidentally, $\eta c_1=5.89$ is very close to the value of the renormalized coefficient $c_1^{\prime }=5.62$ given in Eq. (6).
In Table I are listed values for $q$, $Q$, $W^{\prime }$, $\xi Q^{\prime }$, and hydropathy scales $\mathrm{\Delta }G$ (in units of $RT$) corrected for self-solvation for the side-chains of the twenty amino acids . Recall that $\xi $ contains the burial factor (see Eq. (4)) and $Q^{\prime }$ is $Q$ shifted by an amount proportional to $E_0$ (see Eq. (3)). The pairwise linear correlations of the entries in Table I are given in column 2 of Table II. The correlation between $\xi Q^{\prime }$ and $W^{\prime }$ (and $q$) is very significantly better than that between $Q$ and $W^{\prime }$ (and $q$). The linear relations connecting the solvation energy with $\xi Q^{\prime }$, $W^{\prime }$ and $q$, namely $-E_i(\mathrm{\Delta }N_i/N_i)/N_c=\xi _iQ_i^{\prime }\approx c_1(q_i-q_0)\approx (W_i^{\prime }-W_0^{\prime })/\eta $, where $W_0^{\prime }=0.71$ is a shift, highlight the importance of taking into account the burial factor of a residue in a folded protein when interpreting the one-body terms of the MJ matrix.
The hydropathy scales shown in Table I are derived for side-chains in model peptides rather than in proteins. They include the effect of self-solvation, which reduces the hydropathies of the polar side-chains , but do not include the effect of the burial factor. This probably explains why, as seen in Table II, the $\mathrm{\Delta }G$-$q$, $\mathrm{\Delta }G$-$W^{\prime }$, $\mathrm{\Delta }G$-$Q$ and $\mathrm{\Delta }G$-$\xi Q^{\prime }$ correlations are of similar quality.
The $q$ and $W^{\prime }$ values of proline suggest it to be polar, while its $Q$, $\xi Q^{\prime }$ and $\mathrm{\Delta }G$ values say it is ambivalent or even hydrophobic. The third column in Table II shows that the correlations listed either remain unchanged or improve when proline is excluded from the linear fit. The ambiguous hydrophobicity of this residue may be related to the fact that it has a looping structure.
We summarize our interpretation of Eq. (1) being a good approximation of the MJ matrix as follows. The one-body part, or hydrophobicity (or hydropathy) energy, is made up of two parts: free energy cost to water to accommodate the residue surface, and attractive dipole interaction between residue and water. Because polar residues have large dipole moments, hydrophobic residues have small or no moments and ambivalent residues have something in between, the hydropathic/hydrophobic energy is strongly attractive, weakly attractive and strongly repulsive for polar, ambivalent and hydrophobic residues, respectively. Residue-residue dipole interactions account for a sizable portion, but not all, of the two-body part. Aside from using the given dipole moments for the residues and having two burial factors, one each for the hydrophobic and polar residues, no residue-dependent adjustments were made in deriving Eq. (7), our rendition of Eq. (1). That is, we have not attempted a detailed fit of the MJ matrix. The correlation between the dipoles of the residues and $q$ becomes unequivocal and the strengths of the dipole couplings extracted from the MJ matrix become reasonable only when the burial factors are included in the formulation. That the factor is important reveals the dynamical nature of protein folding: strengths of interactions change as the folding progresses. Protein folding is a very complicated process that depends on many details and the MJ matrix does not tell its whole story. It does however contain the most basic structural information at the molecular level of those proteins whose structures are known. The success of the present analysis in understanding the main features of the MJ matrix gives us confidence that the model used here may provide a starting point for building a true potential suitable for use in a molecular dynamical description of early folding of protein in water.
The authors thank C. Tang, G.M. Crippen, Y. Duan, M. Wortis, D.C.Y. Lu and P.G. Luan for discussions and the referee for pointing them to reference and for useful suggestions. HCL thanks the Physics Department of Simon Fraser University for hospitality in the Summer of 1998 during which part of the paper was written. This work is partly supported by grant NSC88-M-2112-008-009 from National Science Council (ROC).
# Temporally ordered collective creep and dynamic transition in the charge-density-wave conductor NbSe3
## Abstract
We have observed an unusual form of creep at low temperatures in the charge-density-wave (CDW) conductor NbSe<sub>3</sub>. This creep develops when CDW motion becomes limited by thermally-activated phase advance past individual impurities, demonstrating the importance of local pinning and related short-length-scale dynamics. Unlike in vortex lattices, elastic collective dynamics on longer length scales results in temporally ordered motion and a finite threshold field. A first-order dynamic phase transition from creep to high-velocity sliding produces “switching” in the velocity-field characteristic.
Interaction between internal degrees of freedom and disorder determines the dynamical properties of driven periodic media, a class of systems that includes vortex lattices in type II superconductors , Wigner crystals , magnetic bubble arrays , and charge-density waves (CDWs), the low-temperature phase of quasi-one-dimensional conductors . While the interplay between quenched disorder and elastic deformations is relatively well understood, a complete description of the role of thermal disorder and plastic deformations, which may result in disordered dynamical phases such as driven glass, smectic and liquid states , has not yet been achieved.
CDWs have long been regarded as a prototypical system for the study of many-degree-of-freedom dynamics, both because of their relative theoretical simplicity and because CDW materials like NbSe<sub>3</sub> exhibit collective phenomena with remarkable clarity. A CDW consists of coupled modulations of the electronic density $`n=n_0+n_1\mathrm{cos}[Q_cx+\varphi (x)]`$ and of the positions of the lattice ions . Applied electric fields $`E`$ greater than a threshold field $`E_T`$ cause the CDW to depin from impurities and slide relative to the host lattice, resulting in a non-linear dc current density $`j_c`$ proportional to the CDW’s sliding velocity. The impurities cause the CDW to move nonuniformly in both space and time, and the elastic collective dynamics leads to oscillations (“narrow-band noise”) in $`j_c(t)`$. The frequency $`\nu `$ of these oscillations is proportional to the dc component of $`j_c`$, and their $`Q=\nu /\mathrm{\Delta }\nu `$ can exceed 30,000 in high-quality NbSe<sub>3</sub> crystals.
Despite these simplifying features, most aspects of CDW transport at low temperatures remain poorly understood. At temperatures $T>2T_P/3$, $j_c$ above $E_T$ is a smooth, asymptotically linear function of $E$. However, at low temperatures $j_c(E)$ changes drastically, as illustrated in Fig. 1. CDW conduction still begins at $E_T$ but $j_c$ is small and freezes out with decreasing temperature for fields less than a second characteristic field $E_T^{*}>E_T$. At $E_T^{*}$, $j_c$ increases by several orders of magnitude to a more nearly temperature-independent value, often by an abrupt, hysteretic "switch." Similar behavior is observed in all widely studied CDW materials and is thus a fundamental aspect of CDW dynamics .
We have characterized the sliding CDW's transport and structural properties in the low-temperature regime of extremely high quality NbSe$_3$ crystals. For $E_T<E<E_T^{*}$, we find that $j_c$ is activated in temperature and increases exponentially with field. Contrary to previous observations in driven periodic media, this creep-like collective motion exhibits temporal order. Our results illuminate the relation between local and collective pinning and indicate that dynamics on lengths much shorter than the Fukuyama-Lee-Rice (FLR) length — neglected in most theoretical treatments — play a central role. They imply revised interpretations for "switching" at $E_T^{*}$, the low-frequency dielectric response, low-field relaxation, and nearly every other aspect of the CDW response at low temperatures.
High purity ($r_R\approx 400$) whisker-like NbSe$_3$ single crystals with typical cross-sectional dimensions of $\sim 3$ by $\sim 0.8$ $\mu $m were mounted on arrays of 2 $\mu $m wide gold-topped chromium wires . The total current density $j_{tot}$ is a sum of the CDW and single-particle current densities, $j_c$ and $j_s$. $j_c$ is orders of magnitude smaller than $j_s$ in the range $E_T<E<E_T^{*}$ (except at relatively high temperatures and very close to $E_T^{*}$ ). $j_c(E)$ cannot be directly measured and its form in the low-velocity branch of NbSe$_3$ has not previously been determined. As shown in the inset to Figure 2, our high-quality crystals exhibit voltage oscillations with Q's as large as 130 in this regime. Consequently, we are able to determine $j_c$ by measuring the oscillation frequency $\nu =(Q_c/2\pi en_c)j_c$, where $e$ is the electronic charge and $n_c$ is the condensed carrier density. $j_c(E)$ was independently estimated by alternating the applied current's direction and measuring resistance transients $R(t)$ associated with transients in the distribution of CDW strain $ϵ(x)=(1/Q_c)(\partial \varphi /\partial x)$ between the current contacts .
Figure 2 shows the CDW current density $j_c(E)=\nu \times 0.32$ pA/$\mu $m$^2$Hz calculated from the measured oscillation frequency $\nu (E)$ for $E_T<E<E_T^{*}$ at four temperatures. The CDW moves extremely slowly throughout this field and temperature range: the smallest measured $\nu $ values at $T=20.7$ K correspond to CDW motion of roughly one wavelength or 14 Å per second and to $j_c\approx 10^{-9}j_{tot}$. Between $T\approx 40$ K and $T\approx 20$ K, $j_c$ at fixed $E<E_T^{*}$ is temperature activated, decreasing by roughly 7 orders of magnitude. $j_c$ jumps abruptly at $E_T^{*}$, with $j_c(E=1.1E_T^{*})/j_c(E=0.9E_T^{*})$ increasing from $\sim 10^3$ to $\sim 10^6$ as $T$ decreases from 28 to 20 K.
The current density $j_c\propto \nu $ can be fit by a modified form for thermal creep
$$j_c(E,T)=\sigma _0[E-E_T]\mathrm{exp}\left[-\frac{T_0}{T}\right]\mathrm{exp}\left[\alpha \frac{E}{T}\right].$$
(1)
where the $[E-E_T]$ term describes the fact that the current drops to zero at a threshold $E_T$ that remains large even at high temperatures. The solid lines in Fig. 2 indicate a fit with $T_0=505$ K, $\alpha =136$ K V$^{-1}$ cm, and $\sigma _0=350$ $\mathrm{\Omega }^{-1}\mu $m$^{-1}$. The value of $T_0$ is insensitive to the assumed field dependence and corresponds to 0.6 times the single-particle gap $2\mathrm{\Delta }$ , consistent with measurements of delayed conduction and of $\sigma _c$ near $E_T^{*}$ above 30 K . Although creep is observed in other systems, the coherent oscillations imply that the creep in this case is highly unusual: it exhibits temporal order.
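For orientation, Eq. (1) in code form (our own sketch, not the authors' fitting code; the threshold $E_T$ is sample dependent and must be supplied, the example field is an arbitrary value between the two thresholds, and units follow the text: $E$ in V/cm, $T$ in K):

```python
import numpy as np

def j_c(E, T, E_T, sigma0=350.0, T0=505.0, alpha=136.0):
    """Creep law of Eq. (1): sigma0*(E - E_T)*exp(-T0/T)*exp(alpha*E/T).

    T0 = 505 K and alpha = 136 K V^-1 cm are the quoted fit values;
    returns zero below threshold.
    """
    drive = np.clip(np.asarray(E, dtype=float) - E_T, 0.0, None)
    return sigma0 * drive * np.exp(-T0 / T) * np.exp(alpha * np.asarray(E) / T)

# Illustrates the strong activated freeze-out with decreasing temperature
# at fixed field (several orders of magnitude between ~40 K and ~20 K):
E = 0.5                      # V/cm, illustrative
print(j_c(E, 40.0, E_T=0.1) / j_c(E, 20.0, E_T=0.1))
```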
Figure 3 shows $j_c(E)$ at $T=20.5$ K obtained from transient measurements . These data agree closely with $j_c(E)$ deduced from $\nu (E)$ over nearly three decades in $j_c$ . Combining the two measurements yields $j_c/\nu =0.22$ pA/$\mu $m$^2$Hz, consistent with the expected value of $j_c/\nu =0.32$ pA/$\mu $m$^2$Hz within the factor-of-two uncertainty in the value of $j_c$ determined from transient measurements. This rules out significant filamentary conduction, observed in the low temperature regime of other CDW materials, and implies that the entire crystal cross section or at least a significant fraction of it is involved in coherent conduction.
Figure 4 shows the results of high-resolution x-ray diffraction measurements of the CDW's transverse structure versus electric field. The CDW creates superlattice peaks in the diffraction pattern, and the half-width of each peak is inversely related to the CDW phase-phase correlation length. For $E<E_T$, the resolution-corrected inverse half-width is $l\approx 4100$ Å, comparable to the crystal dimension in this direction. For $E>E_T$, $l$ decreases monotonically with increasing $E$, remaining greater than $\sim 2500$ Å for $E_T<E<E_T^{*}$. $l$ does not show any abrupt change at $E_T^{*}$ despite the several orders-of-magnitude increase in $j_c$ there. Similar results were obtained in other directions perpendicular to $𝐐_𝐜$ (e.g. \[1 0 3\] and \[1 0 2̄\]) and at higher temperatures.
Several different models have been proposed to account for the low-temperature properties of CDW conductors. In K$_{0.3}$MoO$_3$ and TaS$_3$, whose Fermi surfaces are completely gapped by CDW formation, the activation energies for the single-particle conductivity $\sigma _s$ and the CDW conductivity $\sigma _c$ in the low-velocity branch are both comparable to the CDW gap so that $\sigma _c(T)\propto \sigma _s(T)$ . Motivated by this observation, Littlewood suggested that dissipation caused by single-particle screening of CDW deformations limits CDW motion in the low-velocity branch, and that an abrupt, hysteretic transition to the high-velocity branch occurs at a frequency $\nu $ comparable to the dielectric relaxation frequency $\nu _1\propto \sigma _s$ when this screening becomes ineffective. The predicted value of $\nu $ at the discontinuity for K$_{0.3}$MoO$_3$ and TaS$_3$ is four orders of magnitude too large , and for NbSe$_3$ at $T=20.7$ K $\nu $ is 13 orders of magnitude too large. Levy et al. showed that a related model exhibits a hysteretic transition from the pinned state to a fast sliding state when $\sigma _s$ is small even if high-frequency screening effects are neglected. Neither model can explain the low-temperature CDW properties of partially-gapped NbSe$_3$, for which $\sigma _s$ remains metallic and increases with decreasing temperature below 50 K.
Various forms of CDW plasticity including phase slip at isolated defects and shear between two-dimensional CDW sheets have been suggested to account for the properties of the low-velocity branch and the transition at $`E_T^{}`$. Our observation of highly coherent oscillations in high-quality crystals and earlier results rule out models based on slip at rare isolated defects and contacts, and our x-ray measurements rule out the form of shear plasticity discussed in Ref. .
Brazovskii and Larkin have focused on the CDW's local interaction with defects. At low temperatures CDW phase advance past rare defects occurs via thermally-activated soliton generation, and motion becomes much more rapid at large fields when the effective barrier to soliton generation vanishes. This interpretation has appealing features, but the suggested form for the $j_c(E)$ relation at low temperatures does not reproduce the two branches separated by an abrupt hysteretic transition or the field dependence in either branch observed experimentally in NbSe$_3$. Furthermore, the predicted $E_T^{*}$ is determined by the soliton energy and should be independent of crystal size. Experimentally, in NbSe$_3$ both $E_T$ and $E_T^{*}$ vary as $1/t$ for crystal thicknesses $t$ less than $\sim 20$ $\mu $m . The thickness dependence of $E_T$ results because transverse CDW correlations are limited by $t$ so that collective pinning is two-dimensional . Consequently, the thickness dependence of $E_T^{*}$ implies that it, too, is determined by collective effects.
CDW creep of a fundamentally different character is observed in thin NbSe$_3$ crystals at high temperatures . Near $T_P$, $E_T$ is rounded, nonlinear conduction can extend to near $E=0$, and highly coherent oscillations below the nominal $E_T$ are not observed. This incoherent creep occurs when $k_BT$ approaches the collective pinning energy ($\propto \mathrm{\Delta }(T)^2t$) of the phase-correlated FLR domains, which in NbSe$_3$ have micrometer dimensions. The temporally-ordered creep observed in relatively thick crystals at low temperatures above a sharp threshold $E_T$ must involve barriers that are much smaller than those of collective pinning and that are not rare, and a length scale that is much smaller than that of the collective dynamics responsible for the narrow-band noise.
Motivated by earlier ideas , we suggest that the low-velocity branch develops when CDW motion becomes limited by thermally-activated phase advance by $\sim 2\pi $ past individual impurities. Although collective pinning is weak , the phase of the $Q_c=2k_F$ oscillations is fixed at each impurity so that phase advance requires CDW amplitude collapse and a finite barrier $\sim \mathrm{\Delta }$ . Collective dynamics within volumes containing enormous numbers of impurities (set by the FLR length) then generates the finite threshold $E_T$ and coherent oscillations, as in the high-temperature regime. Unlike in vortex lattices, long-length-scale CDW dynamics is largely elastic and thus retains temporal order even though the short-length-scale dynamics is stochastic.
The $[E-E_T]$ prefactor and the remaining terms in Eq. (1) follow naturally from this combination of long and short length-scale processes. The measured barrier $T_0$ is consistent with the expected pinning barrier per impurity of $\sim \mathrm{\Delta }$ . An applied electric field should reduce this barrier by $\sim en_cV\lambda E$, and using a condensate density $n_c=2\times 10^{21}$ cm$^{-3}$ , a CDW wavelength $\lambda =14$ Å and the measured $\alpha $ value yields a volume $V$ involved in each thermally-activated event of $V\approx 4.2\times 10^{-17}$ cm$^3$. Using the scale factor expected for typical impurities , the bulk residual resistance ratio of $\sim 400$ for our crystals corresponds to a concentration $n_i\approx 2.5\times 10^{16}$ cm$^{-3}$. The volume per impurity $1/n_i\approx 4\times 10^{-17}$ cm$^3$ is thus in excellent agreement with $V$ deduced from creep measurements.
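The arithmetic behind this comparison is compact: equating the barrier reduction $en_cV\lambda E$, expressed in temperature units, with $\alpha E$ gives $V=\alpha k_B/(en_c\lambda )$. In SI units:

```python
# Activation volume implied by the measured alpha (SI units)
k_B = 1.381e-23            # J/K
e = 1.602e-19              # C
alpha = 136.0 * 1e-2       # K cm/V -> K m/V
n_c = 2e21 * 1e6           # cm^-3 -> m^-3
lam = 14e-10               # m, CDW wavelength

V = alpha * k_B / (e * n_c * lam)   # activation volume in m^3
print(V * 1e6)                      # ~4e-17 cm^3, as quoted in the text
```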
The present experiments together with those of Ref. rule out all previous explanations of the "switching" between low and high velocity branches at $E_T^{*}$ in NbSe$_3$. We suggest that switching occurs via a first-order dynamic phase transition. The long-length-scale dynamics exhibits temporal order in both branches, but in the high-velocity branch dynamic fluctuations produced as the CDW moves past impurities may become more important than thermal fluctuations in overcoming impurity barriers . The transition's abruptness, hysteresis, and temperature dependence shown in Fig. 1, the CDW's tendency to fragment near the transition into distinct conducting regions , and the field and temperature-dependent time delays required for the transition's completion are all consistent with this explanation.
Finally, we note that local temporal order has very recently been observed in the creep regime of a vortex lattice .
We thank C.L. Henley, M. B. Weissman, M. C. Marchetti, A. A. Middleton, and S. Brazovskii for fruitful discussions. This work was supported by NSF Grants DMR97-05433 and DMR-98-01792. S.G.L. acknowledges additional support from NSERC. The x-ray data were collected using beam line X20A at the National Synchrotron Light Source (NSLS). Sample holders were prepared at the Cornell Nanofabrication Facility.
# Star Formation in Las Campanas Compact Groups
## 1 Introduction
Perhaps over half of all galaxies lie within groups containing 3–20 members (Tully (1987)); yet, due to the difficulty of discerning them from the field, groups of galaxies are, as a whole, not as well studied as larger galaxy systems. Compact groups (CGs), however, defined by their small number of members ($<10$), their compactness (typical intra-group separations of a galaxy diameter or less), and their relative isolation (intra-group separations $\ll $ group-field separations) are more readily identifiable.
Recently, Tucker et al. (1999) produced a catalogue of loose groups (LGs) from the Las Campanas Redshift Survey (LCRS; Shectman et al. (1996)), using an adaptive friends-of-friends algorithm (Ramella et al. (1989)). Intrigued by the work of Barton et al. (1996), who created a CG catalogue from the Center for Astrophysics (CfA) Redshift Survey and found that most of their CGs were embedded in dense environments, we produced a similar catalogue from the much deeper LCRS (Allam & Tucker (1998), Tucker et al. (1999)). For extracting group catalogues, redshift surveys have an advantage over sky surveys since redshift adds a third dimension of constraint: group catalogues based upon redshift surveys tend to have far fewer chance alignments than do those based upon sky surveys (e.g., Hickson (1982), 1993; HCG). We apply a standard friends-of-friends algorithm to extract a sample of CG systems in the LCRS (a minimal code sketch of the linking criteria follows the list below). Our definition for these CGs is as follows:
* $`\geq `$ 3 galaxies,
* compact (projected nearest-neighbor inter-galaxy separations of $`D_L\leq `$ 50$`h^{-1}`$kpc, or $`\sim `$ 1 galaxy diameter), and
* isolated in redshift (nearest-neighbor inter-galaxy velocity differences $`V_L\leq `$ 1000 km s<sup>-1</sup>).
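A minimal friends-of-friends linker implementing these two criteria might look as follows (an illustrative sketch only: the function and variable names are ours, and the precomputed separation and velocity-difference matrices stand in for the actual survey geometry):

```python
import numpy as np

D_LINK = 50.0     # projected linking length, h^-1 kpc (from the CG definition above)
V_LINK = 1000.0   # velocity linking length, km/s

def friends_of_friends(sep_kpc, dv_kms):
    """Link galaxies whose projected separation AND velocity difference both
    fall below the linking lengths; inputs are (N, N) symmetric arrays.
    Returns candidate compact groups with >= 3 members."""
    n = sep_kpc.shape[0]
    parent = list(range(n))                # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    linked = (sep_kpc <= D_LINK) & (dv_kms <= V_LINK)
    for i in range(n):
        for j in range(i + 1, n):
            if linked[i, j]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj        # merge the two friendship chains

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= 3]
```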
The LCRS, optimized for efficient observing with a fiber-fed multi-object spectrograph, has a 55 arcsec fiber separation limit. This has prevented the observation of spectra for all galaxies which were members of close pairs; so, many galaxies in CG environments are missing from the LCRS redshift catalogue. We have partially circumvented this problem by assigning each of the $`\sim `$1,000 “missing” LCRS galaxies the redshift of its nearest neighbor and convolving it with a Gaussian of $`\sigma `$=200 km s<sup>-1</sup>, a value which is similar to the typical median velocity dispersion of HCGs (Hickson 1982) and of LCRS LGs (Tucker et al. 1999); hence, on the small angular scales necessary for compact group selection, the LCRS falls somewhere between a 2D sky survey and a fully 3D redshift survey. The resulting catalogue contains 76 CGs having 3 or more members, and evidence for interactions in many of these CGs (in the form of tidal tails, bridges, etc.; see Allam & Tucker (1998), Allam et al. (1999)) confirms that they are indeed, for the most part, physical systems. All the CGs contain at least one redshift; 23 contain 2 or more. (Unfortunately, only one LCRS CG has redshifts for all its members.) The innate physical properties of LCRS CGs — such as typical group richnesses and densities — are similar to those of the Barton et al. catalogue, which in turn are similar to those of the HCG catalogue, especially for CGs with 4 or more members. The median redshift for LCRS CGs, however, is $`\sim `$0.08, more than twice that of either of the other two CG catalogues. As with the HCG and Barton et al. samples, LCRS CGs represent some of the densest concentrations of galaxies known and thus provide ideal laboratories for studying the effect of strong interaction on the morphology and stellar content of galaxies. Details of the general properties of these CGs and of how they were extracted from the LCRS will be discussed in Allam et al. (1999); here, we will focus on the star formation properties in LCRS CG environments.
It is well known that direct interactions between galaxies tend to increase their star formation rate (SFR) (Larson & Tinsley (1978); Bushouse (1987); Kennicutt et al. (1987)). LCRS CGs represent an environment where interactions, tidally triggered activity, and galaxy mergers are expected to be at their highest rate of occurrence. Therefore, if no other factors dominate, we may expect a global enhancement in the SFR of LCRS CG galaxies. In order to test this hypothesis, we will use the equivalent width (EW) of the \[O II\] $`\lambda `$ 3727 emission line (Colless et al. (1990), Kennicutt (1992)) as a star formation indicator.
The paper is organized as follows: § 2 describes the sample under investigation, § 3 discusses the sample’s spectroscopic properties, and § 4 relates the sample’s morphological features; finally, in § 5, we summarize our main conclusions.
## 2 The Samples
As a first step towards clarifying the effect of high-density environments on the SFR in galaxies, it is necessary to characterize the SFR of galaxies in more isolated environments. For that reason, a sample of 253 CG galaxies, a sample of 7621 LG galaxies, and a sample of 13452 field galaxies have been selected from the LCRS. Particular care was taken in order to obtain a loose group sample in which no galaxies from CGs were included. Further, galaxies from both LGs and CGs were excluded from the field sample. Our goal is to study environmental factors affecting the SFR of galaxies by taking advantage of the very large and homogeneous data set available from the LCRS.
Before we move on, however, a concern must be addressed: could the fiber separation effect — the fact that, in high-density regions, the fraction of LCRS galaxies with spectra is lower than that in low-density regions — bias our analysis? To first order, this concern is unimportant, since we are comparing the fraction of starbursts (see § 3) against the total sample of galaxies with spectra — not against the total sample of galaxies both with and without spectra. Furthermore, the galaxies removed due to the fiber size were removed blindly — i.e. with no regard to their star formation properties or morphological type. On the other hand, uncertainties in group membership due to the fiber separation effect can obscure the boundary between low- and high-density regimes, possibly diluting the differences in the observed properties of these environments. In other words, any environmental effects we detect would likely be even stronger in an uncontaminated sample.
## 3 Distribution of \[O II\] Equivalent Widths
Several works have used EW(O II) $`\lambda `$3727 as a star formation index for distant galaxies (Colless et al. (1990), Kennicutt (1992)). We have used automatically measured rest-frame LCRS EW(O II)’s, which have a mean error of 2.2 Å (Hashimoto et al. (1998)). Figure 1 shows the distribution of the EW(O II) of LCRS galaxies in CGs, in LGs, and in the field. A formal $`\chi ^2`$ test indicates that the distribution for CGs differs from that for LGs at the 99.99965% confidence level, and from that for field galaxies at the 99.99951% confidence level. (These very high formal confidence levels are due partly to the large samples involved and partly to the large differences among these samples for the smallest bin.)
Following Hashimoto et al. (1998), we classify the emission line strength as follows: NEM (no emission), for which EW$`<`$5$`\AA `$; WEM (weak emission), for which 5$`\AA `$$`\leq `$EW$`<`$20$`\AA `$; and SEM (strong emission), for which EW$`\geq `$20$`\AA `$. The WEM class contains mostly normal galaxies, where star formation is governed by internal factors such as gas content and disk kinematics. The SEM class contains mainly starburst galaxies, where star formation is due to interaction. Table 1 presents the frequency of EW(O II) for galaxies in different environments. The variations in the frequency of the SEM class may reflect environmental variations in galaxy-galaxy interaction rates.
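In code, the classification and a comparison of class frequencies between two samples could be sketched as follows (illustrative only; the input arrays and the use of scipy are our own choices, not the actual LCRS pipeline):

```python
from scipy.stats import chi2_contingency

def ew_class(ew):
    """NEM / WEM / SEM classification of EW(O II); thresholds in Angstroms."""
    if ew < 5.0:
        return "NEM"     # no emission
    elif ew < 20.0:
        return "WEM"     # weak emission: mostly normal star formation
    return "SEM"         # strong emission: mostly starbursts

def compare_samples(ew_a, ew_b):
    """Chi-square test on the NEM/WEM/SEM frequency tables of two samples."""
    classes = ("NEM", "WEM", "SEM")
    table = [[sum(ew_class(e) == c for e in sample) for c in classes]
             for sample in (ew_a, ew_b)]
    chi2, p_value, dof, _ = chi2_contingency(table)
    return chi2, p_value
```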
Note that the fraction of LG galaxies showing a normal (WEM) SFR is only three-quarters that for the field galaxies, and the fraction of LG galaxies showing starburst (SEM) activity is only two-thirds that in the field. For CG galaxies, the ratios are more severe: the fraction of CG galaxies with normal SFR is only two-thirds that for the field galaxies, and the fraction of CG galaxies which are star-bursting is only half that of the field, indicating that the SFR in high density environments is generally weaker than in the field.
## 4 The Concentration Index $`C`$ of LCRS galaxies
Although the SFR in high density environments is, on average, depressed relative to that in the field, much of this effect might be due merely to differences in the average morphological mix. After all, spirals, which are more prevalent in the field, tend to have higher average SFRs than do ellipticals. To test this possibility, we have made use of Hashimoto et al. (1998)’s measurement of the concentration index, $`C`$, for LCRS galaxies as a measure of the morphological types of the galaxies in our sample. The $`C`$ index represents the intensity-weighted second moment of a galaxy; it compares the flux between specified inner and outer isophotes of a galaxy to indicate the degree of light concentration. As such, the $`C`$ index is related to the Hubble type (Abraham et al. (1994)), where late/irregular type galaxies have smaller $`C`$ values. The total number of galaxies in our sample with a measured $`C`$ index is 12901. The mean and median $`C`$ index is given for each of the different galaxy environments in Table 2.
The $`C`$ distribution of CG galaxies is shown in Figs. 2 & 3. A KS test indicates that the CG galaxies are drawn from the same morphological parent population as the LG galaxies with a probability of 20%; the probability that CG and field galaxies have the same morphological mix is only 0.2%. Clearly, the distribution of CG galaxies is skewed toward early types (large $`C`$’s).
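The morphological-mix comparison reduces to a two-sample KS test on the $`C`$ distributions (again a sketch with placeholder inputs):

```python
from scipy.stats import ks_2samp

def same_parent_population(c_sample_a, c_sample_b):
    """Two-sample Kolmogorov-Smirnov test on concentration-index distributions;
    a small p-value means a common parent population is unlikely."""
    statistic, p_value = ks_2samp(c_sample_a, c_sample_b)
    return statistic, p_value
```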
In Fig. 4, the distribution of EW(O II) vs. $`C`$ index is shown for LCRS galaxies in the different environments. The relation between the mean $`C`$ index, $`<`$$`C`$$`>`$, and the mean EW(O II), $`<`$EW(O II)$`>`$, is presented in Fig. 5. Note that $`<`$EW(O II)$`>`$ increases smoothly with decreasing $`<`$$`C`$$`>`$ for LG and field galaxies, paralleling the relation between Hubble type and EW(O II) (Kennicutt (1992)). Although much noisier, the same relation holds basically true for CG galaxies, too. We must note, however, that the latest-type (the smallest $`C`$ bin) CG galaxies show a significant deficit of star formation — perhaps only one-half to one-third that of field galaxies of this morphology. Therefore, it appears that not all the differences between the average star formation properties of CGs, LGs, and the field are due merely to morphological mix. Some appear to be due to the dampening of star formation within late-type CG galaxies.
## 5 Conclusion
The star formation histories of galaxies in CGs can provide insight into the environmental factors that influence the evolution of galaxies. One approach is to examine the spectra of galaxies for evidence of ongoing star formation or of a young stellar population. We can then compare the fraction of compact group galaxies with recent star formation with the fraction from loose groups and the field.
We have done this by making use of a new catalogue of CGs, based upon the LCRS, which contains 253 galaxies in 76 CGs. To clarify whether interaction produces enhanced star formation in LCRS CGs, they have been compared to carefully selected samples of LCRS LG and field galaxies. In all, a sample of 21326 LCRS galaxies in the three different environments was employed.
We compared the SFR based on the strength of the emission line EW(O II) for LCRS CGs, LGs, and field galaxies: we found that the fraction of starbursts for CG members is roughly half that for the field, whereas for LG galaxies it is roughly two-thirds that for the field. Also we found that a normal galaxy SFR occurs for LCRS CG galaxies at roughly two-thirds the rate for the field, whereas for LG galaxies this rate is three-fourths that for the field. This means that, on average, the star formation in high density environments is depressed with respect to the field.
Much of this effect can be attributed to the different morphological mixes associated with low and high density environments: when we compared the distribution of the concentration index $`C`$ of galaxies in CGs, in LGs, and in the field, we found the distribution of CGs galaxies to be definitely skewed towards early morphological types (large $`C`$ index), which generally tend to have relatively low SFRs. Nonetheless, when we then compared the SFR vs. the $`C`$ index for CG, for LG, and for field galaxies, we found that the SFR for CGs appears to be deficient for very late morphological types (small $`C`$ index) — in fact, the SFR for these late-type CG galaxies is only one-half to one-third the SFR for field spirals.
It is clear from these findings that CG environments tend to depress star formation, partly due to a relative overabundance of early-type galaxies and partly due to some mechanism that dampens star formation within late-type CG spirals. Note that results from other sources — in particular, the HCG catalogue and Zabludoff & Mulchaey’s (1998) sample of poor groups — lend support to this view. For example, both of these other samples have been shown to have galaxy populations skewed toward early types (Hickson 1982, Zabludoff & Mulchaey 1998). More interesting, however, is the growing body of evidence, both in the far-infrared (Allam 1998) and in H$`\alpha `$ (Iglesias-Páramo & Vílchez 1999), that the global star formation rates within HCGs are, on average, not enhanced relative to field samples of similar morphological mix. Indeed, Iglesias-Páramo & Vílchez even note a marginally significant locus of HCG spiral galaxies of particularly low H$`\alpha `$ emission in their Fig. 4; these HCG spirals may correspond to our LCRS CG sample of low-SFR late-type galaxies.
Therefore, our initial hypothesis — that interaction-induced starbursts dominate the global SFR in LCRS CGs — fails. Although starbursts are no doubt important, other factors prevail to yield a net depression in the SFR in CG environments. Much of this effect is merely due to the high fraction of early-type galaxies in CGs, but at least some of it is likely due to dampened activity in late-type galaxies; this second effect indicates that gas-stripping mechanisms may play a role in CG environments.
We thank the referee for many useful comments. This work was supported by the U.S. Department of Energy under contract No. DE-AC02-76CH03000. HL acknowledges support provided by NASA through Hubble Fellowship grant #HF-01110.01-98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555.
# Comment on “Simple Measure for Complexity”
## Abstract
We critique the measure of complexity introduced by Shiner, Davison, and Landsberg in Ref. . In particular, we point out that it is over-universal, in the sense that it has the same dependence on disorder for structurally distinct systems. We then give counterexamples to the claim that complexity is synonymous with being out of equilibrium: equilibrium systems can be structurally complex and nonequilibrium systems can be structurally simple. We also correct a misinterpretation of a result given by two of the present authors in Ref. .
Santa Fe Institute Working Paper 99-06-040
In Ref. , Shiner, Davison, and Landsberg introduce a two-parameter family $`\mathrm{\Gamma }_{\alpha \beta }`$ of complexity measures:
$$\mathrm{\Gamma }_{\alpha \beta }\equiv \mathrm{\Delta }^\alpha (1-\mathrm{\Delta })^\beta ,$$
(1)
where
$$\mathrm{\Delta }\equiv \frac{S}{S_{\mathrm{max}}}.$$
(2)
The quantity $`\mathrm{\Delta }`$ is called the “disorder”, $`S`$ is the Boltzmann-Gibbs-Shannon entropy of the system, and $`S_{\mathrm{max}}`$ its maximum possible entropy—taken to be equal to the equilibrium thermodynamic entropy. For $`\alpha ,\beta >0`$, $`\mathrm{\Gamma }_{\alpha \beta }`$ satisfies the widely accepted “one-hump” criterion for statistical complexity measures—the requirement that any such measure be small for both highly ordered and highly disordered systems . The approach to complexity measures taken by Shiner, Davison, and Landsberg is similar to that of López-Ruiz, Mancini, and Calbet . In both Refs. and the authors obtain a measure of complexity satisfying the one-hump criterion by multiplying a measure of “order” by a measure of “disorder”.
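For reference, a minimal numerical sketch of Eq. (1) (our own illustration): the measure vanishes at $`\mathrm{\Delta }=0`$ and $`\mathrm{\Delta }=1`$ and, for $`\alpha =\beta =1`$, peaks at $`\mathrm{\Delta }=1/2`$; note that it is a function of the disorder alone, a point we return to below.

```python
import numpy as np

def gamma(delta, alpha=1.0, beta=1.0):
    """Shiner-Davison-Landsberg measure, Eq. (1): one hump as a function of disorder."""
    return delta**alpha * (1.0 - delta)**beta

delta = np.linspace(0.0, 1.0, 11)
print(gamma(delta))   # zero at both endpoints, maximum 0.25 at delta = 0.5
# Any two systems sharing the same disorder get the same "complexity",
# regardless of their structure:
print(gamma(0.3))     # 0.21, whatever the system behind delta = 0.3 may be
```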
We welcome this addition to the literature on complexity measures and are pleased to see a variety of complexity measures compared and examined critically. However, there are several aspects of Ref. upon which we would like to comment.
First, despite satisfying the one-hump criterion, it is not clear that $`\mathrm{\Gamma }_{\alpha \beta }`$ is a measure of complexity. $`\mathrm{\Gamma }_{\alpha \beta }`$ is a quadratic function of a measure of distance from thermodynamic equilibrium, as the authors note on p. 1461. This has three consequences:
1. As pointed out in Ref. , this type of complexity measure is over-universal in the sense that it has the same dependence on disorder for structurally distinct systems. Eq. (1) makes it clear that, despite the claims of Shiner et al. to the contrary, all systems with the same disorder $`\mathrm{\Delta }`$ have the same $`\mathrm{\Gamma }_{\alpha \beta }`$.
2. Since $`S_{\mathrm{max}}`$ is taken to be the equilibrium entropy of the system, $`\mathrm{\Gamma }_{\alpha \beta }`$ vanishes for all equilibrium systems: “ ‘Complexity’ vanishes … if the system is at equilibrium” \[1, p. 1461\]. Because of this, $`\mathrm{\Gamma }_{\alpha \beta }`$ does not distinguish between two-dimensional Ising systems at low temperature, high temperature, or the critical temperature. All of these systems are at equilibrium and hence have vanishing $`\mathrm{\Gamma }_{\alpha \beta }`$. However, they display strikingly different degrees of structure and organization. Nor does $`\mathrm{\Gamma }_{\alpha \beta }`$ distinguish between the many different kinds of organization observed in equilibrium —between, say, ideal gases, the long-range ferromagnetic order of low-temperature Ising systems, the orientational and spatial order of the many different liquid crystal phases , and the intricate structures formed by amphiphilic systems . All of these systems are in equilibrium, but they (presumably) have very different complexities.
3. We have just seen that equilibrium should not be taken to indicate an absence of complexity. Conversely, not all systems out of equilibrium are complex. For example, consider a paramagnet, a collection of two-state spins that are not coupled. If this system is pumped so that it’s out of equilibrium, a larger percentage of the spins will be in their higher energy states. Nevertheless, there is still no spatial structure or ordering in the system; the spins are still completely uncorrelated. However, the complexity measure of Shiner et al. will be nonzero for this very simple system. While $`\mathrm{\Gamma }_{\alpha \beta }`$ vanishes for systems at “maximal distance from equilibrium” Ref. \[1, p. 1461\], all other systems displaced from equilibrium have non-vanishing complexity by virtue of the $`1\mathrm{\Delta }`$ term in Eq. (1). It does not seem reasonable to us to require that any system partially out of equilibrium have positive complexity.
In summary, then, we argue that whether or not a system is in equilibrium in and of itself says little about the system’s structure, pattern, organization, or symmetries. Equilibrium systems can be complex, nonequilibrium systems can be simple, and vice versa. Since $`\mathrm{\Gamma }_{\alpha \beta }`$ is defined in terms of a “distance from equilibrium” $`1\mathrm{\Delta }`$, we feel that it cannot capture structural complexity.
Second, we are confused by Ref. ’s calculation of $`\mathrm{\Gamma }_{11}`$ for equilibrium Ising systems on p. 1462. If the system is at equilibrium, then the disequilibrium term $`1\mathrm{\Delta }`$ should vanish, leading to a vanishing $`\mathrm{\Gamma }_{11}`$. Perhaps the authors are using a uniform distribution rather than the thermodynamic equilibrium distribution in their calculation of $`S_{\mathrm{max}}`$.
Third, Ref. appears to have misinterpreted our earlier work on the statistical complexity of one-dimensional spin systems . On p. 1462, Ref. identifies the statistical complexity $`C_\mu `$ with zero-coupling ($`J=0`$) disorder $`\mathrm{\Delta }`$. At a minimum, this interpretation is not consistent dimensionally, since $`C_\mu `$ has the units of entropy (bits), while $`\mathrm{\Delta }`$ is a dimensionless ratio. More crucially, however, Ref. conflates the definition of $`C_\mu `$, which does not make $`C_\mu `$ a function solely of the system’s entropy, with a particular equation for $`C_\mu `$ (Eq. (8) of Ref. ) correct within a strictly delimited range of validity . Further, Ref. draws an inaccurate conclusion based on that equation. For nearest-neighbor Ising systems Refs. and show that $`C_\mu =H(1)`$, the entropy of spin blocks of length one. Contrary to the statement in Ref. , $`H(1)`$ is not the same as the entropy of noninteracting spins—i.e., of paramagnetic spin systems, those with $`J=0`$.
Finally, Ref. states that thermodynamic depth belongs to the family of complexity measures that are single-humped functions of disorder. However, two of us recently pointed out that thermodynamic depth is an increasing function of disorder .
In summary, we have argued here and elsewhere that a useful role for statistical complexity measures is to capture the structures—patterns, organization, regularities, symmetries—intrinsic to a process. Ref. emphasizes that defining such measures solely in terms of the one-hump criterion—say, by multiplying “disorder” by “one minus disorder”—is insufficient to this task. Introducing an arbitrary parameterization of this product—e.g. via $`\alpha `$ and $`\beta `$ in Eq. (1)—does not help the situation. A statistical complexity measure that is a function only of disorder is not adequate to measure structural complexity, since it is unable to distinguish between structurally distinct configurations with the same disorder.
This work was supported at the Santa Fe Institute under the Computation, Dynamics and Inference Program via ONR grant N00014-95-1-0975 and by Sandia National Laboratory. We thank an anonymous referee for several helpful comments.
# Time-dependent Effects in the Metallic Phase in Si-MOS: Evidence for Non-Diffusive Transport
## Abstract
We have found that the conduction in Si-MOS structures has a substantial imaginary component in the metallic phase for the density range $`6\times n_c>n>n_c`$, where $`n_c`$ is the critical density of the metal-insulator transition. For high mobility samples, the corresponding delay (or advance) time, $`\tau \sim (0.1`$–$`10)`$ ms, increases exponentially as density and temperature decrease. In very low mobility samples, at a temperature of 0.3 K, the time-lag in establishing the equilibrium resistance reaches hundreds of seconds. The delay (advance) times are approximately $`10^2`$–$`10^8`$ times larger than the overall $`RC`$-time of the gated structure. These results give evidence for a non-Boltzmann character of the transport in the low-density metallic phase. We relate the time-dependent effects to tunneling of carriers between the 2D bulk and localized states.
The metal-insulator transition observed in different two dimensional carrier systems is currently in the focus of interest. A number of models put forward for its explanation span from a non-Fermi-liquid state to interface-trap physics . In the latter models, the strong exponential drop in the resistivity at low temperature is a transient temperature effect only. On the experimental side, from measurements at high carrier density , the exponential drop was proven not to be related to the ground state conduction, at least for high carrier densities $`n\sim (10`$–$`15)\times n_c`$.
The involvement of interface traps in the transport may be revealed by studying charging effects, time lag, noise character etc. We present here data evidencing that the time-dependent effects are essential in the metallic phase, even for densities 6 times the critical one, $`n_c`$. We studied in detail four samples with different peak mobility, Si-11 ($`\mu ^{peak}=39,000`$ cm<sup>2</sup>/Vs), Si-22 ($`\mu ^{peak}=33,000`$ cm<sup>2</sup>/Vs), Si-4/32 ($`\mu =8400`$ cm<sup>2</sup>/Vs), and Si-52 ($`\mu =1300`$ cm<sup>2</sup>/Vs). For the first three samples, an Al gate film was deposited onto the SiO<sub>2</sub> layer followed by a post-metallization anneal. The density of trapped carriers (estimated from the threshold voltage ) was $`(2`$–$`3)\times 10^{10}`$ cm<sup>-2</sup> for the first two samples, and $`21\times 10^{10}`$ cm<sup>-2</sup> for the third one. In the lowest mobility sample, we increased intentionally the amount of disorder by thermally evaporating an Inconel gate without a subsequent anneal. Beyond the substantial decrease in the mobility, the amount of trapped carriers increased up to $`60\times 10^{10}`$ cm<sup>-2</sup>. All samples were of the same Hall-bar geometry, $`5\times 0.8`$ mm<sup>2</sup>, with gate oxide thickness of $`d_{\mathrm{SiO}_2}=200\pm 20`$ nm, aspect ratio of $`w/l=0.32`$, and corresponding capacitance between the gate and 2D layer $`C\approx 690`$ pF. The potential and current contacts to the 2D channel were lithographically defined and made by thermal diffusion of phosphorus. The overall device $`RC`$-time, including contact resistance, was of the order of $`(1`$–$`10)\mu `$s, and was expected to contribute a negligibly small imaginary component to the sample ac-conductance.
Four-terminal ac-transport measurements were carried out in the frequency range 0.3 to 30 Hz with a quadrature lock-in amplifier. In order to eliminate the influence of the resistance of potential probes, we used a battery operated electrometric preamplifier with an input current less than 1 pA. The amplifier phase-frequency characteristic was verified not to contribute to the studied effects. In all samples, we found the time-dependent effects to persist in the metallic phase, far above the critical density. In high mobility samples, the characteristic times were in the ms-range and were measured from a phase shift $`\phi `$ between the voltage drop, $`V_x`$, and the source-drain current, $`I_x`$, as well as from the frequency dependence $`\phi (F)`$. In low mobility samples, the characteristic times were of the order of 1–100 s and were measured directly, by applying a small voltage step on top of the constant gate voltage and measuring the transient voltage $`V_x`$ at a constant current $`I_x`$. In the following, we define the “resistivity”, $`\rho `$, as the in-phase component, $`Re((V_x/I_x)\times w/l)`$.
Measurements with high mobility samples. Figure 1 shows the phase shift between the ac voltage $`V_x`$ and the current $`I_x`$, measured in a high mobility sample at a frequency of 3.8 Hz, as a function of carrier density. The phase shift emerges as the density decreases below $`6\times 10^{11}`$ cm<sup>-2</sup>. This is about 6 times larger than the critical density, $`n_c=0.95\times 10^{11}`$ cm<sup>-2</sup> for the metal-insulator transition in this sample.
The lower inset of Fig. 1a shows that, as density decreases, the phase shift first is negative (which corresponds to the voltage delay), then becomes positive (voltage advancing), and, finally, becomes again negative close to the critical density. The phase shift increases linearly with ac-current frequency $`F`$ and may thus be interpreted as a time delay (or advance, correspondingly), $`\tau =\phi /(2\pi F)`$. The upper inset in Fig. 1a shows the delay time (= $`|\tau |`$) calculated from the slope of the frequency characteristics, at a fixed temperature of 290 mK, and over the range of high densities where $`\varphi <0`$. As temperature decreases, the phase shift displays more and more pronounced oscillations as a function of density. The phase shift is reproducible during the same cool-down, however, in different cool-downs the oscillatory details varied on the density scale.
Figures 2a and 2b show the phase shift and the resistivity for sample Si-22 as a function of temperature for eight fixed densities. It is remarkable that the strong exponential drop in resistivity develops in the same ranges of densities and temperatures as the phase shift does, although we cannot simply relate the two effects to each other.
As follows from Figs. 1a and 2a, both, $`\tau `$ and $`\phi `$ decay about exponentially with density and temperature,
$$\tau \sim f_1(n,T)\mathrm{exp}(T_0(n)/T).$$
(1)
The prefactor $`f_1`$ oscillates as a function of density (and of temperature), changing sign at “node” values, $`n_i`$. The definition of the slope, $`T_0`$, is illustrated by the dashed tangent lines in the lower inset of Fig. 2a.
$`T_0(n)`$ is not constant over the entire temperature range: it is large at high temperatures and decreases at lower $`T`$’s. In the vicinity of the nodes $`n_i`$ and for low temperatures, the slope tends to vanish, which simply reflects the oscillatory behavior of $`f_1(T)`$. Therefore, the narrow density ranges around the nodes were ignored in the calculations of $`T_0`$. The resulting density dependence $`T_0(n)`$ is shown in Fig. 1b, evaluated separately for high ($`T>2.5`$ K) and low ($`T<2.5`$ K) temperatures. Although Eq. (1) describes the data only roughly, an important conclusion can be drawn immediately: $`T_0`$ does not decrease to 0 at the nodes $`n=n_i`$ and develops smoothly from the ranges of $`\phi >0`$ to those of $`\phi <0`$. This means that the nodes are related to the prefactor $`f_1(n)`$ rather than to the exponential factor. For high densities, $`T_0`$ decays more steeply than $`(n-n_0)^{-1}`$, as shown in Fig. 1b, thus causing an exponential decay of $`\tau `$ for high densities.
Low mobility sample. In the low mobility sample Si-52, the time-dependent effects are much stronger and manifest themselves in a transient voltage between potential probes when the gate voltage $`V_g`$ changes by a small step $`\mathrm{\Delta }V_g\ll V_g`$. Typical transient curves 1–5 of $`\mathrm{\Delta }\rho (t)=\rho (t)-\rho (0)`$ normalized by $`\mathrm{\Delta }\rho _0=\rho (\mathrm{\infty })-\rho (0)`$ are shown in Fig. 3a, for 5 different densities. The curves were fitted with exponential functions $`\rho _0\mathrm{exp}(-t/\tau _d)`$ (shown by dashed curves), from which the time lag $`\tau _d`$ was obtained. At some, rather arbitrary densities, the transient curves were non-monotonic with oscillations (curve 6), or jumps (curves 7, 8, 9).
The time lag $`\tau _d`$ and resistivity $`\rho `$ are plotted in Fig. 3b vs carrier density. Their ratio, $`\tau _d/\rho `$, is again $`\sim 10^8`$ times larger than the sample capacitance, pointing to its irrelevance.
Discussion. The characteristic times, even for high mobility samples, are of the order of $`10^{-4}`$–$`10^{-2}`$ s and, therefore, cannot be associated with any $`RC`$ time constants of the sample. Having the typical sample parameters $`R\sim (10^3`$–$`10^4)`$ Ohm, and $`C=0.7\times 10^{-9}`$ F, one can hardly find in the sample either a capacitance $`\sim 10^{-4}`$ F, or a resistance $`\sim 10^8`$ Ohm (in the metallic range of densities). Only for high densities $`n\sim 20\times 10^{11}`$cm<sup>-2</sup> does the delay time become comparable to $`RC\sim 10^{-6}`$ s. The huge time $`\tau `$ cannot be attributed to contact phenomena, because for the more disordered sample Si-52, at much higher densities (and for lower resistance of the contacts, correspondingly), $`\tau `$ is larger by a factor of $`10^5`$. The irrelevance of the contacts to the time-dependent effects is also confirmed by the linearity of the $`IV`$-curves measured between different contacts with currents in the range from $`10^{-7}`$ down to $`10^{-12}`$ A.
Model. Interface defect charges originating from the lack of stoichiometry are intrinsic to the Si/SiO<sub>2</sub> system; their typical density is 10<sup>12</sup>cm<sup>-2</sup> for a state-of-the-art thermally grown dioxide . Tunneling of electrons from Si to the interface charged states is known to cause a time lag in Si-MOS capacitors at room temperature . These charged states are partly neutralized during slow cooling of Si-MOSFETs with a positive gate voltage applied. Further, at liquid helium temperatures, the electron tunneling rate to these interface traps in SiO<sub>2</sub> under a barrier of 3.2 eV is negligible. The uniform part of the potential produced by the interface-state charge is unimportant; however, spatial fluctuations of the built-in charge produce shallow fluctuations of the potential acting on 2D electrons in Si (at $`z>0`$), and cause corresponding localization of electrons. We assume that the observed times are due to tunneling processes, on the Si-side entirely, between the electrons in the 2D “bulk” and the potential traps in the localized areas produced by the fluctuations of the interface charge . For the discussed low-temperature case, the active traps are created by the attractive (positive) charges that fall inside a large-scale repulsive fluctuation. The attractive charge placed at the interface (at $`z<0`$) localizes an electron nearby (on the Si-side, $`z>0`$), with a binding energy
$$\epsilon _b=m^{*}e^4/8\kappa ^2\hbar ^2.$$
(2)
Here $`m^{*}=0.21m_e`$ is the electron effective mass, $`\kappa =7.7`$ the average dielectric permittivity, and thus $`\epsilon _b=0.02`$ eV for the Si/SiO<sub>2</sub> interface. Repulsive charges located very close to the attractive charge decrease the electron binding energy. As a result, the binding energy distribution broadens and extends over a wide energy range, from about $`\epsilon _b`$ down to 0. The effective binding energy $`\epsilon _b^{eff}`$ is therefore substantially lower than $`\epsilon _b`$.
The localized state is located inside a large-scale repulsive fluctuation and is surrounded by a broad potential barrier of height $`\epsilon _b^{eff}`$. The barrier itself is surrounded by the electrons in the metallic regions of the 2D bulk. The barrier is responsible for the large electron capture and emission times. At nonzero $`T`$, only the traps located close to the Fermi level, within $`E_F\pm kT`$, are recharging when the local potential in the 2D bulk varies.
For low temperatures, the electron emission time is
$$\tau _{em}\simeq (\hbar /\epsilon _b)\mathrm{exp}(x/\lambda ),$$
(3)
where $`\lambda =\sqrt{\hbar ^2/8m^{*}\epsilon _b^{eff}}`$ is the typical tunneling length and $`x`$ is the distance from the trap to the nearest conductive region. The capture time $`\tau _c`$ is related to $`\tau _{em}`$ by the obvious relationship: $`\tau _c/\tau _{em}=(1-f)/f`$, where $`f`$ is the level occupancy (Fermi distribution function). For the traps whose energy is close to $`E_F`$, $`f\approx 1/2`$ and these two times are about equal. Using, for an estimate, $`\epsilon _b^{eff}=0.01`$ eV we obtain $`\lambda =20`$Å and the tunneling distance $`x=460`$Å corresponding to the tunneling time $`10^{-3}`$ s. Thus, a radius of the repulsive barrier $`r`$ has to be of the order of $`500`$Å, to account for the recharging time of 1 ms. For more disordered samples, the amount of the charge trapped at the interface and the amplitude of potential fluctuations are even larger. Therefore, the radius (in the $`xy`$ plane) of the potential fluctuations is also larger. To account for $`\tau =100`$ s, we estimate $`x`$ has to be equal to 700Å. The length scale, (500–700)Å, of the potential fluctuation is consistent with numerous data obtained for similar samples, as well as with direct tunneling microscopy of the Si/SiO<sub>2</sub> interface .
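A back-of-the-envelope check of these scales (our own sketch; order-of-magnitude only, using SI constants and the values quoted above):

```python
import numpy as np
from scipy.constants import hbar, m_e, e

eps_eff = 0.01 * e              # effective binding energy, J (0.01 eV)
eps_b = 0.02 * e                # binding energy in the prefactor, J (0.02 eV)
m_star = 0.21 * m_e             # effective mass in Si

lam = hbar / np.sqrt(8.0 * m_star * eps_eff)   # tunneling length
print(f"lambda = {lam * 1e10:.0f} Angstrom")   # -> ~21 A, i.e. the ~20 A above

tau = 1e-3                                     # target emission time, s
x = lam * np.log(tau * eps_b / hbar)           # invert tau ~ (hbar/eps_b) e^{x/lam}
print(f"x = {x * 1e10:.0f} Angstrom")          # -> ~500 A, the few-hundred-A scale
```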
As electron density and $`E_F`$ increase, the potential barriers get thinner due to screening by free electrons of the 2D bulk, and the radius of the total repulsive fluctuation decreases, causing $`\tau `$ to decrease monotonically. This competes with the density dependence of the effective binding energy $`\epsilon _b^{eff}`$; as a result, the overall density dependence of $`\tau `$ may be non-monotonic. Finally, above a certain density, all localized states sink below the Fermi energy, the barriers are screened entirely, and the Drude-Boltzmann regime sets in, replacing tunneling. The onset occurs at a Fermi energy roughly equal to the amplitude of bare fluctuations, and is thus inversely proportional to the sample peak mobility.
The temperature comes into the model via (i) the tunneling distance to the nearest trap level found within the energy interval $`E_F\pm kT`$, (ii) the Fermi distribution function $`f`$, and (iii) activation processes on the border and inside the localized area. These mechanisms lead to temperature dependences $`\tau \propto \mathrm{exp}(T_{0,i}/T)`$, and the resulting temperature dependence may have different $`T_{0,i}`$ in different ranges of temperature.
The electron tunneling time is much larger than the transport scattering time ($`\sim 3`$ ps), the electron-electron interaction time, $`h/E_{ee}\sim 0.1`$ ps, and the electron diffusion time ($`\sim 20`$ ps). Therefore, all electrons in the 2D layer do participate in tunneling during a time $`\tau \sim 10^{-3}`$ s. The capacitance and Hall voltage are measured at frequencies lower than the tunneling rate, $`1/\tau _{em}`$, hence, all the electrons of the 2D bulk participate in re-charging or in Hall transport.
In summary, our data show that the charge transport in the “metallic conduction” regime is accompanied by (or includes) a non-diffusive component. It manifests itself in the time-dependent effects in the metallic phase over the density range from $`n_c`$ to about 6 times $`n_c`$. The delay/advance time between the ac voltage and the current in high mobility samples is in the ms-range and grows exponentially as density and temperature decrease. For more disordered samples the time lag between the gate voltage pulse and the response reaches 1–100 seconds. We associate these times with the “in-plane” tunneling of carriers between the 2D bulk and the potential traps. The suggested model presumes that the unit “slow” trap consists of a potential well surrounded by a potential barrier, and seems to be applicable to various material systems since similar long-range localized areas were found in GaAs/AlGaAs as well . Particularly, this may be relevant to a system with a set of artificial quantum dots (traps) . The model may qualitatively describe (i) the large delay time $`\tau `$, (ii) the growth of $`\tau `$ as temperature and density decrease, and (iii) the non-monotonic density and temperature dependence of $`\tau `$. Whereas the tunneling time in the above model is different from that in Ref. , the physics of the temperature dependence of the resistivity caused by “fast traps” (located nearby the border between the localized and free carriers) may be similar. The complexity of the presented data, however, requires a thorough theoretical consideration, which should take into account the interaction of the free carriers in the 2D bulk with the localized ones, recharging and screening processes within the localized areas, and symmetry properties of the surface localized states.
V.P. acknowledges help by M. D’Iorio and E. M. Goliamina in samples processing, and stimulating discussions with B. Altshuler, D. Maslov, and A. Finkelstein. The work was supported by RFBR, by the Programs “Physics of solid-state nanostructures” and “Statistical physics”, by INTAS, NWO, and by FWF P13439 and GME, Austria.
## 1 Introduction
The center vortex theory and the monopole/abelian-projection theory are two leading contenders for the title of quark confinement mechanism. Both proposals have by now accumulated a fair amount of numerical support. To decide between them, it is important to pinpoint areas where the two theories make different, testable predictions. In this article we would like to report on some preliminary efforts in that direction.
Much of the numerical work on the center vortex theory has focused on correlations between the location of center vortices, identified by the center projection method, and the values of the usual gauge-invariant Wilson loops (cf. , ). In the abelian projection approach, on the other hand, Wilson loops are generally computed on abelian projected lattices, and this fact might seem to inhibit any direct comparison of the monopole and vortex theories. However, it has also been suggested in ref. that a center vortex would appear, upon abelian projection, in the form of a monopole-antimonopole chain, as indicated very schematically in Fig. 1. The idea is to consider, at fixed time, the vortex color-magnetic field in the vortex direction. In the absence of gauge fixing, the vortex field points in arbitrary directions in color space, as shown in Fig. 4. Upon fixing to maximal abelian gauge, the vortex field tends to line up, in color space, mainly in the $`\pm \sigma ^3`$ direction. But there are still going to be regions along the vortex tube where the field rotates from the $`+\sigma _3`$ to the $`\sigma _3`$ direction in color space (Fig. 4). Upon abelian projection, these regions show up as monopoles or antimonopoles, as illustrated in Fig. 4. If this picture is right, then the $`\pm 2\pi `$ monopole flux is not distributed symmetrically on the abelian-projected lattice, as one might expect in a Coulomb gas. Rather, it will be collimated in units of $`\pm \pi `$ along the vortex line. We have argued elsewhere that some sort of collimation of monopole magnetic fields into units of $`\pm \pi `$ is likely to occur even in the $`D=3`$ Georgi-Glashow model, albeit on a scale which increases exponentially with the mass of the W-boson. On these large scales, the ground state of the Georgi-Glashow model cannot be adequately represented by the monopole Coulomb gas analyzed by Polyakov in ref. . The question we address here is whether such flux collimation also occurs on the abelian-projected lattice of $`D=4`$ pure Yang-Mills theory.
The test of flux collimation on abelian-projected lattices is in principle quite simple. Consider a very large abelian Wilson loop
$$W_q(C)=\left\langle \mathrm{exp}\left[iq\oint _Cdx^\mu A_\mu \right]\right\rangle $$
(1.1)
or abelian Polyakov line
$$P_q=\left\langle \mathrm{exp}\left[iq\int dt\,A_0\right]\right\rangle $$
(1.2)
corresponding to $`q`$ units of the electric charge. The expectation values are obtained on abelian-projected lattices, extracted in maximal abelian gauge.<sup>1</sup><sup>1</sup>1Abelian-projected links in Yang-Mills theory are diagonal matrices of the form $`U_\mu =\text{diag}[\mathrm{exp}(iA_\mu ),\mathrm{exp}(-iA_\mu )]`$. If $`q`$ is an even number, then magnetic flux of magnitude $`\pm \pi `$ through the Wilson loop will not affect the loop. Flux collimation therefore implies that $`W_q(C)`$ has an asymptotic perimeter-law falloff if $`q`$ is even. Likewise, Polyakov lines $`P_q`$ for even $`q`$ are not affected, at long range, by collimated vortices of $`\pm \pi `$ magnetic flux. In the confined phase, the prediction is that $`P_q=0`$ only for odd-integer $`q`$. In contrast, we expect in a monopole Coulomb gas of the sort analyzed by Polyakov that $`W_q(C)`$ has an area-law falloff, and $`P_q=0`$, for all $`q`$.
Numerical results for abelian-projected Wilson loops and Polyakov lines must be interpreted with some caution in the usual maximal abelian gauge, due to the absence of a transfer matrix in this gauge. Since positivity is not guaranteed, these expectation values need not relate directly to the energies of physical states. For this reason, we prefer to interpret the abelian observables in terms of the type of global symmetry, or type of “magnetic disorder”, present in the abelian-projected lattice, without making any direct reference to the potential between abelian charges, or the energies of isolated charges. Following ref. , let us introduce the U(1) holonomy probability distribution on abelian-projected lattices
$$𝒫_C[g]=\left\langle \delta \left[g,\mathrm{exp}\left(i\oint _Cdx^\mu A_\mu \right)\right]\right\rangle $$
(1.3)
for Wilson loops, and
$$𝒫_T[g]=\left\langle \delta \left[g,\mathrm{exp}\left(i\int dt\,A_0\right)\right]\right\rangle $$
(1.4)
for Polyakov lines, where
$$\delta [e^{i\theta _1},e^{i\theta _2}]=\frac{1}{2\pi }\sum _{n=-\infty }^{\infty }e^{in(\theta _2-\theta _1)}$$
(1.5)
is the $`\delta `$-function on the U(1) manifold. These distributions give us the probability density that a given abelian Wilson loop around curve $`C`$, or an abelian Polyakov line of length $`T`$, respectively, will be found to have the value $`g\in U(1)`$ in any thermalized, abelian-projected lattice. The lattice has global U(1) symmetry (or, in alternative terminology, the lattice has “U(1) magnetic disorder”) if these distributions are flat for Polyakov lines and large Wilson loops. In other words, we have U(1) symmetry iff, for any $`g,g^{\prime }\in U(1)`$, it is true that
$$𝒫_T[g]-𝒫_T[g^{\prime }g]=0$$
(1.6)
for Polyakov lines, and
$$𝒫_C[g]-𝒫_C[g^{\prime }g]\sim \mathrm{exp}[-\sigma \text{Area}(C)]$$
(1.7)
holds asymptotically for large Wilson loops. In the case that these relations are not true in general, but the restricted forms
$$𝒫_T[g]-𝒫_T[zg]=0$$
(1.8)
and
$$𝒫_C[g]-𝒫_C[zg]\sim \mathrm{exp}[-\sigma \text{Area}(C)]$$
(1.9)
hold for any $`g\in U(1)`$ and $`z=\pm 1\in Z_2`$, then we will say that the lattice has only $`Z_2`$ global symmetry (or $`Z_2`$ magnetic disorder).<sup>2</sup><sup>2</sup>2Generalizing to $`g\in SU(N)`$ and $`z\in Z_N`$, for gauge-invariant loops on the unprojected lattice, eq. (1.8) follows from the well-known global $`Z_N`$ symmetry of the confined phase.
Inserting (1.5) into (1.3) and (1.4), we have
$$𝒫_C[e^{i\theta }]=\frac{1}{2\pi }\left(1+2\sum _{q>0}W_q(C)\mathrm{cos}(q\theta )\right)$$
$$𝒫_T[e^{i\theta }]=\frac{1}{2\pi }\left(1+2\sum _{q>0}P_q\mathrm{cos}(q\theta )\right)$$
(1.10)
From this, we can immediately see that there is U(1) magnetic disorder iff $`W_q(C)`$ has an asymptotic area law falloff, and $`P_q=0`$, for all integer $`q\neq 0`$. On the other hand, if these conditions hold only for $`q=`$ odd integer, then we have $`Z_2`$ magnetic disorder. It is an unambiguous prediction of the vortex theory that the lattice has only $`Z_2`$ magnetic disorder, even after abelian projection.
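A small numerical illustration of eq. (1.10) (ours; the $`P_q`$ values below are invented): with only even-$`q`$ lines non-vanishing, the holonomy distribution is non-flat yet invariant under $`\theta \to \theta +\pi `$, which is exactly the restricted relation (1.8) with $`z=-1`$.

```python
import numpy as np

def holonomy_distribution(theta, P):
    """P_T[exp(i theta)] from eq. (1.10); P maps charge q to the Polyakov line P_q."""
    out = np.full_like(theta, 1.0 / (2.0 * np.pi))
    for q, Pq in P.items():
        out += Pq * np.cos(q * theta) / np.pi
    return out

theta = np.linspace(-np.pi, np.pi, 9)
P_z2 = {1: 0.0, 2: -0.05, 3: 0.0}   # Z2 disorder: odd-q lines vanish (toy values)
dist = holonomy_distribution(theta, P_z2)
print(np.allclose(dist, holonomy_distribution(theta + np.pi, P_z2)))   # True
```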
The magnetic disorder induced by a monopole Coulomb gas is expected to be rather different from the disorder induced by vortices. A Coulombic magnetic field distribution will in general affect loops of any $`q`$, with $`q>1`$ loops responding even more strongly than $`q=1`$ to any variation of magnetic flux through the loop. The usual statement is that all integer abelian charges are confined. This statement is confirmed explicitly in $`D=3`$ dimensions, where it is found in a semiclassical calculation that the monopole Coulomb gas derived from $`QED_3`$ confines all charges, with string tensions $`\sigma _q`$ directly proportional to the charge
$$\sigma _q=q\sigma _1$$
(1.11)
This relation is also consistent with recent numerical simulations .
In D=4 dimensions, an analytic treatment of monopole currents interacting via a two-point long-range Coulomb propagator, plus possible contact terms, is rather difficult. Nevertheless, $`QED_4`$ (particularly in the Villain formulation) can be viewed as a theory of monopole loops and photons, and in the confined (=strong-coupling) phase there is a string tension for all charges $`q`$, and all $`P_q=0`$, as can be readily verified from the strong-coupling expansion. Confinement of all charges $`q`$ is also found in a simple model of the monopole Coulomb gas, due to Hart and Teper . Finally, all multiples $`q`$ of electric charge are confined in the dual abelian Higgs model (a theory of dual superconductivity), and this model is known to be equivalent, in certain limits, to an effective monopole action with long-range two-point Coulombic interactions between monopole currents.
In the case of $`D=4`$ abelian-projected Yang-Mills theory, we do not really know if the distribution of monopole loops identified on the projected lattice is typical of a monopole Coulomb gas. What *can* be tested, however, is whether or not the field associated with these monopoles is Coulombic (as opposed, e.g., to collimated). This is done by comparing observables measured on abelian-projected lattices with those obtained numerically via a “monopole dominance” (MD) approximation, first introduced in ref. . The MD approximation involves two steps. First, the location of monopole currents on an abelian-projected lattice is identified using the standard DeGrand-Toussaint criterion . Secondly, a lattice configuration is reconstructed by assuming that each link is affected by the monopole currents via a lattice Coulomb propagator. Thus, the lattice Monte Carlo and abelian projection supply a certain distribution of monopoles, and we can study the consequences of assigning a Coulombic field distribution to the monopole charges.
We should pause here to explain how the non-confinement of color charges in the adjoint representation is accounted for in the monopole gas or dual-superconductor pictures, in which all abelian charges are confined. The screening of adjoint (or, in general, $`j=`$integer) representations in Yang-Mills theory is a consequence of having only $`Z_2`$ global symmetry on a finite lattice in the confined phase. In a monopole Coulomb gas, on the other hand, the confinement of all abelian charges would imply a U(1) global symmetry of the projected lattice at finite temperature. Nevertheless, the $`Z_2`$ symmetry of the full lattice and U(1) symmetry of the projected lattice are not *necessarily* inconsistent. Let $`P_j^{YM}`$ represent the Polyakov line in SU(2) gauge theory in group representation $`j`$. Generalizing eq. (1.4) to $`g\in SU(2)`$, we have
$$𝒫_T[g]=\sum _{j=0,\frac{1}{2},1,\dots }P_j^{YM}\chi _j[g]$$
(1.12)
and the fact that eq. (1.8), rather than eq. (1.6), is satisfied follows from the identity
$$\chi _j(zg)=\chi _j(g)\qquad \text{for }j=\text{integer}$$
(1.13)
and from the fact that
$$P_j^{YM}\begin{cases}=0&j=\text{half-integer}\\ \neq 0&j=\text{integer}>0\end{cases}$$
(1.14)
in the confined phase, due to confinement of charges in half-integer, and color-screening of charges in integer, SU(2) representations.
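The identity (1.13) is easy to verify numerically for SU(2), where $`\chi _j(\theta )=\mathrm{sin}[(2j+1)\theta /2]/\mathrm{sin}(\theta /2)`$ and the nontrivial center element $`z=-1`$ corresponds to $`\theta \to \theta +2\pi `$ (our own sketch):

```python
import numpy as np

def chi(j, theta):
    """SU(2) character in the spin-j representation."""
    return np.sin((2 * j + 1) * theta / 2.0) / np.sin(theta / 2.0)

theta = 0.7
for j in (0.5, 1.0, 1.5, 2.0):
    ratio = chi(j, theta + 2 * np.pi) / chi(j, theta)
    print(j, f"{ratio:+.0f}")   # -1 for half-integer j, +1 for integer j
```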
Now, on an abelian projected lattice, the expectation value of a Yang-Mills Polyakov line in representation $`j`$ becomes
$$P_j^{YM}\rightarrow \sum _{m=-j}^{j}P_{2m}$$
(1.15)
where $`P_q`$ is the abelian Polyakov line defined in eq. (1.2). If the abelian-projected lattice is U(1) symmetric, then $`P_{2m}=0`$ for all $`m\neq 0`$, while $`P_0=1`$. This means that after abelian-projection we have
$$P_j^{YM}=\begin{cases}0&j=\text{half-integer}\\ 1&j=\text{integer}\end{cases}$$
(1.16)
and in this way the non-zero values of $`j=`$integer Polyakov lines are accounted for, even assuming that the projected lattice has a global U(1)-invariance. Similarly, adjoint Wilson loops will not have an area law falloff on the projected lattice, as expected from color-screening. This explanation of the adjoint perimeter law in the abelian projection theory has other difficulties, associated with Casimir scaling of the string tension at intermediate distances (cf. ref. ), but at least the *asymptotic* behavior of Wilson loops in various representations is consistent with U(1) symmetry on the abelian projected lattice.
It is just this assumed U(1) symmetry of the abelian-projected lattice, and the monopole Coulomb gas picture which is associated with it, that we question here. The U(1) vs. $`Z_2`$ global symmetry issue can be settled by calculating abelian Wilson loops and/or Polyakov lines for abelian charges $`q>1`$. The validity of the Coulomb gas picture can also be probed by calculating $`W_q(C)`$ and $`P_q`$ with and without the monopole dominance (MD) approximation, and comparing the two sets of quantities.
The first calculation of $`q>1`$ Wilson loops, in the MD approximation, was reported recently by Hart and Teper ; their calculation confirms the Coulomb gas relation $`\sigma _q\propto q`$ found previously for compact $`QED_3`$. In ref. , this result is interpreted as favoring the monopole Coulomb gas picture over the vortex theory. From our previous remarks, it may already be clear why we do not accept this interpretation. At issue is whether monopole magnetic fields spread out as implied by the Coulomb propagator, or whether they are collimated in units of $`\pm \pi `$. This issue cannot be resolved by the MD approximation, which imposes a Coulombic field distribution from the beginning. The MD approximation does, however, tell us that if the monopoles have a Coulombic field distribution, then the $`q=2`$ Wilson loop has an area law falloff, at least up to the maximum charge separation studied in ref. . The crucial question is whether the $`q=2`$ loops computed directly on abelian projected lattices also have an asymptotic area law falloff, or instead go over to perimeter-law behavior (usually called “string-breaking”) as predicted by the vortex theory.
Here it is important to have some rough idea of where the $`q=2`$ string is expected to break, according to the vortex picture, otherwise a null result can never be decisive. A $`q=2`$ Wilson loop will go over to perimeter behavior when the size of the loop is comparable to the thickness of the vortex. It seems reasonable to assume that the thickness of a vortex on the abelian-projected lattice is comparable to the thickness of a center vortex on the unprojected lattice. From Fig. 1 of ref. , this thickness appears to be roughly one fermi, which is also about the distance where an adjoint representation string should break in $`D=4`$, according to an estimate due to Michael . The finite thickness of the vortex is an important feature of the vortex theory, as it allows us to account for the approximate Casimir scaling of string tensions at intermediate distances (cf. refs. ). But it also means that at, e.g., $`\beta =2.5`$, we should look for string breaking at around $`R=12`$ lattice spacings. Noise reduction techniques, such as the “thick-link” approach, then become essential.
The validity of the thick-link approach, however, is tied to the existence of a transfer matrix. Since the method uses $`R\times T`$ loops with $`R\gg T`$, one has to show that the potential extracted is mainly sensitive to the large separation $`R`$, rather than the smaller separation $`T`$, and here positivity plays a crucial rule. Since there is no transfer matrix in maximal abelian gauge, the validity of the thick link approach is questionable (and the issue of positivity is much more than a quibble, as we will see below). Moreover, even when a transfer matrix exists, string-breaking is not easy to observe by this method, and requires more than just the calculation of rectangular loops. The breaking of the adjoint-representation string has not been seen using rectangular loops alone, and only quite recently has this breaking been observed, in 2+1 dimensions, by taking account of mixings between string and gluelump operators . The analogous calculation, for operators defined in maximal abelian gauge, would presumably involve mixings between the $`q=2`$ string and “charge-lumps”; the latter being bound states of the static abelian charge and the off-diagonal (double abelian-charged) gluons.
There are, in fact, existing calculations of the $`q=2`$ potential, by Poulis and by Bali et al. , using the thick-link method. String breaking was not observed, but neither did these calculations make use of operator-mixing techniques, which seem to be necessary for this purpose. In any case, in view of the absence of a transfer matrix in maximal abelian gauge, we do not regard these calculations as decisive.
Given our reservations concerning the thick-link approach, we will opt in this article for a far simpler probe of global symmetry/magnetic disorder on the projected lattice, namely, the double-charged abelian Polyakov lines $`P_2`$. Any abelian magnetic vortex can be regarded, away from the region of non-vanishing vortex field strength, as a discontinuous gauge transformation, and it is this discontinuity which affects Wilson loops and Polyakov lines far from the region of finite vortex field strength. If the vortex flux is $`\pm \pi `$, the discontinuity will not affect even-integer $`q`$-charged Polyakov lines, and these should have a finite expectation value. The abelian-projected lattice then has only $`Z_2`$ global symmetry in the confined phase. In contrast, a monopole Coulomb gas is expected to confine all $`q`$ charges, as in compact $`QED_3`$, and the $`Z_2`$ subgroup should play no special role. In that case, $`P_q=0`$ for all $`q`$. Thus, if we find that the $`q=2`$ Polyakov line vanishes, this is evidence against vortex structure and flux collimation, and in favor of the monopole picture. Conversely, if $`q=2`$ Polyakov lines do not vanish, the opposite conclusion applies, and the vortex theory is favored.
## 2 Polyakov lines
After fixing to maximal abelian gauge in SU(2) lattice gauge theory, abelian link variables
$$U_\mu ^A(x)=\text{diag}[e^{i\theta _\mu (x)},e^{-i\theta _\mu (x)}]$$
(2.1)
are extracted by setting the off-diagonal elements of link variables $`U_\mu `$ to zero, and rescaling to restore unitarity. A $`q`$-charge Polyakov line $`P_q(\stackrel{}{x})`$ is defined as
$$P_q(\stackrel{}{x})=\underset{n=1}{\overset{N_T}{\prod }}\mathrm{exp}[iq\theta _4(\stackrel{}{x}+n\widehat{4})]$$
(2.2)
where $`N_T`$ ($`N_S`$) is the number of lattice spacings in the time (space) directions. We can consider both the expectation value of the lattice average
$$P_q=\frac{1}{N_S^3}\underset{\stackrel{}{x}}{\sum }P_q(\stackrel{}{x})$$
(2.3)
and the expectation value of the absolute value of the lattice average
$$P_q^{abs}=\frac{1}{N_S^3}\left|\underset{\stackrel{}{x}}{\sum }P_q(\stackrel{}{x})\right|$$
(2.4)
Polyakov lines can vanish for $`q=1`$, even in the deconfined phase, just by averaging over $`Z_2`$-degenerate vacua, which motivates the absolute value prescription. In the confined phase, one then has $`P_1^{abs}\sim N_S^{-3/2}`$. We will find that this prescription is unnecessary for $`q=2`$, and we will compute these Polyakov lines without taking the absolute value of the lattice average.
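To make these definitions concrete, the following is a minimal NumPy sketch (our own illustration, not code from the paper; the array layout and function name are assumptions) which evaluates eqs. (2.2)-(2.4) on a single configuration of abelian link angles:

```python
import numpy as np

# Sketch of eqs. (2.2)-(2.4): theta4[t, x, y, z] is assumed to hold the
# timelike abelian link angles on an N_T x N_S^3 periodic lattice.
def polyakov_line(theta4, q):
    phase = q * theta4.sum(axis=0)     # exponent of the product in eq. (2.2)
    P_x = np.exp(1j * phase)           # P_q(x) at each spatial site
    lat_avg = P_x.mean()               # lattice average, eq. (2.3)
    return lat_avg, np.abs(lat_avg)    # eq. (2.4) averages |lat_avg| over configurations

# Random phases mimic a strongly disordered lattice; the average is then
# of order N_S^{-3/2}
rng = np.random.default_rng(1)
theta4 = rng.uniform(-np.pi, np.pi, size=(4, 12, 12, 12))
print(polyakov_line(theta4, q=1))
print(polyakov_line(theta4, q=2))
```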
For purposes of comparison, and as a probe of the monopole Coulomb gas picture, we also compute “monopole” Polyakov lines $`P_{Mq}`$ following a monopole-dominance (MD) approach used by Suzuki et al. in ref. . Their procedure is to decompose the abelian plaquette variable ($`\partial _\mu `$ denotes the forward lattice difference)
$$f_{\mu \nu }(x)=\partial _\mu \theta _\nu (x)-\partial _\nu \theta _\mu (x)$$
(2.5)
into two terms
$$f_{\mu \nu }(x)=\overline{f}_{\mu \nu }(x)+2\pi n_{\mu \nu }(x)$$
(2.6)
where $`n_{\mu \nu }`$ is an integer-valued Dirac-string variable, and $`-\pi <\overline{f}_{\mu \nu }\le \pi `$. One can then invert (2.5) to solve for $`\theta _4`$ in terms of the “photon” field-strength $`\overline{f}_{\mu \nu }`$, the Dirac-string variables $`n_{\mu \nu }`$, and an irrelevant U(1) gauge-dependent term. If we assume that the photon and Dirac-string variables are completely uncorrelated, then the Dirac-string contribution is given by
$$\theta _4^M(x)=\underset{x^{\prime }}{\sum }D(x,x^{\prime })\partial _\nu ^{\prime }n_{\nu 4}(x^{\prime })$$
(2.7)
Here $`D(x,x^{})`$ is the lattice Coulomb propagator, and the partial derivative denotes a backward difference. The monopole dominance approximation is to replace $`\theta _4`$ by $`\theta _4^M`$ in eq. (2.2), the idea being that this procedure isolates the contribution of the monopole fields to the Polyakov lines. The correlations between the photon, monopole, and abelian lattice fields will be discussed in more detail in section 3, and in an Appendix.
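The chain (2.5)-(2.7) translates directly into code. The sketch below is our own construction under stated assumptions (array conventions, periodic boundaries; none of the names come from the paper): it extracts the Dirac-string variables and applies the lattice Coulomb propagator by FFT.

```python
import numpy as np

def field_strength(theta, mu, nu):
    # Eq. (2.5): theta[mu] is the lattice of link angles in direction mu;
    # np.roll with shift -1 implements the forward difference on a
    # periodic lattice.
    d_mu = np.roll(theta[nu], -1, axis=mu) - theta[nu]
    d_nu = np.roll(theta[mu], -1, axis=nu) - theta[mu]
    return d_mu - d_nu

def split_dirac_string(f):
    # Eq. (2.6): f = fbar + 2*pi*n with -pi < fbar <= pi and n integer.
    n = np.ceil((f - np.pi) / (2 * np.pi))
    return f - 2 * np.pi * n, n.astype(int)

def coulomb_solve(rho):
    # Apply the lattice Coulomb propagator D to a source via FFT;
    # the zero mode is dropped (a gauge-irrelevant constant).
    k = np.meshgrid(*[2 * np.pi * np.fft.fftfreq(n) for n in rho.shape],
                    indexing="ij")
    lap = sum(2.0 - 2.0 * np.cos(ki) for ki in k)   # lattice -Laplacian
    lap.flat[0] = 1.0                               # avoid 0/0 on the zero mode
    phi_k = np.fft.fftn(rho) / lap
    phi_k.flat[0] = 0.0
    return np.real(np.fft.ifftn(phi_k))

def theta4_monopole(n_string):
    # Eq. (2.7): backward-difference divergence of the n_{nu,4} arrays,
    # then the Coulomb propagator.
    div = sum(n_string[nu] - np.roll(n_string[nu], 1, axis=nu)
              for nu in range(len(n_string)))
    return coulomb_solve(div)
```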
### 2.1 $`𝒁_\mathrm{𝟐}`$ Magnetic Disorder
In Fig. 5 we display $`P_1^{abs}`$ and $`P_{M1}^{abs}`$ for the $`q=1`$ lines, on a $`12^3\times 3`$ lattice. There are no surprises here; we see that for $`N_T=3`$ there is a deconfinement transition around $`\beta =2.15`$.
The situation changes dramatically when we consider $`q=2`$ Polyakov lines. Fig. 6 is a plot of the values of $`P_2`$ and $`P_{M2}`$, without any absolute value prescription, on the $`12^3\times 3`$ lattice. To make the point clear, we focus on the data in the confined phase, in Fig. 7. It can be seen that $`P_2`$ is non-vanishing and negative in the confined phase; the data is clearly not consistent with a vanishing expectation value. In the MD approximation, $`P_{M2}`$ may also be slightly negative, but its value is at least an order of magnitude smaller than $`P_2`$. This seems to be a very strong breakdown of monopole dominance, in the form proposed in ref. .
In Figures 8-10 we plot the corresponding data found on a $`16^3\times 4`$ lattice. There is a deconfinement transition close to $`\beta =2.3`$, and again there is a clear disagreement between $`P_2`$ and $`P_{M2}`$, with the former having a substantial non-vanishing expectation value throughout the confined phase.
The numerical evidence, for both $`T=3`$ and $`T=4`$, clearly favors having $`Z_2`$, rather than U(1), global symmetry/magnetic disorder on the abelian projected lattice.
### 2.2 Spacelike Maximal Abelian Gauge
It is also significant that $`P_2`$ is negative. This implies a lack of reflection positivity in the Lagrangian obtained after maximal abelian gauge fixing, and must be tied to the fact that maximal abelian gauge is not a physical gauge. This diagnosis also suggests a possible cure: Instead of fixing to the standard maximal abelian gauge, which maximizes
$$R=\underset{x}{\sum }\underset{\mu =1}{\overset{4}{\sum }}\text{Tr}[\sigma _3U_\mu (x)\sigma _3U_\mu ^{\dagger }(x)]$$
(2.8)
we could try to use a “spacelike” maximal abelian gauge , maximizing the quantity
$$R=\underset{x}{\sum }\underset{k=1}{\overset{3}{\sum }}\text{Tr}[\sigma _3U_k(x)\sigma _3U_k^{\dagger }(x)]$$
(2.9)
which involves only links in spatial directions. This is a physical gauge. What happens in this case is that one disease, the loss of reflection positivity, is replaced by another, namely, the breaking of $`90^{\circ }`$ rotation symmetry. This is illustrated in Fig. 11, where we plot spacelike and timelike Polyakov lines on a $`4^4`$ lattice, in the spacelike maximal abelian gauge defined above. We find that the values for double-charged Polyakov lines running in the time direction are much reduced in the spacelike gauge, and in fact the results shown appear consistent with zero. *Spacelike* $`q=2`$ Polyakov lines, however, which run along the $`1,2,`$ or $`3`$ lattice directions, remain negative, and in fact are larger in magnitude than Polyakov lines of the same length, and the same coupling, computed in the usual maximal abelian gauge. One therefore finds on a hypercubic lattice that $`90^{\circ }`$ rotation symmetry is broken.
The spacelike Polyakov line operator creates a line of electric flux through the periodic lattice. The non-vanishing overlap of this state with the vacuum has, in the spacelike gauge, a direct physical interpretation: Since the $`q=2`$ electric flux line cannot, for topological reasons, shrink to zero, a finite overlap with the vacuum means that the $`q=2`$ flux tube breaks. This is presumably due to screening by double-charged (off-diagonal) gluon fields. The implication is that in a physical gauge, where Wilson loops can be translated into statements about potential energies, $`q=`$ even abelian charges are not confined.
The fact that $`q=`$ even charges are unconfined, together with the positivity property in spacelike maximal abelian gauge, leads to the conclusion that timelike $`q=2`$ Polyakov lines $`P_2`$ are positive and non-zero, although the data points for timelike $`P_2`$ shown in Fig. 11, which appear to be consistent with zero, do not yet support such a conclusion. It must be that the value of $`P_2`$ in this gauge is simply very small, and much better statistics are required to distinguish that value from zero. To get some idea of the difficulty involved, let us suppose that the magnitude of the timelike abelian line $`P_2`$ on the projected lattice is comparable to the magnitude of the gauge-invariant Polyakov line $`P_{adj}`$, in the adjoint representation of SU(2), on the full, unprojected lattice. To leading order in the strong-coupling expansion, $`P_{adj}`$ on a lattice with extension $`T`$ in the time direction is given by
$$P_{adj}=4\left(\frac{\beta }{4}\right)^{4T}$$
(2.10)
This equals, e.g., $`0.00156`$ for $`T=2`$ at $`\beta =1.5`$; quite a small signal considering that $`T`$ is only two lattice units. The obvious remedy is to increase $`\beta `$, but then one runs into a deconfinement transition at $`\beta =1.8`$. We can move the transition to larger $`\beta `$ by increasing $`T`$, but of course increasing $`T`$ again causes the signal to go down.
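For orientation, the estimate (2.10) is trivial to tabulate (a quick numerical check, reproducing the values quoted here):

```python
# Leading strong-coupling estimate (2.10) for the adjoint Polyakov line
def p_adj(beta, T):
    return 4 * (beta / 4) ** (4 * T)

print(p_adj(1.5, 2))   # ~0.00156, the value quoted above
print(p_adj(1.7, 2))   # ~0.00426, compared with the measurement in (2.11)
```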
The best chance to extract a signal from the noise is to choose a value of $`\beta `$ which is fairly close to the deconfinement transition (but still in the confined phase), and to generate very many configurations. Here are the results obtained at $`T=2`$ lattice spacings and $`\beta =1.7`$, coming from 5000 configurations separated by 100 sweeps on a $`2\times 8^3`$ lattice:
$`P_{adj}`$ $`=`$ $`0.00447(23)`$
$`P_2`$ $`=`$ $`0.00241(52)`$ (2.11)
The result for the adjoint line is consistent with the strong-coupling prediction of $`P_{adj}=0.00426`$. The abelian line $`P_2`$ is non-zero, positive, and comparable in magnitude to $`P_{adj}`$, although it must be admitted that the errorbar is uncomfortably large. The corresponding values for $`T=3`$ and $`\beta =2.11`$, obtained from 5000 $`3\times 8^3`$ lattices, are
$`P_{adj}`$ $`=`$ $`0.00338(28)`$
$`P_2`$ $`=`$ $`0.00124(42)`$ (2.12)
Again $`P_2`$ is non-zero, although the errorbar is still too large for comfort.
Clearly the evaluation of the timelike $`P_2`$ line in spacelike maximal abelian gauge is cpu-intensive, and our results for this quantity must be regarded as preliminary. Nevertheless, these preliminary results are consistent with the conclusion previously inferred from the spacelike lines: In a physical gauge, the $`q=2`$ charge is screened, rather than confined, and we have $`Z_2`$, rather than U(1), magnetic disorder on the abelian-projected lattice.
## 3 The “Photon” Contribution
Suppose we write the link angles $`\theta _\mu (x)`$ of the abelian link variables as a sum of the link angles $`\theta _\mu ^M(x)`$ in the MD approximation, plus a so-called “photon” contribution $`\theta _\mu ^{ph}(x)`$, i.e.
$$\theta _\mu ^{ph}(x)\equiv \theta _\mu (x)-\theta _\mu ^M(x)$$
(3.1)
It was found in refs. that the photon field has no confinement properties at all; the Polyakov line constructed from links $`U_\mu =\mathrm{exp}[i\theta _\mu ^{ph}]`$ is finite, and corresponding Wilson loops have no string tension. Since $`\theta _\mu ^M`$ would appear to carry all the confining properties, a natural conclusion is that the abelian lattice is indeed a monopole Coulomb gas.
To see where this reasoning may go astray, suppose we perversely *add*, rather than subtract, the MD angles to the abelian angles, i.e.
$`\theta _\mu ^{}(x)`$ $`=`$ $`\theta _\mu (x)+\theta _\mu ^M(x)`$ (3.2)
$`=`$ $`\theta _\mu ^{ph}(x)+2\theta _\mu ^M(x)`$
in effect doubling the strength of the monopole Coulomb field. It is natural to expect a corresponding increase of the string tension, and of course $`P_1=0`$ should remain true. Surprisingly, this is not what happens; doubling the strength of the monopole field in fact removes confinement.<sup>3</sup><sup>3</sup>3We have already noted that in the absence of a transfer matrix, the term “confinement of abelian charge” must be used with caution. In this section, the phrase “confinement of charge q” is just taken to mean “$`P_q=0`$”. Some results for $`P_1`$ are shown in Table 1. Here we have computed the vev of $`P_1(x)`$ without taking the absolute value of the lattice sum (i.e. we use eq. (2.3) rather than eq. (2.4)), and we find that $`P_1`$ is finite and negative in the additive configurations. The additive configuration $`\theta ^{}`$ is far from pure-gauge, and the vev of $`P_1`$ is correspondingly small. Nevertheless, $`P_1`$ is non-zero, so adding the monopole field in this case actually *removes* confinement. Clearly, the interplay between the MD and “photon” contributions is a little more subtle than previously supposed.
To understand what is going on, we return to the concept of the holonomy probability distribution
$$𝒫(\theta )=𝒫_T[e^{i\theta }]=<\delta [e^{i\theta },\underset{n=1}{\overset{N_T}{\prod }}\mathrm{exp}[i\theta _4(\stackrel{}{x}+n\widehat{4})]]>$$
(3.3)
$`𝒫_T[e^{i\theta }]`$ is the probability density for the U(1) group elements on the group manifold. However, since the group measure on the U(1) manifold is trivial (i.e. $`d\theta `$), it is not hard to see that $`𝒫(\theta )d\theta `$ is interpreted as the probability that the phase of an abelian Polyakov line lies in the interval $`[\theta ,\theta +d\theta ]`$. In a similar way, replacing $`\theta _4`$ by $`\theta _4^M`$ or $`\theta _4^{ph}`$ on the rhs of the above equation, we can define the probability distributions $`𝒫(\theta ^M)`$ and $`𝒫(\theta ^{ph})`$, respectively, for the phases of monopole and photon Polyakov lines. All of these distributions have $`2\pi `$-periodicity, and are invariant under reflections $`\theta \rightarrow -\theta `$, so we need only consider their behavior in the interval $`[0,\pi ]`$. Without making any further calculations, it is already possible to deduce something about the shape of $`𝒫(\theta )`$:
* Since all $`P_q`$ are small, $`𝒫(\theta )`$ is fairly flat.
* Assuming $`Z_2`$ symmetry, $`𝒫(\theta )`$ is symmetric, in the interval $`[0,\pi ]`$, around $`\theta =\frac{\pi }{2}`$.
* Since $`P_2`$ is negative, $`𝒫(\theta )`$ should be larger in the neighborhood of $`\theta =\frac{\pi }{2}`$ than in the neighborhood of $`\theta =0`$ or $`\theta =\pi `$.
From these considerations, we deduce that $`𝒫(\theta )`$ looks something like Fig. 12. Similarly, since $`P_{Mq}0`$ in the MD approximation, we conclude that there is very nearly U(1) symmetry in this approximation, and $`𝒫(\theta ^M)`$ is almost flat, as in Fig. 13.
Now if the link angles $`\theta _\mu (x)`$ and $`\theta _\mu ^M(x)`$ are correlated to some extent, then the difference $`\theta ^{ph}(x)`$ between these variables is not random, but has some non-uniform probability distribution as illustrated in Fig. 14. Since the Fourier cosine components of $`𝒫(\theta ^{ph})`$ are typically non-zero, it follows that the photon field, by itself, has no confinement property. The crucial point is that by subtracting $`\theta ^M`$, the $`Z_2`$ symmetry of $`\theta `$ is broken due to the correlation between $`\theta `$ and $`\theta ^M`$. It is interesting to note that even if $`𝒫(\theta ^M)`$ were neither U(1) nor $`Z_2`$ symmetric, i.e. if we imagine that the $`\theta `$-configurations confine but the MD contributions do not, a correlation between the $`\theta `$ and $`\theta ^M`$ would still be sufficient to break the $`Z_2`$ symmetry of the difference configuration $`\theta ^{ph}`$. As a result, subtracting the non-confining $`\theta ^M`$ from the confining $`\theta `$ would still remove confinement.
In the center vortex picture, vortex fields supply the confining disorder, but of course this does not at all exclude a correlation of the MD variables $`\theta ^M`$ with the $`\theta `$ variables. According to the arguments in the Introduction, monopoles lie along vortices as shown in Fig. 1 (further evidence is given in the next section), and this correspondence will certainly introduce some degree of correlation between magnetic flux on the abelian-projected lattice, and magnetic flux in the MD approximation. Roughly speaking, one can say that the confining flux has the same magnitude on the abelian and MD lattices, only it is distributed differently (collimated vs. Coulombic). However, according to the center vortex picture, there must *also* be some correlation between $`\theta _\mu ^{ph}(x)`$ and $`\theta _\mu ^M(x)`$; this is necessary to convert the long-range monopole Coulomb field into a vortex field, and to break the U(1) symmetry of the MD lattice down to the $`Z_2`$ symmetry of the vortex vacuum.
In numerical simulations performed at $`\beta =2.1`$ and $`T=3`$ on a $`3\times 12^3`$ lattice, we do, in fact, find a striking correlation between $`\theta ^{ph}`$ and $`\theta ^M`$: The average “photon” angle $`\theta ^{ph}`$ tends to be positive for $`\theta ^M\in [0,\frac{\pi }{2}]`$, and negative for $`\theta ^M\in [\frac{\pi }{2},\pi ]`$. Computing the average photon angle $`\overline{\theta }^{ph}`$ in each monopole angle quarter-interval, we find
$$\overline{\theta }^{ph}=\{\begin{array}{cc}\hfill 0.027(4)& \text{for }\theta ^M\in [0,\frac{\pi }{2}]\hfill \\ \hfill -0.027(4)& \text{for }\theta ^M\in [\frac{\pi }{2},\pi ]\hfill \end{array}$$
(3.4)
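The measurement behind eq. (3.4) is straightforward to organize; a hedged sketch (our own, with hypothetical array inputs) is:

```python
import numpy as np

def quarter_interval_means(theta_abelian, theta_monopole):
    # Inputs: sampled Polyakov phases arg P_1(x) and arg P_M1(x);
    # the photon phase is their wrapped difference.
    ph = np.angle(np.exp(1j * (theta_abelian - theta_monopole)))
    sel = theta_monopole >= 0            # exploit reflection symmetry
    ph, m = ph[sel], theta_monopole[sel]
    first = ph[m < np.pi / 2].mean()     # theta^M in [0, pi/2)
    second = ph[m >= np.pi / 2].mean()   # theta^M in [pi/2, pi]
    return first, second                 # eq. (3.4): equal and opposite
```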
The result (3.4), combined with the results displayed in Table 1, raises two interesting questions:
1. How is the correlation between $`\theta ^{ph}`$ and $`\theta ^M`$, found above in (3.4), related to the remaining $`Z_2`$ global symmetry of $`𝒫(\theta )`$; and
2. Why is $`P_1`$ negative in the additive configurations of eq. (3.2)?
To shed some light on these issues, we begin by defining $`\overline{\theta }^{ph}(\theta ^M)`$ as the average value of $`\theta ^{ph}`$ at fixed $`\theta ^M`$, and then make the drastic approximation of neglecting all fluctuations of $`\theta ^{ph}`$ at fixed $`\theta ^M`$ around its mean value. This amounts to approximating the vev of any periodic function $`F(\theta )`$
$$<F>=\int _{-\pi }^\pi 𝑑\theta F(\theta )𝒫(\theta )$$
(3.5)
by
$$<F>=\int _{-\pi }^\pi 𝑑\theta ^M\frac{1}{2\pi }F(\theta ^M+\overline{\theta }^{ph}(\theta ^M))$$
(3.6)
where we have used the fact that the probability distribution for $`\theta ^M`$ is (nearly) uniform. The accuracy of this approximation depends, of course, on the width of the probability distribution for $`\theta ^{ph}`$ at fixed $`\theta ^M`$, and on the particular $`F(\theta )`$ considered. Here we are only concerned with certain qualitative aspects of phase angle probability distributions, and hopefully the neglect of fluctuations of $`\theta ^{ph}`$ around the mean will not severely mislead us.
With the help of the approximation (3.6), we can answer the two questions posed above. In this section we will only outline the argument, which is presented in full in an Appendix.
The function $`\overline{\theta }^{ph}(\theta ^M)`$ maps the variable $`\theta ^M\in [-\pi ,\pi ]`$, which has a uniform probability distribution in the interval, into the variable $`\overline{\theta }\in [-\pi ,\pi ]`$, where
$$\overline{\theta }=\theta ^M+\overline{\theta }^{ph}(\theta ^M)$$
(3.7)
The non-uniform mapping induces a non-uniform probability distribution for the $`\overline{\theta }`$-variable
$`𝒫(\overline{\theta })`$ $`=`$ $`{\displaystyle \frac{1}{2\pi }}{\displaystyle \frac{d\theta ^M}{d\overline{\theta }}}`$ (3.8)
$`=`$ $`{\displaystyle \frac{1}{2\pi }}\left(1-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)`$
which we identify with $`𝒫(\theta )`$ in the approximation (3.6). Since $`𝒫(\theta )`$ is peaked at $`\theta =\pm \frac{\pi }{2}`$, it follows that $`d\overline{\theta }^{ph}/d\overline{\theta }`$ is minimized at $`\overline{\theta }=\pm \frac{\pi }{2}`$.
Global $`Z_2`$ symmetry implies that $`P_q=0`$ for $`q=`$ odd. Then, from eq. (1.10) we have
$`𝒫(\pi -\overline{\theta })`$ $`=`$ $`𝒫(\overline{\theta })`$
$`𝒫(-\pi -\overline{\theta })`$ $`=`$ $`𝒫(\overline{\theta })`$
$`𝒫(-\overline{\theta })`$ $`=`$ $`𝒫(\overline{\theta })`$ (3.9)
From these relationships, eq. (3.8), and the fact (shown in the Appendix) that $`\overline{\theta }^{ph}(-\theta ^M)=-\overline{\theta }^{ph}(\theta ^M)`$, we find that
$$\overline{\theta }^{ph}[\overline{\theta }]\equiv \overline{\theta }^{ph}[\theta ^M(\overline{\theta })]$$
(3.10)
is an odd function with respect to reflections around $`\overline{\theta }=0,\pm \frac{\pi }{2}`$. Defining $`\overline{\theta }_I^{ph}`$ as the average $`\theta ^{ph}`$ in the quarter interval $`\theta ^M\in [0,\frac{\pi }{2}]`$, and $`\overline{\theta }_{II}^{ph}`$ as the average $`\theta ^{ph}`$ in the quarter-interval $`\theta ^M\in [\frac{\pi }{2},\pi ]`$, we have
$`\overline{\theta }_I^{ph}`$ $`=`$ $`{\displaystyle \frac{2}{\pi }}{\displaystyle \int _0^{\pi /2}}𝑑\theta ^M\overline{\theta }^{ph}(\theta ^M)`$ (3.11)
$`=`$ $`4{\displaystyle \int _0^{\pi /2}}𝑑\overline{\theta }𝒫(\overline{\theta })\overline{\theta }^{ph}[\overline{\theta }]`$
$`=`$ $`4{\displaystyle \int _{\pi /2}^\pi }𝑑\overline{\theta }𝒫(\pi -\overline{\theta })\overline{\theta }^{ph}[\pi -\overline{\theta }]`$
$`=`$ $`-4{\displaystyle \int _{\pi /2}^\pi }𝑑\overline{\theta }𝒫(\overline{\theta })\overline{\theta }^{ph}[\overline{\theta }]`$
$`=`$ $`-{\displaystyle \frac{2}{\pi }}{\displaystyle \int _{\pi /2}^\pi }𝑑\theta ^M\overline{\theta }^{ph}(\theta ^M)`$
$`=`$ $`-\overline{\theta }_{II}^{ph}`$
which explains, as a consequence of global $`Z_2`$ symmetry, the equal magnitudes and opposite signs found in eq. (3.4). This answers the first of the two questions posed above.
For expectation values of Polyakov phase angles in the additive configuration, we have
$$<F>=\int _{-\pi }^\pi 𝑑\theta ^{\prime }F(\theta ^{\prime })𝒫^{\prime }(\theta ^{\prime })$$
(3.12)
where $`𝒫^{}(\theta ^{})`$ is the probability distribution for the Polyakov angles of the additive configuration $`\theta ^{}=\theta +\theta ^M`$. Again neglecting fluctuations of $`\theta ^{ph}`$ around the mean $`\overline{\theta }^{ph}(\theta ^M)`$, and changing variables to $`\overline{\theta }^{}=\overline{\theta }+\theta ^M`$ we have
$`<F>`$ $`=`$ $`{\displaystyle \int _{-\pi }^\pi }𝑑\theta ^M{\displaystyle \frac{1}{2\pi }}F(2\theta ^M+\overline{\theta }^{ph}(\theta ^M))`$ (3.13)
$`=`$ $`{\displaystyle \int _{-2\pi }^{2\pi }}𝑑\overline{\theta }^{\prime }{\displaystyle \frac{1}{2\pi }}{\displaystyle \frac{d\theta ^M}{d\overline{\theta }^{\prime }}}F(\overline{\theta }^{\prime })`$
$`=`$ $`{\displaystyle \int _{-\pi }^\pi }𝑑\overline{\theta }^{\prime }{\displaystyle \frac{1}{\pi }}{\displaystyle \frac{d\theta ^M}{d\overline{\theta }^{\prime }}}F(\overline{\theta }^{\prime })`$
where the $`2\pi `$-periodicity of the integrand was used in the last step. Then the induced probability distribution in the $`\overline{\theta }^{}`$ variable is
$`𝒫(\overline{\theta }^{\prime })`$ $`=`$ $`{\displaystyle \frac{1}{\pi }}{\displaystyle \frac{d\theta ^M}{d\overline{\theta }^{\prime }}}`$ (3.14)
$`=`$ $`{\displaystyle \frac{1}{\pi }}\left(1-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)\left(2-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)^{-1}`$
As shown in the Appendix, $`\overline{\theta }=\pm \frac{\pi }{2}`$ corresponds to $`\overline{\theta }^{}=\pm \pi `$, and the assumed single-valuedness of $`\theta ^M(\overline{\theta })`$ requires $`d\overline{\theta }^{ph}/d\overline{\theta }<1`$. In that case, since $`d\overline{\theta }^{ph}/d\overline{\theta }`$ is minimized at $`\overline{\theta }=\pm \frac{\pi }{2}`$, it follows that $`𝒫(\overline{\theta }^{})`$ has a peak at $`\overline{\theta }^{}=\pm \pi `$. Given that the $`\theta ^{}`$ distribution is peaked at $`\pm \pi `$, as in Fig. 15, the $`n=1`$ coefficient in the cosine series expansion of this distribution (which by definition is $`P_1`$) is evidently negative, answering the second of the two questions posed below (3.4). Confinement is lost because the $`Z_2`$ symmetry of the $`\theta `$ distribution has been broken.
The correlation that exists between the monopole and photon contributions in abelian-projected SU(2) gauge theory implies that these contributions actually do *not* factorize in Polyakov lines and Wilson loops, in contrast to the factorization which occurs in compact QED in the Villain formulation. In fact, the terminology “photon contribution” used to describe $`\theta ^{ph}`$ is really a little misleading. The field $`\theta _\mu ^{ph}(x)`$ is best described as simply the difference $`\theta _\mu (x)\theta _\mu ^M(x)`$ between the abelian angle field and the MD angle field. It is not correct to view $`\theta _\mu ^{ph}`$ as a purely perturbative contribution, since the correlation that exists between $`\theta _\mu ^{ph}`$ and $`\theta _\mu ^M`$, which breaks U(1) down to an exact $`Z_2`$ remnant symmetry, clearly has a non-perturbative origin.
Finally, in Figs. 16-18, we show some histograms for the actual probability distributions $`𝒫(\theta ),𝒫(\theta ^M),𝒫(\theta ^{ph})`$, obtained on a $`3\times 12^3`$ lattice at $`\beta =2.1`$. $`𝒫(\theta )`$ and $`𝒫(\theta ^M)`$ are shown on the $`[0,\pi ]`$ half-interval, while $`𝒫(\theta ^{ph})`$ is displayed on the full $`[\pi ,\pi ]`$ interval. The height of the histogram is the probability for $`|\theta |,|\theta ^M|,\theta ^{ph}`$ to fall in each interval. It is clear that these numerical results agree with the conjectured behavior in Figs. 12-14.
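Such histograms, and the cosine moments $`P_q`$ themselves, are simple to extract from the sampled phases; a sketch under the same assumptions as the earlier snippets:

```python
import numpy as np

def phase_distribution(phases, nbins=30):
    # Estimate P(theta) from sampled Polyakov phases, as in Figs. 16-18
    hist, edges = np.histogram(phases, bins=nbins,
                               range=(-np.pi, np.pi), density=True)
    return hist, edges

def cosine_coefficient(phases, q):
    # The q-th cosine moment of the phase distribution; for q=1 this is
    # the coefficient P_1 discussed in the text.
    return np.cos(q * phases).mean()
```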
## 4 Field Collimation
Although the finite VEV of $`q=2`$ Polyakov lines is a crucial test, it is also useful to ask whether the collimation of confining field strength into vortex tubes can be seen more directly on the lattice.
In the most naive version of the monopole Coulomb gas, the monopole field is imagined to be distributed symmetrically, modulo some small quantum fluctuations, around a static monopole. In this section we will find that the field around the position of an abelian monopole, as probed by SU(2)-invariant Wilson loops, is in fact highly asymmetric, and is very strongly correlated with the direction of the center vortex passing through the monopole position. Some of these results, for unit cubes around monopoles, have been reported previously in ref. , but are included here for completeness. The results for 3- and 4-cubes around static monopoles are new.
To circumvent the Gribov copy issue, we work in the “indirect” maximal center gauge introduced in ref. , and locate monopole and vortex positions by projections (abelian and center, respectively) of the same gauge-fixed configuration. Indirect maximal center gauge is a partial fixing of the U(1) gauge symmetry remaining in maximal abelian gauge, so as to maximize the squared trace of abelian links. The residual gauge symmetry is $`Z_2`$. The excitations of the center projected lattice are termed “P-vortices,” and have been found to lie near the middle of thick center vortices on the unprojected lattice (cf. ).
### 4.1 Monopole-Antimonopole Alternation
According to the argument depicted in Figs. 4-4, at any fixed time the monopoles found in abelian projection should lie along vortex lines, with monopoles alternating with antimonopoles along the line. To test this argument, we consider static monopoles (associated with timelike monopole currents) on each constant time volume of the lattice. Each monopole is associated with a net $`\pm 2\pi `$ magnetic flux through a unit cube. In numerical simulations performed at $`\beta =2.4`$, we find that almost every cube, associated with a static monopole, is pierced by a single P-vortex line. Only very small fractions are either not pierced at all, or are pierced by more than one line, with percentages shown in Fig. 19.
P-vortices are line-like objects on any given time slice of the lattice.<sup>4</sup><sup>4</sup>4It should be noted that vortices are surface-like objects in D=4 dimensions, so different closed loops on a given time slice may belong to the same P-vortex surface. About 61% of these vortex lines have no monopoles at all on them. We find that 31% contain a monopole-antimonopole pair. The remaining 8% of closed vortex lines have an even number of monopoles + antimonopoles, with monopoles alternating with antimonopoles as one traces a path along the loop. This is exactly the situation sketched in Fig. 1. Exceptions to the monopole-antimonopole alternation rule were found in only 1.2% of loops containing monopoles. In every exceptional case, a monopole or antimonopole was found within one lattice unit of the P-vortex line which, if counted as lying along the vortex line, would restore the alternation.
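The bookkeeping behind Fig. 19 can be sketched as follows (our own layout conventions for the center-projected plaquettes, not code from the paper):

```python
import numpy as np

def pierced_faces(z_plaq, cube):
    # Count P-vortex piercings of the unit cube at spatial site `cube`
    # on a fixed time slice. z_plaq[k][x, y, z] = +-1 is assumed to be
    # the center-projected plaquette in the plane orthogonal to spatial
    # direction k; a face is pierced when that plaquette equals -1.
    count = 0
    dims = z_plaq[0].shape
    for k in range(3):             # normal direction of the face
        for shift in (0, 1):       # the two opposite faces of the cube
            site = list(cube)
            site[k] = (site[k] + shift) % dims[k]
            if z_plaq[k][tuple(site)] == -1:
                count += 1
    return count                   # 2 for a single vortex line
```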
### 4.2 Field Collimation on 1-cubes
We define vortex limited Wilson loops $`W_n(C)`$ as the expectation value of Wilson loops on the full, unprojected lattice, subject to the constraint that, on the projected lattice, exactly $`n`$ P-vortices pierce the minimal area of the loop (cf. ). We employ these gauge-invariant loop observables to probe the (a)symmetry of the color field around static monopoles, again at $`\beta =2.4`$.
Consider first, on a fixed time hypersurface, the set of all unit cubes which contain one static monopole, inside a cube pierced by a single P-vortex line. This means that two plaquettes on the cube are pierced by the vortex line, and four are not. The difference $`S`$ between the average plaquette $`S_0`$ on the lattice, and the plaquette on pierced/unpierced plaquettes of the monopole cube
$$S=S_0-<\frac{1}{2}\text{Tr}[UUU^{\dagger }U^{\dagger }]_{\text{cube face}}>$$
(4.1)
is shown in Fig. 20. For comparison, we have computed the same quantities in unit cubes, pierced by vortices, which do not contain any monopole current.
It is obvious that the excess plaquette action associated with a monopole is extremely asymmetric, and almost all of it is concentrated in the P-vortex direction. Moreover, the action distribution around a monopole cube is not very different from the distribution on a cube pierced by a vortex, with no monopole at all inside. The two distributions are even more similar, if we make the additional restriction to “isolated” static monopoles; i.e. monopoles with no nearest-neighbor monopole currents. The excess action distribution for isolated monopoles, again compared to zero-monopole one-vortex cubes, is shown in Fig. 21.
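In code, the gauge-invariant ingredient of (4.1) is just the half-trace of the unprojected plaquette; a minimal sketch, assuming the links are stored as $`2\times 2`$ complex SU(2) matrices:

```python
import numpy as np

def half_trace_plaquette(U_a, U_b, U_c, U_d):
    # (1/2) Tr[U U U^dagger U^dagger] around one cube face, as in eq. (4.1)
    P = U_a @ U_b @ U_c.conj().T @ U_d.conj().T
    return 0.5 * np.trace(P).real

# S of eq. (4.1) is then S_0 minus the average of this quantity,
# accumulated separately over pierced and unpierced faces of
# monopole / no-monopole cubes.
```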
### 4.3 Field Collimation on Larger Cubes
Finally we consider cubes which are $`N`$ lattice spacings on a side, in fixed-time hypersurfaces, having two faces pierced by a single P-vortex line and the other four faces unpierced. Again we restrict our attention to cubes containing either a single static monopole, or no monopole current. Each side of the cube is bounded by an $`N\times N`$ loop. Let
$`W_1^M(N,N)`$ $`\equiv `$ 1-vortex loops, bounding a monopole N-cube
$`W_0^M(N,N)`$ $`\equiv `$ 0-vortex loops, bounding a monopole N-cube
$`W_1^0(N,N)`$ $`\equiv `$ 1-vortex loops, bounding a 0-monopole N-cube
$`W_0^0(N,N)`$ $`\equiv `$ 0-vortex loops, bounding a 0-monopole N-cube (4.2)
denote the expectation value of $`N\times N`$ Wilson loops on 0/1-vortex faces of 0-monopole/1-monopole N-cubes. As a probe of the distribution of gauge-invariant flux around an N-cube, we compute the fractional deviation of these loops from $`W_0^0(N,N)`$ (which has the largest value) by
$`A_{0,1}^M={\displaystyle \frac{W_0^0(N,N)-W_{0,1}^M(N,N)}{W_0^0(N,N)}}`$
$`A_{0,1}^0={\displaystyle \frac{W_0^0(N,N)-W_{0,1}^0(N,N)}{W_0^0(N,N)}}`$ (4.3)
and of course $`A_0^0=0`$ by definition.
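Given measured loop values for the four classes, eq. (4.3) is a one-line normalization (a sketch; the dictionary keys are our own convention):

```python
def fractional_deviations(W):
    # W maps (n_vortex, has_monopole) -> measured N x N loop value,
    # e.g. W[(0, False)] = W_0^0(N, N). Eq. (4.3) normalizes each class
    # to the 0-vortex, 0-monopole loop, which has the largest value.
    ref = W[(0, False)]
    return {key: (ref - val) / ref for key, val in W.items()}
```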
The results, for $`N=3`$ cubes and $`N=4`$ cubes, are displayed in Figs. 22-23, with the actual values for the various loop types listed in Table 2. As with the excess-action distribution around a 1-cube, shown in Figs. 20-21, it is clear that gauge-invariant Wilson loop values are distributed very asymmetrically around a cube, and are strongly correlated with the direction of the P-vortex line. The presence or absence of a monopole inside the $`N`$-cube appears to have only a rather weak effect on the value of the loops around each side of the cube; the main variation is clearly due to the presence or absence of a vortex line piercing the side. Obviously, the strong correlation of loop values with vortex lines, and the relatively weak correlation of loop values with monopole current, fits the general picture discussed in the introduction, of confining flux collimated into tubelike structures.
It seems clear that the gauge field-strength distribution around monopoles identified in maximal abelian gauge is highly asymmetric, and closely correlated to P-vortex lines, as one would naively guess from the picture shown in Fig. 1. This fact, by itself, doesn’t prove the vortex theory. One must admit the possibility that if the monopole positions were somehow held fixed, fluctuations in the direction of vortex lines passing through those monopoles might restore (on average) a Coulombic distribution. We therefore regard the findings in this section as fulfilling a necessary, rather than a sufficient, condition for the vortex theory to be correct. According to the vortex theory, confining fields are collimated along vortex lines, and this collimation should be visible on the lattice in some way, e.g. showing up in spatially asymmetric distributions of Wilson loop values. This strong asymmetry is, in fact, what we find on the lattice.
## 5 The Seiberg-Witten confinement scenario and $`𝒁_\mathrm{𝟐}`$-fluxes
The beautiful description of confinement by monopoles when $`N=2`$ supersymmetric Yang-Mills theory is softly broken to $`N=1`$ supersymmetric Yang-Mills theory has enhanced the impression that the mechanism for confinement in pure Yang-Mills theory is monopole condensation. Above we have provided evidence that it is in fact $`Z_2`$ fluctuations which are responsible for confinement in Yang-Mills theory even on abelian-projected lattices; see refs. for the evidence on the full, unprojected lattices. As regards the $`N=2`$ supersymmetric Yang-Mills theory softly broken to $`N=1`$, we would simply argue that the approximate low-energy effective theory of Seiberg and Witten is not able to describe all aspects of confinement at sufficiently large distance scales. In particular, the low-energy effective theory cannot explain the perimeter law of double-charged Wilson loops.
It may be useful here to make a distinction between the full effective action, obtained by integrating out all massive fields, and the “low-energy” effective action, which neglects all non-local or higher-derivative terms in the full effective action. The Seiberg-Witten calculation was aimed at determining the low-energy effective action of the softly broken $`N=2`$ theory. However, in a confining theory, non-local terms induced by massive fields can have important effects at long distances.
The relation between the Seiberg-Witten theory and pure Yang-Mills theory in four dimensions has many similarities to the relation between the Georgi-Glashow model and pure Yang-Mills theory in three dimensions.<sup>5</sup><sup>5</sup>5This analogy has been used to explain the measured (pseudo)-critical exponents in four-dimensional lattice $`U(1)`$-theory from the properties of $`N=2`$ supersymmetric Yang-Mills theory and its possible symmetry breakings to $`N=1`$ and $`N=0`$ . In both theories the presence of a Higgs field is of utmost importance for the existence of monopoles. In the Georgi-Glashow model it is the existence of a monopole condensate which is responsible for the mass of the (dual) photon and for confinement of the smallest unit of electric $`U(1)`$ charge. However, as emphasized in , the effective low energy Coulomb gas picture of monopoles does not explain the fact that double-charged Wilson loops follow a perimeter law rather than an area law. The reason is that in a confining theory there is no equivalence between low-energy and long-distance physics. At sufficiently long distance it will always be energetically favorable to excite charged massive $`W`$-fields and screen $`q=`$ even external charges, thus preventing a genuine string tension between such charges. At these large scales, a description in terms of U(1)-disordering configurations (the monopole Coulomb gas) breaks down, and a description in terms of $`Z_2`$ disorder must take over. As shown in ref. , the range of validity of the monopole Coulomb gas picture decreases as the mass of the $`W`$-field decreases, and, in the limit of an unbroken $`SU(2)`$ symmetry, confinement can only be described adequately in terms of $`Z_2`$ fluctuations.
In the Seiberg-Witten theory, $`N=2`$ supersymmetry ensures that the Higgs vacuum is parameterized by an order parameter $`u=\mathrm{Tr}\varphi ^2`$ corresponding to the breaking of $`SU(2)`$ to $`U(1)`$. For large values of $`u`$ we have a standard scenario: at energy scales $`\mu \gg \sqrt{u}`$ all field theoretical degrees of freedom contribute to the $`\beta `$-function, which corresponds to the asymptotically free theory. For energies lower than $`\sqrt{u}`$ only the $`U(1)`$ part of the theory is effective. In these considerations the dynamical confinement scale
$$\mathrm{\Lambda }_{N=2}^4=\mu ^4\mathrm{exp}(-8\pi ^2/g(\mu )^2)$$
obtained by the one-loop perturbative calculation plays no role. The remarkable observation by Seiberg and Witten was that even when $`u\sim \mathrm{\Lambda }_{N=2}^2`$, where one would naively expect that non-Abelian dynamics was important, the system remains in the $`U(1)`$ Coulomb phase due to supersymmetric cancellations of non-Abelian quantum fluctuations. As $`u`$ decreases the effective electric charge associated with the unbroken $`U(1)`$ part of $`SU(2)`$ increases, while the masses of the solitonic excitations which are present in the theory will decrease. Dictated by monodromy properties of the so-called prepotential of the effective low energy Lagrangian, the monopoles become massless at a point $`u\sim \mathrm{\Lambda }_{N=2}^2`$ where the effective electric charge has an infrared Landau pole and diverges. However, in the neighborhood of $`u\sim \mathrm{\Lambda }_{N=2}^2`$, this strongly coupled theory has an effective Lagrangian description as a weakly coupled theory when expressed in terms of dual variables, namely a monopole hyper-multiplet and a dual photon vector multiplet. The perturbative coupling constant is now $`g_D=4\pi /g`$ and the point where monopoles condense corresponds to $`g_D=0`$.
A remarkable observation of Seiberg and Witten is that the breaking of $`N=2`$ to $`N=1`$ supersymmetry by adding a mass term superpotential will generate a mass gap, originating from a condensation of the monopoles. By the dual Meissner effect this theory confines the electric $`U(1)`$ charge at distances larger than the inverse $`N=2`$ symmetry breaking scale. In terms of the underlying microscopic theory it is believed that the reduction of symmetry from $`N=2`$ to $`N=1`$ allows excitations closer to generic non-supersymmetric “confinement excitations”, but that the soft breaking ensures that the theory is still close enough to the $`N=2`$ theory to remain an effective $`U(1)`$ theory. Thus we see that in the Seiberg-Witten scenario we can, by introducing the mass term superpotential, describe a $`U(1)`$ confining-deconfining transition from the $`N=2`$ Coulomb phase to the $`N=1`$ confining phase<sup>6</sup><sup>6</sup>6The breaking down to $`N=0`$ has been analyzed in a number of papers , where soft breaking via spurion fields of $`N=1`$ and $`N=2`$ supersymmetric gauge theories is discussed (see also ). These models are somewhat closer to realistic models for $`QCD`$ confinement, but the conclusions are, from our perspective, the same as for the original Seiberg-Witten model, so we will not discuss these models any further..
But precisely as for the Georgi-Glashow model, the monopole condensate picture for the $`N=1`$ confining theory is incomplete in the sense that it cannot describe the obvious fact that double-charged Wilson loops will have a perimeter law rather than an area law. Clearly, $`q=`$ even external charges can be screened by the massive charged $`W`$-fields in the softly broken $`N=2`$ supersymmetric Yang-Mills theory, a fact which has profound implications for the large-scale structure of confining fluctuations. But neither the non-local effects of the $`W`$-fields, nor the $`W`$-fields themselves, appear in the low energy effective action, a fact which illustrates once more that long distance physics is not captured by the (local) low energy effective action in a confining theory.
## 6 Discussion
A point which was stressed both in ref. and in the last section (see also ), and which is surely relevant to the results reported here, is that charged fields in a confining theory can have a profound effect on the far-infrared structure of the theory, even if those fields are very massive. As an obvious example, consider integrating out the quark fields in QCD, to obtain an effective pure gauge theory. This effective pure gauge theory does not produce an asymptotic area law falloff for Wilson loops, which means that confining field configurations are somehow suppressed at large distances. A second example is the Georgi-Glashow model in D=3 dimensions ($`GG_3`$), as discussed in ref. . In this case the W-bosons are massive, and if their effects at large scales are simply ignored, then the model would be essentially equivalent to the theory of photons and monopoles, i.e. a monopole Coulomb gas, analyzed many years ago by Polyakov . In the monopole Coulomb gas, all multiples of the elementary electric charge are confined; but this is not what actually happens in the Georgi-Glashow model. The reason is that W-bosons are capable of screening even multiples of electric charge, which means that even-charge Wilson loops fall only with a perimeter law, and even-charge Polyakov lines have finite vacuum expectation values in the confined phase. If we again imagine integrating out the W and Higgs fields, then the effective abelian theory confines only odd multiples of charge, the global symmetry is $`Z_2`$, rather than U(1), and the theory is clearly not equivalent to either a monopole Coulomb gas, or to compact $`QED_3`$. If one asks: how can the effective long-range theory, which involves only the photon field, be anything different from compact $`QED_3`$, the answer is that the integration over W and Higgs fields produces non-local terms in the effective action. We note, once again, that charged fields in a confining theory have very long-range effects. The fact that these fields are massive does not imply that they can only lead, in the effective abelian action, to local terms, or that the non-local terms can be neglected at large scales. These remarks also apply to the Seiberg-Witten model, as discussed in the last section.
In this article we have concentrated largely on a third example: abelian-projected Yang-Mills theory in maximal abelian gauge. Calculations on the abelian-projected lattice can always be regarded as being performed in an effective abelian theory, obtained by integrating out the off-diagonal gluon fields (and ghosts) in the given gauge. It is often argued that the off-diagonal gluon fields are massive, and therefore do not greatly affect the long-range structure of the theory. The long-range structure, according to that view, is dominated exclusively by the diagonal gauge fields (the “photons”) and the corresponding abelian monopoles, which together are equivalent to a Coulomb gas of monopoles (D=3) or monopole loops (D=4). Then, since only abelian fields are involved, the global symmetry of the effective long-range theory is expected to be U(1), and all multiples of abelian charge are confined. We have seen that reasoning of this sort, which neglects the long-distance effects of massive charged fields, can lead to erroneous conclusions. In fact, we have found that on the abelian projected lattice:
* Confinement of all multiples of abelian charge does *not* occur on the abelian-projected lattice; charge $`q=2`$ Polyakov lines have a non-zero VEV.
* As a result, the global symmetry of the abelian-projected lattice is at most $`Z_2`$, rather than U(1).
* Monopole dominance breaks down rather decisively, at least when applied to charge $`q=2`$ operators.
* The distribution of Wilson loop values is highly asymmetric on an N-cube. There is a very strong correlation between loop values and the P-vortex direction, but only a rather weak correlation with the presence or absence of a static monopole in the N-cube.
In addition, in the usual maximal abelian gauge, there is a breakdown of positivity, which is surely due to the absence of a transfer matrix in this gauge. The loss of positivity can be avoided (at the cost of rotation invariance) by going to a spacelike maximal abelian gauge, where we again find $`q=2`$ string-breaking and deconfinement. The picture of a U(1)-symmetric monopole Coulomb gas or dual superconductor, confining all multiples of the elementary abelian charge, is clearly not an adequate description of the abelian-projected theory at large distance scales. On the other hand, the results reported here fit quite naturally into the vortex picture, where confining magnetic flux on the projected lattice is collimated in units of $`\pm \pi `$.
The center vortex theory has a number of well-known (and gauge-invariant) virtues. In particular, the vortex mechanism is the natural way to understand, in terms of vacuum gauge-field configurations, the screening of color charges in zero N-ality representations, as well as the loss of $`Z_N`$ global symmetry in the deconfinement phase transition . Center vortex structure is visible on unprojected lattices, through the correlation of P-vortex location with gauge-invariant observables . The evidence we have reported here, indicating vortex structure on large scales even on abelian-projected lattices, increases our confidence that center vortices are essential to the mechanism of quark confinement.
Acknowledgements
J.Gr. is happy to acknowledge the hospitality of the theory group at Lawrence Berkeley National Laboratory, where some of this work was carried out. J.Gr.’s research is supported in part by the U.S. Department of Energy under Grant No. DE-FG03-92ER40711.
## Appendix A Appendix
In this Appendix we present the detailed argument, outlined in section 3, that $`\overline{\theta }_I^{ph}=-\overline{\theta }_{II}^{ph}`$ and $`P_1<0`$ in the additive $`\theta ^{}=\theta +\theta ^M`$ configurations.
The approximation used here is to ignore, at fixed $`\theta ^M`$, the fluctuations of $`\theta ^{ph}`$ around the mean value $`\overline{\theta }^{ph}(\theta ^M)`$; i.e. the vev of any periodic function $`F(\theta )`$ of the Polyakov phase $`\theta `$
$$<F>=\int _{-\pi }^\pi 𝑑\theta F(\theta )𝒫(\theta )$$
(A.1)
is approximated by
$$<F>=\int _{-\pi }^\pi 𝑑\theta ^M\frac{1}{2\pi }F(\theta ^M+\overline{\theta }^{ph}(\theta ^M))$$
(A.2)
where the factor of $`1/2\pi `$ corresponds to the uniform probability distribution for $`\theta ^M`$. The mean value $`\overline{\theta }^{ph}(\theta ^M)`$ is defined as
$`\overline{\theta }^{ph}(\theta ^M)`$ $`=`$ $`{\displaystyle \frac{1}{Z_{\theta ^M}}}{\displaystyle \int D\theta _\mu (x)\mathrm{arg}\left(P_1(\stackrel{}{x})e^{-i\theta ^M}\right)\delta [P_{M1}(\stackrel{}{x}),e^{i\theta ^M}]e^{-S_{eff}}}`$
$`Z_{\theta ^M}`$ $`=`$ $`{\displaystyle \int D\theta _\mu (x)\delta [P_{M1}(\stackrel{}{x}),e^{i\theta ^M}]e^{-S_{eff}}}`$ (A.3)
where
$`P_1(\stackrel{}{x})`$ $`=`$ $`{\displaystyle \underset{n=1}{\overset{N_T}{\prod }}}\mathrm{exp}[i\theta _4(\stackrel{}{x}+n\widehat{4})]`$
$`P_{M1}(\stackrel{}{x})`$ $`=`$ $`{\displaystyle \underset{n=1}{\overset{N_T}{\prod }}}\mathrm{exp}[i\theta _4^M(\stackrel{}{x}+n\widehat{4})]`$ (A.4)
are Polyakov lines in the abelian and MD lattices, and $`S_{eff}`$ is the effective abelian action, obtained after integrating out all off-diagonal gluons and ghost fields. Due to translation invariance, $`\overline{\theta }^{ph}(\theta ^M)`$ does not depend on the particular spatial position $`\stackrel{}{x}`$ chosen in (A.3).
From its definition, $`\overline{\theta }^{ph}(\theta ^M)`$ is obviously periodic w.r.t. $`\theta ^M\rightarrow \theta ^M+2\pi `$. It is also an odd function of $`\theta ^M`$, i.e.
$$\overline{\theta }^{ph}(-\theta ^M)=-\overline{\theta }^{ph}(\theta ^M)$$
(A.5)
This is derived by first noting that the $`\theta _\mu ^M(x)`$ link angles are functions of the $`\theta _\mu (x)`$ link angles according to eqs. (2.5)-(2.7), and that $`\theta _\mu ^M(x)\rightarrow -\theta _\mu ^M(x)`$ under the transformation $`\theta _\mu (x)\rightarrow -\theta _\mu (x)`$. Then, making the change of variables $`\theta _\mu (x)\rightarrow -\theta _\mu (x)`$ in the integral (A.3), we have
$`Z_{\theta ^M}`$ $`=`$ $`{\displaystyle \int D\theta _\mu (x)\delta [P_{M1}^{\ast }(\stackrel{}{x}),e^{i\theta ^M}]e^{-S_{eff}}}`$ (A.6)
$`=`$ $`{\displaystyle \int D\theta _\mu (x)\delta [P_{M1}(\stackrel{}{x}),e^{-i\theta ^M}]e^{-S_{eff}}}`$
$`=`$ $`Z_{-\theta ^M}`$
and
$`\overline{\theta }^{ph}(\theta ^M)`$ $`=`$ $`{\displaystyle \frac{1}{Z_{\theta ^M}}}{\displaystyle \int D\theta _\mu (x)\text{arg}\left(P_1^{\ast }(x)e^{-i\theta ^M}\right)\delta [P_{M1}^{\ast }(x),e^{i\theta ^M}]e^{-S_{eff}}}`$ (A.7)
$`=`$ $`{\displaystyle \frac{1}{Z_{-\theta ^M}}}{\displaystyle \int D\theta _\mu (x)(-1)\times \text{arg}\left(P_1(x)e^{i\theta ^M}\right)\delta [P_{M1}(x),e^{-i\theta ^M}]e^{-S_{eff}}}`$
$`=`$ $`-\overline{\theta }^{ph}(-\theta ^M)`$
The fact that $`\overline{\theta }^{ph}(\theta ^M)`$ is an odd function of $`\theta ^M`$, combined with $`2\pi `$-periodicity, gives us
$$\overline{\theta }^{ph}(\pi )=\overline{\theta }^{ph}(-\pi )=\overline{\theta }^{ph}(0)=0$$
(A.8)
We now define the variable
$$\overline{\theta }(\theta ^M)\equiv \theta ^M+\overline{\theta }^{ph}(\theta ^M)$$
(A.9)
which is the average Polyakov phase at fixed $`\theta ^M`$. It will be assumed that $`\overline{\theta }(\theta ^M)`$ is a single-valued function of $`\theta ^M`$. Eq. (A.9) can then be inverted to define $`\theta ^M`$ implicitly as a function of $`\overline{\theta }`$
$$\theta ^M(\overline{\theta })=\overline{\theta }-\overline{\theta }^{ph}[\theta ^M(\overline{\theta })]$$
(A.10)
and it will be convenient to introduce the notation
$$\overline{\theta }^{ph}[\overline{\theta }]\equiv \overline{\theta }^{ph}[\theta ^M(\overline{\theta })]$$
(A.11)
Applying the change of variable (A.9) to eq. (A.2), we have
$$<F>=\int _{-\pi }^\pi 𝑑\overline{\theta }\frac{1}{2\pi }\frac{d\theta ^M}{d\overline{\theta }}F(\overline{\theta })$$
(A.12)
where, from eqs. (A.8) and (A.9), we see that the limits of integration are unchanged. Comparing (A.12) to (A.1), the Polyakov phase probability distribution $`𝒫(\theta )`$ can be identified with
$`𝒫(\overline{\theta })`$ $`=`$ $`{\displaystyle \frac{1}{2\pi }}{\displaystyle \frac{d\theta ^M}{d\overline{\theta }}}`$ (A.13)
$`=`$ $`{\displaystyle \frac{1}{2\pi }}\left(1-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)`$
in the approximation (A.2).
Assuming $`Z_2`$ symmetry in the confined phase, we have from eq. (1.10)
$$𝒫(\overline{\theta })=\frac{1}{2\pi }\left(1+2\underset{q=\text{even}}{\sum }P_q\mathrm{cos}(q\overline{\theta })\right)$$
(A.14)
which means that $`𝒫(\overline{\theta })`$ is even w.r.t. reflections around $`\overline{\theta }=0,\pm \frac{\pi }{2}`$; i.e.
$`𝒫(\pi -\overline{\theta })`$ $`=`$ $`𝒫(\overline{\theta })`$
$`𝒫(-\pi -\overline{\theta })`$ $`=`$ $`𝒫(\overline{\theta })`$
$`𝒫(-\overline{\theta })`$ $`=`$ $`𝒫(\overline{\theta })`$ (A.15)
Comparing eq. (A.15) with (A.13), we find that $`d\overline{\theta }^{ph}/d\overline{\theta }`$ is also even under reflections around $`\overline{\theta }=0,\pm \frac{\pi }{2}`$. Since the derivative of an odd function is an even function, this means that
$$\overline{\theta }^{ph}[\overline{\theta }]=a+\varphi (\overline{\theta })$$
(A.16)
where $`\varphi (\overline{\theta })`$ is odd under reflections around $`\overline{\theta }=0`$, and $`a`$ is a constant. However, since
$$\overline{\theta }(\theta ^M=0)=\overline{\theta }^{ph}(0)=0$$
(A.17)
it follows that $`\overline{\theta }^{ph}[\overline{\theta }=0]=\overline{\theta }^{ph}(\theta ^M=0)=0`$. Then $`a=0`$, and $`\overline{\theta }^{ph}[\overline{\theta }]`$ is odd around $`\overline{\theta }=0`$. Further, from (A.8), (A.9), and the assumed single-valuedness of $`\overline{\theta }(\theta ^M)`$, it follows that $`\theta ^M(\overline{\theta }=\pm \pi )=\pm \pi `$, and therefore that
$$\overline{\theta }^{ph}[\pm \pi ]=0$$
(A.18)
Then, since $`\overline{\theta }^{ph}[0]=\overline{\theta }^{ph}[\pi ]=0`$, and $`d\overline{\theta }^{ph}/d\overline{\theta }`$ is even w.r.t. reflections around $`\frac{\pi }{2}`$, it follows that $`\overline{\theta }^{ph}[\frac{\pi }{2}]=0`$, and that $`\overline{\theta }^{ph}[\overline{\theta }]`$ is odd w.r.t. reflections around $`\frac{\pi }{2}`$. By the same reasoning, $`\overline{\theta }^{ph}[\overline{\theta }]`$ is also odd w.r.t. reflections around $`-\frac{\pi }{2}`$. To summarize, $`\overline{\theta }^{ph}[\overline{\theta }]`$ has the reflection properties:
$`\overline{\theta }^{ph}[-\overline{\theta }]`$ $`=`$ $`-\overline{\theta }^{ph}[\overline{\theta }]`$
$`\overline{\theta }^{ph}[\pi -\overline{\theta }]`$ $`=`$ $`-\overline{\theta }^{ph}[\overline{\theta }]`$
$`\overline{\theta }^{ph}[-\pi -\overline{\theta }]`$ $`=`$ $`-\overline{\theta }^{ph}[\overline{\theta }]`$ (A.19)
where the last two relationships are a consequence of global $`Z_2`$ symmetry in the confined phase. Therefore
$`\overline{\theta }_I^{ph}`$ $`=`$ $`{\displaystyle \frac{2}{\pi }}{\displaystyle \int _0^{\pi /2}}𝑑\theta ^M\overline{\theta }^{ph}(\theta ^M)`$ (A.20)
$`=`$ $`4{\displaystyle \int _0^{\pi /2}}𝑑\overline{\theta }𝒫(\overline{\theta })\overline{\theta }^{ph}[\overline{\theta }]`$
$`=`$ $`4{\displaystyle \int _{\pi /2}^\pi }𝑑\overline{\theta }𝒫(\pi -\overline{\theta })\overline{\theta }^{ph}[\pi -\overline{\theta }]`$
$`=`$ $`-4{\displaystyle \int _{\pi /2}^\pi }𝑑\overline{\theta }𝒫(\overline{\theta })\overline{\theta }^{ph}[\overline{\theta }]`$
$`=`$ $`-{\displaystyle \frac{2}{\pi }}{\displaystyle \int _{\pi /2}^\pi }𝑑\theta ^M\overline{\theta }^{ph}(\theta ^M)`$
$`=`$ $`-\overline{\theta }_{II}^{ph}`$
This explains why the correlation between $`\theta ^{ph}`$ and $`\theta ^M`$ found numerically in eq. (3.4) is a consequence of global $`Z_2`$ invariance.
Our second task is to understand why $`P_1`$ is negative in the $`\theta ^{}=\theta +\theta ^M`$ additive configuration, given that $`𝒫(\theta )`$ is peaked around $`\theta =\pm \frac{\pi }{2}`$ as discussed in section 3. Introducing the probability distribution $`𝒫^{}(\theta ^{})`$ for the Polyakov phases in the additive configurations
$$<F(\overline{\theta }^{\prime })>=\int _{-\pi }^\pi 𝑑\overline{\theta }^{\prime }F(\overline{\theta }^{\prime })𝒫^{\prime }(\overline{\theta }^{\prime })$$
(A.21)
and again neglecting the fluctuations of $`\theta ^{ph}`$ at fixed $`\theta ^M`$ around the mean $`\overline{\theta }^{ph}(\theta ^M)`$,
$$<F>=\int _{-\pi }^\pi 𝑑\theta ^M\frac{1}{2\pi }F(2\theta ^M+\overline{\theta }^{ph}(\theta ^M))$$
(A.22)
Under the change of variables
$$\overline{\theta }^{}=2\theta ^M+\overline{\theta }^{ph}(\theta ^M)$$
(A.23)
eq. (A.22) becomes
$$<F>=\int _{-2\pi }^{2\pi }𝑑\overline{\theta }^{\prime }\frac{1}{2\pi }\frac{d\theta ^M}{d\overline{\theta }^{\prime }}F(\overline{\theta }^{\prime })$$
(A.24)
The limits of integration have changed, but the original limits can be restored using the $`2\pi `$-periodicity of the integrand. To demonstrate the periodicity, we first have
$`\overline{\theta }^{\prime }(\overline{\theta }+\pi )`$ $`=`$ $`\overline{\theta }+\pi +\theta ^M(\overline{\theta }+\pi )`$ (A.25)
$`=`$ $`\overline{\theta }+\pi +(\overline{\theta }+\pi -\overline{\theta }^{ph}[\overline{\theta }+\pi ])`$
$`=`$ $`\overline{\theta }+\pi +(\overline{\theta }+\pi +\overline{\theta }^{ph}[-\overline{\theta }])`$
$`=`$ $`\overline{\theta }+\pi +(\overline{\theta }+\pi -\overline{\theta }^{ph}[\overline{\theta }])`$
$`=`$ $`\overline{\theta }^{\prime }(\overline{\theta })+2\pi `$
where the reflection properties (A.19) have been used. Single-valuedness of $`\overline{\theta }^{}(\overline{\theta })`$ then implies the converse property
$$\overline{\theta }(\overline{\theta }^{}+2\pi )=\overline{\theta }(\overline{\theta }^{})+\pi $$
(A.26)
Next,
$`{\displaystyle \frac{d\theta ^M}{d\overline{\theta }^{\prime }}}`$ $`=`$ $`{\displaystyle \frac{d\theta ^M}{d\overline{\theta }}}{\displaystyle \frac{d\overline{\theta }}{d\overline{\theta }^{\prime }}}`$ (A.27)
$`=`$ $`\left(1-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)\left(2-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)^{-1}`$
Then, applying (A.26) plus the fact that $`d\overline{\theta }^{ph}/d\overline{\theta }`$ is even w.r.t. reflections $`\overline{\theta }\rightarrow -\overline{\theta }`$ and $`\overline{\theta }\rightarrow \pi -\overline{\theta }`$,
$`\left({\displaystyle \frac{d\theta ^M}{d\overline{\theta }^{\prime }}}\right)_{\overline{\theta }^{\prime }+2\pi }`$ $`=`$ $`\left(1-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)_{\overline{\theta }(\overline{\theta }^{\prime })+\pi }\left(2-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)_{\overline{\theta }(\overline{\theta }^{\prime })+\pi }^{-1}`$ (A.28)
$`=`$ $`\left(1-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)_{-\overline{\theta }(\overline{\theta }^{\prime })}\left(2-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)_{-\overline{\theta }(\overline{\theta }^{\prime })}^{-1}`$
$`=`$ $`\left(1-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)_{\overline{\theta }(\overline{\theta }^{\prime })}\left(2-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)_{\overline{\theta }(\overline{\theta }^{\prime })}^{-1}`$
Since $`F(\overline{\theta }^{})`$ is periodic by definition, this establishes the $`2\pi `$-periodicity of the integrand in (A.24), which can then be written
$`<F>`$ $`=`$ $`{\displaystyle \int _{-\pi }^\pi }𝑑\overline{\theta }^{\prime }{\displaystyle \frac{1}{\pi }}{\displaystyle \frac{d\theta ^M}{d\overline{\theta }^{\prime }}}F(\overline{\theta }^{\prime })`$ (A.29)
$`=`$ $`{\displaystyle \int _{-\pi }^\pi }𝑑\overline{\theta }^{\prime }{\displaystyle \frac{1}{\pi }}\left(1-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)\left(2-{\displaystyle \frac{d\overline{\theta }^{ph}}{d\overline{\theta }}}\right)^{-1}F(\overline{\theta }^{\prime })`$
Comparing (A.29) with (A.21)
$$𝒫(\overline{\theta }^{\prime })=\frac{1}{\pi }\left(1-\frac{d\overline{\theta }^{ph}}{d\overline{\theta }}\right)\left(2-\frac{d\overline{\theta }^{ph}}{d\overline{\theta }}\right)^{-1}$$
(A.30)
Single-valuedness of $`\theta ^M(\overline{\theta })`$ implies that $`d\overline{\theta }^{ph}/d\overline{\theta }<1`$, and with this restriction $`𝒫(\overline{\theta }^{})`$ is a maximum where $`d\overline{\theta }^{ph}/d\overline{\theta }`$ is a minimum. However, we have previously deduced from the fact that $`P_1=0`$ and $`P_2<0`$ that the probability distribution
$$𝒫(\overline{\theta })=1-\frac{d\overline{\theta }^{ph}}{d\overline{\theta }}$$
(A.31)
is $`Z_2`$ invariant and peaked at $`\overline{\theta }=\pm \pi /2`$. Again, this distribution is maximized when $`d\overline{\theta }^{ph}/d\overline{\theta }`$ is minimized, which implies that $`d\overline{\theta }^{ph}/d\overline{\theta }`$ is a minimum at $`\overline{\theta }=\pm \pi /2`$. Finally,
$`\overline{\theta }^{}\left(\overline{\theta }=\pm {\displaystyle \frac{\pi }{2}}\right)`$ $`=`$ $`\pm {\displaystyle \frac{\pi }{2}}+\theta ^M\left(\overline{\theta }=\pm {\displaystyle \frac{\pi }{2}}\right)`$ (A.32)
$`=`$ $`\pm {\displaystyle \frac{\pi }{2}}+\left(\pm {\displaystyle \frac{\pi }{2}}-\overline{\theta }^{ph}[\pm {\displaystyle \frac{\pi }{2}}]\right)`$
$`=`$ $`\pm \pi `$
As a consequence, $`d\overline{\theta }^{ph}/d\overline{\theta }`$ is minimized at $`\overline{\theta }^{}=\pi `$ (and also, by the same arguments, at $`\overline{\theta }^{}=-\pi `$), which means that $`𝒫(\overline{\theta }^{})`$ is peaked at $`\overline{\theta }^{}=\pm \pi `$, as illustrated in Fig. 15. This explains why we expect $`P_1<0`$ in the additive configuration.
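As a numerical cross-check of (A.30)-(A.32), one can choose a model phase function with the reflection properties assumed above. The sketch below uses the illustrative choice $`\overline{\theta }^{ph}(\overline{\theta })=a\mathrm{sin}2\overline{\theta }`$ (odd under both $`\overline{\theta }\to -\overline{\theta }`$ and $`\overline{\theta }\to \pi -\overline{\theta }`$); this model and all names in the code are our own assumptions, not taken from the text:

```python
import numpy as np

# Hypothetical model phase function with the reflection properties used
# above: theta_ph = a*sin(2*theta) is odd under theta -> -theta and
# theta -> pi - theta; its derivative is even under both reflections.
a = 0.4                                   # ensures d(theta_ph)/d(theta) < 1
theta = np.linspace(-np.pi, np.pi, 200001)
dtheta_ph = 2.0 * a * np.cos(2.0 * theta)

# Density of (A.30), expressed as a function of theta' = 2*theta - theta_ph
P = (1.0 - dtheta_ph) / (2.0 - dtheta_ph) / np.pi
theta_prime = 2.0 * theta - a * np.sin(2.0 * theta)

# theta' sweeps two 2*pi periods as theta covers [-pi, pi], so the
# integral over theta' equals 2; one period integrates to unity.
print("normalization:", np.trapz(P, theta_prime) / 2.0)   # ~ 1.0

# The density is maximal where dtheta_ph/dtheta is minimal, i.e. at
# theta = +-pi/2, which maps to theta' = +-pi as derived in (A.32).
k = np.argmax(P)
print("peak at theta' =", theta_prime[k] / np.pi, "pi")
```

With this choice the density indeed integrates to unity over one $`2\pi `$ period and peaks at $`\overline{\theta }^{}=\pm \pi `$, in line with the argument above.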
EXPERIMENTAL INVESTIGATION OF CHANGES IN $`\beta `$-DECAY COUNT RATE OF RADIOACTIVE ELEMENTS
Yu.A. BAUROV ([email protected])
Central Research Institute of Machine Building,
141070, Korolyov, Moscow Region, Russia
Yu.G. SOBOLEV ([email protected]), V.F. KUSHNIRUK and E.A. KUZNETSOV
Flerov Laboratory of Nuclear Reactions (FLNR),
Joint Institute for Nuclear Research,
141980, Dubna, Moscow Region, Russia
A.A. KONRADOV ([email protected])
Russian Academy of Sciences, Institute of Biochemical Physics,
117977, Moscow, Russia
ABSTRACT
The experimental data from a continuous investigation of changes in the $`\beta `$-decay count rate of $`{}_{}{}^{137}Cs`$ and $`{}_{}{}^{60}Co`$ from 9.12.98 till 30.04.99 are presented. 27-day and 24-hour periods in these changes, inexplicable by traditional physics, have been found.

PACS numbers: 24.80+y, 23.90+w, 11.90+t
1. Introduction.

In Refs. \[1-3\], periodic variations in the $`\beta `$-decay rate of $`{}_{}{}^{60}Co`$, $`{}_{}{}^{137}Cs`$, and $`{}_{}{}^{90}Sr`$ were first discovered. An analysis of the 24-day period in the $`\beta `$-decay of radioactive elements, together with the daily rotation of the Earth in various seasons of the year, led to the selection of a particular spatial direction: near the points of the Earth’s surface where the tangent line to the local parallel of latitude passes through this direction, the decay count rate of radioactive elements changes. The main drawback of the experiments \[1-3\] was that their final results gave no possibility to clearly distinguish what was an effect of the ”internal life” of the setup itself and what was due to the phenomenon of interest. In addition, the duration of these experiments was no more than three weeks, which did not allow an analysis of long-period harmonics.
The aim of this paper is to find an answer to the above questions, using measurements of the flux of $`\gamma `$-quanta in the process of $`\beta `$-decay of radioactive elements as in ref. .

2. The diagram of the setup.

The experimental setup (Fig.1) consisted of three scintillation detectors, two of them being standard spectrometric scintillation detecting units BDEG2-23 on the basis of a NaI($`Tl`$) scintillator (63 mm in diameter, 63 mm in height) and a FEU-82 photomultiplier (PM) with standard divider. One of these units was used to register the background radiation, and the second one to register the $`\gamma `$-radiation of $`{}_{}{}^{137}Cs`$. The third detector consisted of a BGO scintillator (46 mm in diameter, 60 mm in height) and a FEU-143 photomultiplier with standard divider. This detector was used to register the $`\gamma `$-radiation of a $`{}_{}{}^{60}Co`$ source.
To diminish the influence of magnetic fields on the PMs, the detectors were placed into protecting screens made as cylinders from ten sheets of annealed permalloy 0.5 mm in thickness. The internal diameter of the cylinders was 10 cm, and the height was 70 cm.
The detectors were placed in such a manner that the photocathodes of the PMs were at a distance of one-half of the height of the cylinder. The $`\gamma `$-sources were placed directly on the end face of the scintillators, at the center of the input window. All the detectors and the temperature-sensitive element were positioned inside a metallic cube ($`40\times 40\times 50`$ cm<sup>3</sup>) used as an additional magnetic shield. The thickness of the steel walls of the cube was 3 mm. The detectors with the $`\gamma `$-sources were surrounded by lead shielding 5 cm in thickness.

3. The system of registering experimental information.

The system of registering information consisted of two subsystems. The first one was designed for accumulating information on the counting rate in ten-second intervals from the scintillation detectors as well as on the temperature, power-source voltages (high voltage of the PMs, CAMAC voltages $`\pm 6V,\pm 24V`$) and impulse noise of the crate power supply. The second subsystem of information storage was designed to record ”marked” energy distributions from the scintillation detectors for the purpose of checking the stability of their amplitude-distribution parameters (stability of discriminator thresholds, shape of amplitude distributions, etc.).

3.1. The spectrometric sections.

The set included three identical spectrometric registering sections (see Fig.1). Each section consisted of a preamplifier (PA), an emitter follower matching the impedances, a spectrometric amplifier (AFA) with active filters having shaping time constants $`T_{int}=T_{dif}=0.25\mu s`$, and a system of fast discriminators (FD) of the negative output signals of the amplifier and counters of gated pulses (SC). In addition, the positive output signal of the spectrometric amplifier AFA from each registering channel was fed to analog-to-digital converters (ADC) placed in a separate crate. To increase the reliability of the spectrometric sections, all variable resistors (in which, as almost twenty years of operating experience has shown, contact-arm faults occasionally occur during continuous adjustment of the amplification in the AFA and the threshold in the FD) were replaced by fixed resistors. The PMs of all sections had a common high-voltage power supply.

3.2. The system of monitoring and recording parameters.

In long-term experiments, the most important requirement on the measuring system is the possibility of continuous control over its parameters, both for detecting unstable elements, units, and connections, and for refining possible correlations of the measured quantities with the environmental parameters. The experimental setup was powered from separate terminals of the distribution board to diminish the possible influence of additional parallel loads in the power network.
To monitor the temperature of the environment, a thermometric channel with a high-sensitivity temperature element and an amplifier module was used. This element was made on the basis of an assembly of semiconductor diodes with a total thermoelectric coefficient of about 10 mV/degree. The amplifier module provided a stable bias current for the temperature-sensitive element and additionally amplified the signal, so that the total thermoelectric coefficient of the measuring channel was 100 mV/degree. In the same module a converter of the voltage from the high-voltage power supply of the scintillation units into a low voltage for the 8ADC (see below) was arranged. The conversion ratio was about 3.3 V/kV.
In the measuring crate with the counters, the amplifiers AFA, and the amplifier module of the thermometric channel, we also placed a multichannel amplitude-to-digital converter 8ADC for measuring the high voltage (HV) of the scintillation detectors and monitoring the secondary power voltages $`\pm 24V,\pm 6V`$ of the CAMAC crate itself, as well as a special module to register the impulse noise of these secondary power sources. Any impulse in the crate power line with an amplitude of more than 10 mV recorded a ”1” into the corresponding information bit of the word register of the module data. The frequency spectrum of the recorded impulse signals extended from tens of Hz to several MHz. Thus we recorded the impulse noise of the crate along with monitoring the levels of the constant high voltage of the power source of the scintillation units as well as the low voltages of the crate power sources.
The start of the measuring cycle and the quantization of the exposure time in the first recording subsystem were organized by a ”Master-Trigger” MT1. It comprised a pulser with quartz stabilization of the frequency of the output pulses (QUARTZ) and a scaling circuit. Each cycle of measurements in the experiment started with the generation of a ten-second exposure signal (GATE) by the unit MT1. This signal opened all counters of the setup (SC1-SC6). After the ten-second exposure signal of the counters, MT1 generated a ”LAM” signal for the controller CC1 of the measuring crate to organize a cycle of interrogation of the crate recorders and transmission of data to the storage PC1. The data file transmitted to PC1 in each interrogation cycle included the following data words:

\- the number of readings in the counters SC1-SC6: $`8\times 16`$ bits,

\- the codes of the voltages of the CAMAC sources $`\pm 6V,\pm 24V`$: $`4\times 15`$ bits,

\- the code of the voltage of the high-voltage power source of the scintillation units: 15 bits,

\- the code of the recorder of impulse noise: 4 bits.

The 15-digit codes from the 8ADC contained 12 bits of the voltage code and 3 bits of the channel number.
The characteristics of the sections: a) Sensitivity (the exposure time 10 s with an accuracy of $`10^{-6}`$ s):
| $`\pm 6V`$ | = | 5mV per channel; |
| --- | --- | --- |
| ”High-voltage power” | = | 750 mV per channel; |
| $`\pm 24V`$ | = | 12.5mV per channel; |
| ”Temperature” | = | $`1^{}`$ for 40 channels. |
b) Thresholds of the section $`{}_{}{}^{137}Cs`$ NaI($`Tl`$) (calibration against $`\gamma `$-lines 662 keV, 1173 keV, 1332 keV):
| the ”low” threshold | = | 7 keV; |
| --- | --- | --- |
| the threshold ”under the peak” | = | 425 keV; |
| the threshold ”on the peak” | = | 657 keV. |
c) Thresholds of the background section NaI ($`Tl`$) (calibration against $`\gamma `$-lines 662 keV, 1173 keV, 1332 keV):
| the ”low” threshold | = | 11 keV. |
| --- | --- | --- |
d) Thresholds of the $`{}_{}{}^{60}Co`$ BGO-section (calibration against $`\gamma `$-lines 662 keV, 1173 keV, 1332 keV):
| the ”low” threshold | = | 35 keV; |
| --- | --- | --- |
| the threshold ”under the peak” | = | 745 keV. |
The start of measurements in the second recording subsystem was organized by the ”Master-Trigger” MT2 from any signal of the discriminators FD1-FD6 (chosen by the experimenter by switching from one channel to another in the module M). The unit MT2 opened by its GATE-pulse the amplitude-code converters ADC1-ADC3 and the ”spectrum mark” counter SC7, and triggered the cycle of recording information into the storage computer PC2 after the time of the amplitude-to-digital code conversion. The GATE-pulses from the MT1 unit of the first recording subsystem were fed to the counter SC7 input. Thus the counter gave information on the numbers of the ten-second exposure intervals of the first subsystem. This allowed us to perform an off-line analysis of the amplitude-distribution parameters of the chosen recording channel in any combination of ten-second exposures.
4. The basic results of the experiment. Brief discussion.
The long-term dynamics of the radioactive decay of $`{}_{}{}^{137}Cs`$ and $`{}_{}{}^{60}Co`$ was measured over the period from 9 December 1998 till 30 April 1999. The setup described above made it possible to perform precision measurements with monitoring of the system parameters at different discrimination thresholds of the decay energy. The spectra in the channels for $`{}_{}{}^{137}Cs`$ are presented in Figs.2-4 with the corresponding thresholds. As an example, in Fig.5 the results of measurements over a two-week time interval at the end of March 1999 are shown for the 7 main variants of channels.
| Variants of channels | Measurements |
| --- | --- |
| 1. | BGO, the threshold of Fig.3-type; |
| 2. | BGO, the threshold of Fig.2-type; |
| 3. | NaI<sup>1</sup> with the threshold in Fig.3; |
| 4. | NaI<sup>1</sup> with the threshold in Fig.4; |
| 6. | NaI<sup>1</sup> with the threshold in Fig.2; |
| 12. | Internal temperature of the setup; |
| 13. | High voltage (HV) in channels NaI<sup>1,2</sup> and BGO. |
In the present paper we shall analyze only channel 6, corresponding to the minimum threshold of discrimination at which only the low-energy noise component was cut off, and channel 12. From Fig.5 one can conclude that the channel with the low discrimination was the most stable, though with a remarkable local dispersion: the data densely fill a relatively broad band.
The starting series comprise more than $`1.2\times 10^6`$ points in total over the whole time interval of observation. Each point corresponds to a ten-second interval of accumulation of the number of decays. Hence, the total duration of continuous measurements was about 3347 hours, or about 140 days.
When analyzing the periodic structure of the series we were interested in periods no shorter than several hours. In Fig.6 the results of normalizing the starting series (i.e. rescaling it to a common interval) and averaging over one-hour periods are given. With such hourly averaging, the ”fast” component of the dispersion disappeared, and the slow dynamics of the process was clearly seen. It is also clear from the Figure that the temperature inside the setup varied in antiphase with the count rate. This is well seen in the whole long series and, partially, in Fig.5. The cross-correlation function of these two series has a sharp minimum approaching -0.95 at zero lag. This allowed us to take into account the temperature dependence of the count-rate measurements by simple addition of the two normalized series. The Fourier analysis (fast Fourier transformation, FFT) of the final temperature-compensated series has revealed two distinctly distinguishable periods. In Fig.7 a pronounced 27-day period is seen that may be caused, for example, by the influence of the Sun’s rotation around its axis (the synodic period of the Sun’s rotation relative to the Earth is equal to 27.28 days). In the hour-scale of the periods in Fig.8, a 24-hour period is well marked. It should be emphasized that this daily period is absent in the spectrum of the dynamics of the temperature itself (see Fig.9) and is found only in the dynamics of the radioactive decay, so that it, too, can have an external cosmic origin.
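The compensation-and-FFT procedure just described can be sketched as follows; this is a minimal illustration, with the hypothetical array names `counts` and `temp` standing for the 10-second count and temperature series (the names and binning choices are ours, not from the experiment):

```python
import numpy as np

def compensated_spectrum(counts, temp, dt=10.0, bin_pts=360):
    """Hourly-average two 10-s series, rescale each to [0, 1], add them
    to cancel the (anti-phase) temperature trend, and FFT the result."""
    n = (len(counts) // bin_pts) * bin_pts
    c = np.asarray(counts[:n]).reshape(-1, bin_pts).mean(axis=1)  # 1-h means
    t = np.asarray(temp[:n]).reshape(-1, bin_pts).mean(axis=1)

    norm = lambda x: (x - x.min()) / (x.max() - x.min())
    c, t = norm(c), norm(t)
    print("zero-lag cross-correlation:", np.corrcoef(c, t)[0, 1])

    s = (c + t) - (c + t).mean()                  # temperature-compensated
    amp = np.abs(np.fft.rfft(s))
    freq = np.fft.rfftfreq(len(s), d=dt * bin_pts)   # Hz, 1-h sampling
    period_days = 1.0 / (freq[1:] * 86400.0)         # skip zero frequency
    return period_days, amp[1:]
```

A 27-day or 24-hour component would then show up as a peak of `amp` at the corresponding `period_days` value.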
Now let us consider the statistics of the extremum values of the series of measurements, and evaluate more accurately the extent of nonuniformity of the distribution of the extremum values of the starting (10-second) series of observations in the low-threshold channel over the time of the astronomical day. This procedure was described earlier ; here we give a brief presentation of it.
By an extremum we mean here a value whose modulus of difference with the average of the whole series is no less than two standard deviations. Ascribing to each extremum value the instant of day time at which this extremum was observed, we obtain a resulting set of time instants in the interval from 0 till 24 hours at which ”jumps beyond two sigmas” were measured. The ”null hypothesis” consists in that the extremum events occur with equal frequency at any time of day, i.e. the distribution of these instants is uniform over the day cycle. The hypothesis of uniformity of the distribution can then be tested, for example, by the Kolmogoroff-Smirnoff test. In Figs.10 and 11 the results of the computations are presented. The time of day laid off as abscissa is expressed in degrees ($`0-360^{}`$).
As the reference point, the time from the beginning of observations is taken (the start on 9 Dec. 1998 at $`23^h`$ of astronomical time, i.e. local time; for this analysis the whole time of the experiment was divided into exact decades of days). The values of the difference between the sample and uniform distribution functions for each moment of day time are plotted as ordinates (in degrees). The dashed lines show the confidence levels of the Kolmogoroff-Smirnoff criterion ($`P<0.05`$). An excursion beyond these limits denotes a significant difference of the distribution from the uniform one, and the maximum point indicates the time of day (phase) at which this nonuniformity was maximal. In Fig.10 the results for the maximum values, and in Fig.11 for the minimum values, are given.
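A minimal sketch of the extremum-phase test described above is given below; the 2σ threshold and the uniform-law test follow the text, while the function and variable names are our own and the input arrays are placeholders:

```python
import numpy as np
from scipy import stats

def daily_phase_test(series, times_s, n_sigma=2.0):
    """Times-of-day (in degrees) at which the series jumps beyond
    n_sigma standard deviations, plus a Kolmogorov-Smirnov test of
    these phases against a uniform distribution over the day cycle."""
    x = np.asarray(series, dtype=float)
    is_extremum = np.abs(x - x.mean()) >= n_sigma * x.std()
    # Fold the absolute times (seconds from the start) onto one day
    # and express the phase in degrees, 0-360.
    phase = (np.asarray(times_s)[is_extremum] % 86400.0) / 86400.0 * 360.0
    d, p = stats.kstest(phase, 'uniform', args=(0.0, 360.0))
    return phase, d, p     # p < 0.05: significant daily nonuniformity
```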
The existence of a reliable nonuniformity denotes the presence of a daily period in the statistics of the extremum values of the radioactive decay. Knowledge of the phase (the moment of maximum nonuniformity) as well as of the relation to the absolute time (from the beginning of the series) allows us to determine possible cosmic references connected with such a nonuniformity.
The analysis of the extremum jumps has shown that they overlie tangent lines to the Earth’s parallels making an angle of $`\pm (35^{}-45^{})`$ with the direction having the right-ascension coordinate $`\alpha \sim 275^{}`$, which differs insignificantly ($`\sim 5^{}`$) from the direction fixed in Refs.\[1-3,6\]. It should also be noted that, as the background measurements in channel 5 have shown (the flux of particles in this channel was no more than 50 particles per ten seconds), the oscillations of the background (owing to the smallness of its flux as compared with the main rate of $`\sim 300`$ per second) could in no way influence the distribution of the temporal coordinates of the extremum points going beyond the scope of $`2\sigma `$.
# Photoproduction of Hybrid Mesons
## Introduction: Expectations for hybrids
The search for hybrids has reached an exciting and somewhat bewildering point. Theoretical predictions for hybrid masses have historically been somewhat model dependent, with masses for the lightest hybrid meson multiplet typically lying in the range 1.5-2.0 GeV. The flux-tube model prediction of 1.9 GeV hybr\_ft\_mass has been the most widely cited hybrid mass estimate. This appears to have been confirmed by recent LGT studies hybr\_latt which find that the lightest exotic $`n\overline{n}`$-hybrid is a $`1^{-+}`$, with a mass of about 2.0 GeV. Thus, theory appears to have reached a consensus that exotic hybrids begin at around 2.0 GeV. The flux-tube model predicts that the $`I=1`$ $`1^{-+}`$ should not be very broad; a width of about $`0.2`$ GeV is anticipated for a mass near 1.9 GeV, with dominant decay modes of $`b_1\pi `$ and $`f_1\pi `$ cp . One surprise from LGT is that the $`0^{+-}`$ exotic is found by the MILC Collaboration to have a high mass, perhaps about 2.7 GeV (from their figure). In the flux-tube model this should be approximately degenerate with the $`1^{-+}`$.
These predictions are in striking disagreement with recent experimental results hybr\_pi1 . VES and E852 are in agreement about the phase motion of the $`\eta \pi ^{-}`$ system in $`\pi ^{-}p\to \pi ^{-}\eta n`$, which E852 notes can be fitted with a rather broad exotic “$`\pi _1(1400)`$” with $`M=1.4`$ GeV, $`\mathrm{\Gamma }_{tot}=0.4`$ GeV. This mass is about $`0.5`$ GeV below theoretical expectations, and the observed state is much wider than the flux-tube model would anticipate for a hybrid at this mass. This state appears to have been confirmed by Crystal Barrel CBetapi . A second $`I=1`$ $`1^{-+}`$ exotic has been found by E852 near 1.6 GeV in $`\rho \pi `$, with a width slightly below 0.2 GeV. This relatively narrow state shows clear resonant phase motion against the $`\pi _2(1670)`$.
Thus hybrids may well have been discovered, and the disagreement with theoretical predictions, including LGT, makes their further study of the greatest interest. In this contribution we consider which hybrids might be photoproduced easily (diffractively or by pion exchange) in a future search for exotic mesons at CEBAF.
## Channels for hybrid photoproduction
Here we simply list accessible quantum numbers, since detailed model calculations at this early stage may be inappropriate. The three mechanisms believed to be the most important in light meson photoproduction tbphot are diffractive vector-nucleon scattering (vector dominance followed by pomeron exchange), $`t`$-channel meson exchange (with the pion giving the leading contribution at small $`t`$), and baryon resonance decay. The latter can be treated as a background, and the former two can be selectively enhanced through $`t`$ cuts and the selection of final state quantum numbers.
Diffractive scattering is poorly understood at the QCD level, although some features are generally agreed on; it corresponds to vacuum quantum number exchange, an imaginary amplitude, and may be dominated by $`gg`$ exchange. A simple $`I=0`$, $`0^{++}`$ exchange picture gives the list of quantum numbers in Fig. 1. The accessible exotics are all $`J^{PC}=even^{+-}`$; $`I=1`$, $`G=(+)`$ is favored due to the larger contribution of $`\rho ^o`$ to vector dominance.
Assuming that the flux-tube model hybr\_ft\_mass ; cp is a useful guide to masses and decays of hybrids, the $`0^{+-}`$ and $`2^{+-}`$ exotics $`b_0`$ and $`b_2`$ are the most interesting because they are in the lowest flux-tube hybrid multiplet. The $`b_0`$ however is predicted to be extremely broad. The $`b_2`$ is much narrower, with a predicted width of 0.4 GeV and a 50% branch to $`a_2\pi `$. Diffractive photoproduction of the neutral $`(a_2\pi )^o`$ system therefore appears to offer an excellent opportunity for identifying a hybrid in photoproduction.
The $`s\overline{s}`$ $`2^{+-}`$ exotic can also be diffractively photoproduced, and should decay mainly to $`K_2^{*}K`$ and $`K_1K`$ (both $`K_1`$ states). In the flux-tube model this state has a width of 0.2 GeV and may be clearly evident because of smaller backgrounds in strange channels.
Pion exchange photoproduction offers several possibilities. There is a caveat that S+S couplings of hybrids in the flux-tube model are suppressed, so photoproducing a hybrid through $`\rho ^0+\pi `$ might have a relatively weak amplitude. Of course the exotic candidates $`\pi _1(1400)`$ and $`\pi _1(1600)`$ show no S+S suppression, and violation of this selection rule in improved flux-tube calculations is found to be significant cp . One-pion exchange photoproduction of hybrids certainly merits investigation, independent of these flux-tube predictions of possibly suppressed couplings.
First, for charged $`\pi `$ exchange we find the list of quantum numbers shown in Fig.2. Here $`I=1`$ is forced, and $`G=(-)`$ is preferred. This gives $`odd^{-+}`$ exotics, the most interesting of course being the $`I=1`$ $`1^{-+}`$. Here one would first study $`\eta \pi `$, $`\eta ^{\prime }\pi `$ and $`\rho \pi `$, to see if the E852 states $`\pi _1(1400)`$ and $`\pi _1(1600)`$ are evident. Since the $`\pi _1(1600)`$ is reported to couple strongly to $`\rho \pi `$, it should certainly appear in this reaction! Earlier work on photoproduction by Condo et al. Condo found another possible exotic, a $`\pi _J(1770)`$ state which may be $`1^{-+}`$ or $`2^{-+}`$.
Neutral pion exchange is unusual in that $`I=0`$ should dominate. $`G=(-)`$ is again preferred, and the exotics are $`even^{+-}`$ and the unusual $`0^{--}`$. The $`2^{+-}`$ is expected to be lightest, and should decay mainly to $`b_1\pi `$, with $`\mathrm{\Gamma }_{tot}=0.3`$ GeV. The $`\rho \pi `$ width of this state is predicted to be very small, however, so it might be difficult to photoproduce.
Finally, although one prefers exotic channels because they are unambiguously non-$`q\overline{q}`$, the flux-tube model and previous experiments suggest several non-exotic channels for photoproduction of hybrids. The highest priority is the “extra” $`\pi _J(1770)`$ state reported in photoproduction by Condo et al. Condo , which may be $`2^{-+}`$ but is apparently not the $`\pi _2(1670)`$; note that a suggestively similar doubling of $`\eta _2`$ states has been reported by Crystal Barrel eta2 . The $`\pi (1800)`$ hybrid candidate discussed by VES VESpi1800 clearly does not decay as the <sup>3</sup>P<sub>0</sub> model predicts a 3S $`q\overline{q}`$ state should bcps , notably because of the S+P mode $`\pi f_0(1300)`$; here we may be seeing a $`0^{-+}`$ hybrid or a failure of the standard $`q\overline{q}`$ decay model! An $`a_1`$-type hybrid with $`\mathrm{\Gamma }_{tot}=0.5`$ GeV is expected in $`f_2\pi ^+`$ and $`f_1\pi ^+`$. A last case, the narrowest hybrid predicted by the flux-tube model, is an $`\omega `$ state with $`\mathrm{\Gamma }_{tot}=0.1`$ GeV that should decay into $`K_1K`$ (both) and notably into the final state $`\omega \eta `$, which is attractive experimentally. This dramatically narrow state could serve as a crucial test of the flux-tube picture of hybrids.
To conclude, a note of caution may be appropriate. There are no independent tests of the flux-tube model of hybrid decays, which may be inaccurate. Should hybrids be much broader than anticipated in some channels, the coupling to meson continua may give important mass shifts. In model studies one finds that these mass shifts are typically downwards and are numerically comparable to the widths, so broad hybrids might lie far from “quenched” theoretical predictions. This rather drastic scenario could explain why quenched hybrid masses in the flux-tube model and LGT differ considerably from the masses reported for the new E852 exotic candidates. In this confused situation the most important future experimental exercise will be to confirm (or refute!) the existence of the light exotics $`\pi _1(1400)`$ and $`\pi _1(1600)`$.
## Acknowledgements
It is a pleasure to acknowledge the kind invitation of the organisers to present this material. I would also like to thank P.R.Page for technical information regarding decays of hybrids in the flux-tube model and N.Cason, S.U.Chung, A.Dzierba, N.Isgur and E.S.Swanson for discussions of related issues. This work was supported in part by the USDOE under Contract No. DE-AC05-96OR22464 managed by Lockheed Martin Energy Research Corp.
# Non-stationary Characteristics of the instability in a Single-mode Laser with Fiber Feedback
## Abstract
Chaotic bursts are observed in a single-mode microchip Nd:YVO<sub>4</sub> laser with fiber feedback. The physical characteristic of the instability is a random switching between two different dynamical states, i.e., a noise-driven relaxation oscillation and a chaotic spiking oscillation. As the feedback strength is varied, a transition featuring strong interplay between the two states appears, and the dynamical switching is found to be non-stationary at the transition.
The complex dynamics of nonlinear systems with delayed feedback, which possess infinitely many degrees of freedom, are of current interest in various fields including physics, chemistry, biology, economy, physiology, neurology, and optical systems . In particular, instabilities of nonlinear optical resonators and lasers with delayed feedback have attracted much attention in the past decade. Historically, the issue of chaotic instabilities in the output of lasers subjected to external feedback was initiated by the pioneering work of Lang and Kobayashi in 1980. They demonstrated the dynamical instabilities in a semiconductor laser with external feedback which feature sustained relaxation oscillations. They also confirmed theoretically that the dynamical instabilities take place in the transition process where the lasing frequency changes from one external-cavity eigenmode to another in the weak-coupling regime. Thereafter, three universal transition routes to chaos, low-frequency fluctuations (LFF) and coherence collapse have been observed in semiconductor lasers with external optical feedback for different feedback strengths and/or delay-time regions. For LFF and coherence collapse there still remains the open question concerning the role of noise . Nowadays, another promising laser system for investigating the instabilities in lasers with delayed feedback is the laser-diode-pumped microchip solid-state laser, which has been widely used in practical applications. Such lasers are expected to exhibit an extremely sensitive response to external feedback. The reason is that the photon lifetime $`\tau _p`$, set by the short cavity round-trip time $`\tau _L`$, is extremely small compared with the fluorescence lifetime $`\tau `$, as demonstrated in self-mixing laser Doppler velocimetries . Generally, the lifetime ratio $`K=\tau /\tau _p`$ of solid-state lasers ranges from 10<sup>5</sup> to 10<sup>7</sup>, while $`K\sim 10^3`$ in laser diodes. Furthermore, their characteristic frequencies are of sub-MHz order, so that conventional measurement techniques can be utilized easily. Therefore, it is much easier to study the various instabilities in solid-state laser systems than in laser-diode systems. In fact, in an early experiment in 1979, Otsuka observed various instabilities in a microchip LNP (LiNdP<sub>4</sub>O<sub>12</sub>) solid-state laser subjected to external feedback. However, the instability was only observed in the region of two lasing modes. It is expected that the instability can also occur due to external-cavity modes with small mode spacing. The mechanism is that a random intensity fluctuation in each mode results in a mode-dependent random fluctuation of the phase shift. Hence, multimode oscillation and intrinsic mode-partition noise, as well as the frequency-dependent nonlinear refractive index, are the dynamical origins of the chaotic bursts in the presence of the fiber. Here we report an experimental result on a diode-pumped microchip Nd:YVO<sub>4</sub> (yttrium orthovanadate) solid-state laser, in which mode-partition noise is not essential, but the intrinsic phase fluctuation may be dominant around the onset of the instability. Furthermore, it will be shown that a novel non-stationary characteristic is inherent in the instability due to the strong coupling between two different dynamical states at the transition.
In our experiment, a diode-pumped microchip Nd:YVO<sub>4</sub> laser operated in the single-mode regime is employed, and a compound cavity is formed with single-mode fiber feedback. The laser diode (LD) and the Nd:YVO<sub>4</sub> crystal (1 mm thick, 1$`\%`$ Nd<sup>3+</sup> doped, with an output coupling of $`5\pm 2\%`$ at 1064 nm) are available from CASIX, Inc. The laser crystal (Nd:YVO<sub>4</sub>) is inserted into a 2 mm-thick copper mount and the temperature is controlled at 25<sup>o</sup>C by a temperature controller (ILX, LDT-5910B). The pumping beam (wavelength $`\lambda _p`$=808 nm) from the LD, which is also temperature-controlled, is focused onto the laser crystal with a GRIN lens (0.22 pitch). The pumping threshold is around 300 mA. We also use a noise filter (ILX 320) to eliminate the pumping noise caused by the LD current driver (ILX, LDC-3744) and an interference filter (60$`\%`$ transmission at 1064 nm and zero transmission elsewhere) to reduce the influence of the pumping light on the detection. Over the entire pumping range a $`\pi `$-polarized TEM<sub>00</sub> mode of the laser output was observed. We used a 10 m single-mode fiber (3M F-SY) as the feedback loop. The feedback beam is monitored, and no significant polarization change has been found. Because of the reduction of the lasing threshold (about 1-2 mA less), the feedback strength is estimated to be below 1$`\%`$. To further control the feedback strength, a rotatable polarizer (New Focus 5525) is placed before the light enters the fiber. In the measurements, a multi-wavelength meter (HP 86120B) is employed to monitor the variation of the lasing mode; the lasing wavelength is 1064.245 nm. The lasing eigenmode frequency of the compound cavity is determined by the frequency arrangement of the Nd:YVO<sub>4</sub> laser cavity modes (mode spacing 0.25 nm, i.e., $`\sim `$60 GHz) and the external-cavity (fiber) modes, for which the number of modes is very large (the mode spacing of the external cavity is around 10 MHz). We also utilize low-noise detectors (New Focus 1611; bandwidth 1 GHz) for the detection of the laser output. Both the ac and dc ports of the detectors are connected to a transient oscilloscope (HP54542C) for data acquisition in the temporal domain. Meanwhile, an rf-spectrum analyzer (HP8591E) is employed to monitor the behavior of the laser output in the rf-spectrum domain.
For later identification, we first show a typical ac time series for the free-running case and its corresponding rf-spectrum in Fig.1. The relaxation oscillation occurs around 1.6 MHz and its harmonics can easily be identified. As the strength of the feedback is increased, chaotic bursting occurs and the dominant frequency is shifted to a lower value (around 1.0 MHz) with a broadened linewidth. A typical time series of the chaotic bursting is shown in Fig.2 (a). To explore the dynamics, we employ a joint time-frequency analysis. The coincidence of the frequency characteristics between Fig.1 (b) and Fig.2 (b) suggests that the low-intensity-level part of the chaotic bursting, regime I in Fig.2 (a), is a noise-driven relaxation oscillation, while the high-intensity-level part, regime III in Fig.2 (a), can be identified as chaos on the basis of a singular-value-decomposition analysis. We note that the basic characteristic of the dynamical transition between the two states is frequency broadening in the rf-spectrum, as shown in Fig.2 (c). Nevertheless, the lasing wavelength remains the same; that is, no significant linewidth broadening or hopping has been seen. Meanwhile, as the instability occurs, the signal-to-noise level of the lasing mode decreases, featuring the characteristics of a fast frequency-modulated (FM) laser. This suggests that there is FM noise in the process of the instability.
Next let us address the physical mechanism of the coexistence of the two dynamical states. As seen from the time series, the behavior of the peak power is a key factor for the dynamics. Since our system is essentially a single-mode laser with weak feedback, the Lang-Kobayashi model is still applicable, such that the photon density $`S(t)`$ follows
$$\frac{dS(t)}{dt}=K[(n(t)-1)S(t)+ϵn(t)]+2\kappa \sqrt{S(t)S(t-T)}cos\theta (t),$$
(1)
where $`K`$ is the time ratio between the population inversion lifetime and the photon lifetime, $`n(t)`$ is the population inversion density (or carrier density), $`ϵ`$ is the spontaneous emission factor, $`\kappa `$ is the feedback coupling strength, $`T`$ is the delay time, and $`\theta `$ is the phase difference between the output and the feedback beams. The peak power $`S_p`$, therefore, follows
$$S_p=-\frac{Kϵn_p(t)}{K(n_p(t)-1)+2\kappa \sqrt{1-\frac{\mathrm{\Delta }S}{S_p}}cos\theta _p},$$
(2)
where $`\mathrm{\Delta }S=S_p(t)-S_p(t-T)`$ and the subscript $`p`$ denotes the corresponding quantities evaluated at $`S=S_p(t)`$. The role of $`\mathrm{\Delta }S`$ is crucial. When the feedback coupling $`\kappa `$ is almost zero, the statistics of $`S_p`$ simply follow the (on-off) fluctuation of the population inversion, as implied by the lasing-threshold factor $`n(t)-1`$. With nonzero $`\kappa `$, the dynamics of $`S_p`$ is modified by the appearance of $`\mathrm{\Delta }S`$ as well as of the phase term $`\theta _p`$. Since the magnitude of the peak output is directly measurable, a further investigation of the peak photon intensity and of the statistics of the time-difference quantity ($`\mathrm{\Delta }S`$) is fruitful. With the oscilloscope, we repeatedly accumulate the time series of the laser output and pick up the discrete peak values $`S_p(n)`$, $`n=1,2,\mathrm{}`$, where $`n`$ denotes the $`n`$th peak. To obtain reliable probabilities, a total of $`320,000`$ peaks has been collected at each specific rotation angle of the polarizer. As the polarizer is rotated, the feedback strength changes. Asymptotically, as $`\kappa \to 0`$, the system behaves as a free-running laser, such that the mean and the standard deviation are small. However, there is a dramatic increase of both the mean and the standard deviation around 41-42 degrees of the polarizer's angle, which implies the onset of the transition. To further identify the transition, we evaluate the probability distribution of the peak powers at different feedback strengths (equivalent to different polarizer angles). In the regime of the noise-driven relaxation oscillation, the probability distribution of the peak power $`P(S_p)`$ follows an exponential law, which features shot-noise characteristics (as shown in Fig.3 (a)). On the other hand, when chaotic bursting occurs, a tailed probability distribution is created. If we pay attention only to the large-intensity part, for which the exponential distribution can be neglected, a Gaussian distribution can be recognized as the feedback strength is further increased, as shown in Fig.3 (c). This shows that there are two dynamical states which follow different statistics. A dynamical transition from a simple exponential distribution to a mixed distribution does occur, as shown in Fig.3 (a)-(c).
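For illustration, the two-component character of $`P(S_p)`$ can be quantified roughly as in the following sketch, where `s_p` is a hypothetical array of recorded peak powers and `split` an eyeballed boundary between the exponential and Gaussian parts (both names are ours, not from the experiment):

```python
import numpy as np

def peak_statistics(s_p, split):
    """Histogram the peak powers and characterize the two components:
    an exponential (shot-noise-like) part below `split` and a Gaussian
    part above it."""
    s_p = np.asarray(s_p, dtype=float)
    low, high = s_p[s_p < split], s_p[s_p >= split]

    # Exponential part: P(S) ~ exp(-S/S0), so the log-histogram is linear.
    h, edges = np.histogram(low, bins=50, density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    keep = h > 0
    slope, _ = np.polyfit(mid[keep], np.log(h[keep]), 1)

    # Gaussian part: mean and width suffice for a first characterization.
    return {"S0": -1.0 / slope, "mu": high.mean(), "sigma": high.std(),
            "weight_high": len(high) / len(s_p)}
```

The growth of `weight_high` with feedback strength would then track the transition from the purely exponential to the mixed distribution.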
What is the influence of the onset of such a mixed distribution? By a joint-probability analysis similar to that used in , it can be shown that there is still a strong overlap between the two probability distributions. This suggests that the interplay between the two states may be rather unique, and that the statistics of time differences should also be crucial, as discussed above. This means that we should look at the dynamical behavior of the $`k`$-step difference quantity,
$$\mathrm{\Delta }S_p(k)=S_p(k+l)-S_p(l),$$
(3)
where $`S_p(l)`$ is the $`l`$th peak power. After summation over the whole range of $`l`$, the probability $`P(\mathrm{\Delta }S_p(k))`$ specifies the distribution of the variations with $`k`$-step difference. When the system exhibits the noise-driven relaxation oscillation, the variation distributions should be the same no matter what the value of $`k`$ is, since the switching characteristic is stationary. The difference can be characterized by a $`\chi ^2`$ statistics, defined as
$$\chi ^2(j;k)=\sum _i^M\frac{(R_i-S_i)^2}{(R_i+S_i)},$$
(4)
where $`R_i`$ and $`S_i`$ are the probabilities of the $`i`$th interval of $`\mathrm{\Delta }S_p`$ for $`P(\mathrm{\Delta }S_p(k+j))`$ and $`P(\mathrm{\Delta }S_p(k))`$, respectively. The summation is carried out over all intervals except those with $`R_i=S_i=0`$. For fixed $`j`$, $`\chi ^2(j;k)`$ indicates the similarity of the distributions of the variations $`\mathrm{\Delta }S_p(k)`$ and $`\mathrm{\Delta }S_p(k+j)`$. The most crucial quantity is the successive change of similarity, i.e., $`j=1`$. A large value of $`\chi ^2(1;k)`$ means that at time $`k`$ the dynamics has switched to a state which follows a dramatically different statistical distribution, which suggests a strong interplay between the states. Furthermore, when the value of $`\chi ^2(1;k)`$ varies wildly as $`k`$ moves, a non-stationary dynamics is intrinsic in nature. The degree of non-stationarity of the whole process can be quantified by the average quantity
$$\chi ^2(1)=\frac{1}{L}\sum _{k=1}^L\chi ^2(1;k).$$
(5)
In principle, the correlation of the variation distributions is essentially stationary if $`\chi ^2(1)\to 0`$, i.e., there are no dramatic dynamical changes of the correlation of the variation distributions within the range of the average. In short, one can explore the dynamical characteristics of the switching with this $`\chi ^2`$ statistics. Let us now address our experimental results. As shown in Fig.4 (a), where $`L=200`$, the values of $`\chi ^2(1)`$ are close to zero in the relaxation-oscillation regime, as expected. The interesting point is the appearance of a high $`\chi ^2(1)`$ at medium feedback strength (around $`40^{}`$ of the polarizer's angle), which also corresponds to the transition indicated by the probability distribution, the mean, and the standard deviation. This shows that the transition is associated with a non-stationary characteristic and, thus, the successive change of similarity is wild. It remains a surprise that for larger feedback strength (small polarizer's angle) $`\chi ^2(1)`$ is almost zero again. This peculiar feature is due to the observation that an increase of the feedback strength causes an increase of the staying times in a particular state and, as a result, the successive change of similarity becomes smooth. In more detail, Fig.4 (b) presents the degree of probability association, $`\chi ^2(1;k)`$. For some particular feedback strengths, $`\chi ^2(1;k)`$ never stays near zero, which suggests that the instability has a strongly non-stationary switching. On the other hand, in the regime of large polarizer angles (noise-driven relaxation oscillation), $`\chi ^2(1;k)`$ is almost zero (Fig.4 (c)). Here we would like to emphasize that the transition indicated by a high $`\chi ^2(1)`$ is crucial: it reveals a rather amazing characteristic, namely that a weak-feedback-induced instability can be associated with a wild and non-stationary successive change of the similarity of the variation probability distributions.
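As a concrete illustration of the statistics of Eqs. (4) and (5), the following sketch computes $`\chi ^2(1;k)`$ and its average from a peak-power series; the fixed histogram binning is our own choice, needed so that successive distributions are comparable:

```python
import numpy as np

def chi2_statistics(s_p, k_max, bins):
    """chi^2(1;k) of Eq.(4) between the variation distributions
    P(dS_p(k)) and P(dS_p(k+1)), and their average chi^2(1) of Eq.(5).
    `bins` is a fixed array of histogram edges."""
    s_p = np.asarray(s_p, dtype=float)

    def variation_prob(k):
        d = s_p[k:] - s_p[:-k]           # k-step differences over all l
        h, _ = np.histogram(d, bins=bins)
        return h / h.sum()

    chi2_1k = []
    r = variation_prob(1)
    for k in range(1, k_max):
        s = variation_prob(k + 1)
        m = (r + s) > 0                  # skip intervals with R_i = S_i = 0
        chi2_1k.append(np.sum((r[m] - s[m]) ** 2 / (r[m] + s[m])))
        r = s
    chi2_1k = np.array(chi2_1k)
    return chi2_1k, chi2_1k.mean()       # chi^2(1;k) and its average
```

A flat, near-zero `chi2_1k` then signals stationary switching, while wild excursions signal the non-stationary regime discussed above.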
Let us discuss the origin of such chaotic bursting and of the non-stationary characteristics of this instability. Referring to Eq.(1) and Eq.(2), since the dc value of the photon density $`S(t)`$ is never zero, the term $`cos\theta (t)`$ sometimes has to be zero so as to terminate the feedback process. In such a way, the noise-driven relaxation oscillation can be created with nonzero $`\kappa `$. This means that a fixed phase difference ($`\theta \approx \pi /2`$) has to be established during the period of relaxation oscillation. Therefore, the persistence of the fixed phase difference is essential for switching back to the relaxation oscillation and should be critical for the overlap of the joint probabilities as well as for the non-stationary switching. In other words, the random phase fluctuation due to the feedback effect is essential for the chaotic bursting behavior. Hence, the inclusion of a fast frequency-modulation characteristic (FM noise) in the phase fluctuation is necessary in the process, as experimentally suggested by the measurement with the multiwavelength meter. Indeed, under this condition, a simulation of the single-mode Lang-Kobayashi model can reproduce the chaotic bursting features reported above. Finally, we note that a similar instability has also been observed with various fiber lengths (7m, 5m, 4m, and 3m) as well as with a multimode fiber. Moreover, instead of the polarizer, an N.D. filter was also employed, and similar results were obtained. Further details, as well as the analysis of $`\chi ^2(j)`$ with large $`j`$, will be reported elsewhere.
Acknowledgment: The work of the NCKU group is partially supported by the National Science Council, Taiwan, ROC under project no. NSC88-2112-M-006-001. JLC thanks S.-L. Hwong, J.-Y. Ko, and A.-C. Hsu for helping with the preparation of the manuscript.
# On the current status of OZI violation in 𝜋𝑁 and 𝑝𝑝 reactions

Supported by Forschungszentrum Jülich
## 1 Introduction
Assuming ideal SU(3) octet-singlet mixing, Okubo, Zweig and Iizuka proposed Okubo ; Zweig ; Iizuka that the production of a $`\varphi `$-meson from an initial non-strange state is strongly suppressed in comparison to $`\omega `$-meson production. Indeed, because of SU(3) breaking the octet and singlet states are mixed, and for the ideal mixing angle $`\theta _V=35.3^0`$ the $`\varphi `$-meson is a pure $`s\overline{s}`$ state. In the case of $`\varphi `$ production from $`\pi N`$, $`NN`$ or $`N\overline{N}`$ reactions the OZI rule states that the contribution from the diagram with the $`s\overline{s}`$ pair disconnected from the initial $`u,d,\overline{u},\overline{d}`$ quarks should ideally vanish. The experimental deviation from the ideal mixing angle, $`\mathrm{\Delta }\theta _V`$=3.7<sup>0</sup> PDG , can be used Lipkin to estimate the ratio $`R(\varphi /\omega )\approx 4.2\times 10^{-3}`$ of the cross sections with a $`\varphi `$ and $`\omega `$ in the final state. This deviation of the experimental ratio $`R`$ from zero is denoted as OZI rule violation. A large ratio $`R`$ might indicate an intrinsic $`s\overline{s}`$ content of the nucleon, since in that case the $`\varphi `$-meson production is due to a direct strangeness transfer from the initial to the final state and thus OZI allowed.
The OZI violation problem has led to a large experimental activity involving different hadronic reactions. Here we perform a systematic data analysis for $`\pi N`$ and $`pp`$ reactions and discuss their theoretical interpretation in the context of the most recent data point from the DISTO Collaboration Balestra .
## 2 $`\omega `$ and $`\varphi `$ production in $`\pi N`$ reactions
Without involving any theoretical assumption about the production mechanism, the data LB on the total $`\pi N\to \omega N`$ and $`\pi N\to \varphi N`$ cross sections may be analyzed in terms of the corresponding transition amplitudes. The amplitude for a two-body reaction with stable particles in the final state is related to the total cross section $`\sigma `$ as Feynman
$$|M_V|=4\left[\pi \sigma s\right]^{1/2}\left[\frac{\lambda (s,m_N^2,m_\pi ^2)}{\lambda (s,m_N^2,m_V^2)}\right]^{1/4},$$
(1)
where $`\lambda (x,y,z)=(x-y-z)^2-4yz`$, while $`m_N`$, $`m_\pi `$, $`m_V`$ denote the nucleon, pion and vector meson masses, respectively, and $`s`$ is the squared invariant collision energy. Moreover, we compare the transition amplitudes for $`\omega `$ and $`\varphi `$ production at the same excess energy $`ϵ=\sqrt{s}-m_N-m_V`$. As was discussed in Ref. Hanhart , Eq. (1) can be used for the evaluation of the amplitudes for the production of unstable ($`\omega `$ and $`\varphi `$) mesons at excess energies $`ϵ>\mathrm{\Gamma }_V`$, where $`\mathrm{\Gamma }_V`$ denotes the width of the vector meson spectral function due to its vacuum decay.
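For orientation, Eq. (1) is straightforward to evaluate numerically. The sketch below is our own illustration: the default masses correspond to the $`\pi N\to \omega N`$ case, and the unit conversion 1 mb = 2.568 GeV<sup>-2</sup> is assumed.

```python
import numpy as np

def kallen(x, y, z):
    """Kaellen function lambda(x, y, z)."""
    return (x - y - z) ** 2 - 4.0 * y * z

def amplitude_from_sigma(sigma_mb, eps, m_N=0.938, m_pi=0.138, m_V=0.782):
    """|M_V| of Eq.(1) from a total cross section sigma_mb (mb) at
    excess energy eps (GeV); masses in GeV, result dimensionless."""
    s = (m_N + m_V + eps) ** 2
    sigma = sigma_mb * 2.568                       # mb -> GeV^-2
    ratio = kallen(s, m_N ** 2, m_pi ** 2) / kallen(s, m_N ** 2, m_V ** 2)
    return 4.0 * np.sqrt(np.pi * sigma * s) * ratio ** 0.25
```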
Furthermore, due to the experimental set up the $`\pi ^{-}p\to \omega n`$ data from Ref. Karami should not be considered as total cross sections, but as differential cross sections $`\sigma _{dif}`$ integrated over a given range of the final neutron momentum Hanhart . Indeed, the $`\pi ^{-}p\to \omega n`$ cross sections given in Ref. Karami for different intervals $`[q_{min},q_{max}]`$ of neutron momenta in the center-of-mass system can be related to the transition amplitude $`M_V`$ as
$`\sigma _{dif}={\displaystyle \int _{q_{min}}^{q_{max}}}{\displaystyle \frac{|M_V|^2}{4\pi ^2\lambda ^{1/2}(s,m_p^2,m_\pi ^2)}}{\displaystyle \frac{q^2}{\sqrt{q^2+m_n^2}}}`$
$`\times {\displaystyle \frac{\mathrm{\Gamma }_Vm_V}{(s-2\sqrt{s(q^2+m_n^2)}+m_n^2-m_V^2)^2+\mathrm{\Gamma }_V^2m_V^2}}dq,`$ (2)
where $`m_p`$ and $`m_n`$ are the proton and neutron masses, respectively, and $`s`$ is given as a function of $`q`$. Eq. (2) agrees with that in Ref. Hanhart in the non-relativistic limit. Furthermore, in the calculations we use the same set of neutron momentum intervals $`[q_{min},q_{max}]`$ as in Ref. Karami .
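Similarly, Eq. (2) reduces to a one-dimensional numerical integral. A minimal sketch is given below, assuming a constant $`|M_V|^2`$ over the momentum window and a fixed $`s`$ (in the actual kinematics $`s`$ is tied to the beam energy); the default $`\mathrm{\Gamma }_V`$ is an illustrative $`\omega `$ width and all names are ours:

```python
import numpy as np

def kallen(x, y, z):
    return (x - y - z) ** 2 - 4.0 * y * z

def sigma_dif(M2, s, q_min, q_max, m_p=0.938, m_pi=0.138,
              m_n=0.940, m_V=0.782, Gamma_V=0.00844, n=2000):
    """Eq.(2): cross section (GeV^-2) for a neutron c.m. momentum
    window [q_min, q_max] (GeV), with constant |M_V|^2 = M2."""
    q = np.linspace(q_min, q_max, n)
    E_n = np.sqrt(q ** 2 + m_n ** 2)
    mu2 = s - 2.0 * np.sqrt(s) * E_n + m_n ** 2   # meson invariant mass^2
    bw = Gamma_V * m_V / ((mu2 - m_V ** 2) ** 2 + Gamma_V ** 2 * m_V ** 2)
    f = M2 / (4 * np.pi ** 2 * np.sqrt(kallen(s, m_p ** 2, m_pi ** 2))) \
        * q ** 2 / E_n * bw
    return np.trapz(f, q)
```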
Figs. 1,2 show the transition amplitudes for the $`\pi N\to \omega N`$ and $`\pi N\to \varphi N`$ reactions evaluated from the experimental data LB ; Karami . Note that the $`\pi ^{-}p\to \omega n`$ transition amplitude evaluated from the data of Ref. Karami (full dots at small $`ϵ`$) by Eq. (2) does not depend on energy within the error bars and agrees well with that extracted from the other data LB .
Since the data are not available for a comparison at exactly the same excess energies we fit the transition amplitudes by the function
$$|M_V|=M_0+M_1\mathrm{exp}(-\gamma ϵ)$$
(3)
with the parameters given in Table 1. The solid lines in Figs. 1, 2 show the approximation (3), while the dashed areas indicate the uncertainty of the parameterization. Note that the approximation is compatible with an almost constant transition amplitude for $`ϵ<`$ 100 MeV and reasonably reproduces the experimental results up to $`ϵ=`$10 GeV.
The resulting ratio of the $`\pi N\to \omega N`$ to $`\pi N\to \varphi N`$ transition amplitudes is shown in Fig. 3a) by the solid line as a function of the excess energy $`ϵ`$. It is important to note that the ratio $`R=|M_\omega |/|M_\varphi |`$ is almost constant within the given uncertainties up to $`ϵ=`$10 GeV, where the data are available.
Since the $`\omega /\varphi `$ ratio is always discussed as a constant that is compared to the SU(3) predictions, we calculate the average value $`<R>`$ in the range $`0<ϵ<10`$ GeV. Fig. 3b) shows the reduced $`\chi ^2`$ as a function of the constant $`<R>`$, which approaches a minimum at
$$<R>=\frac{|M_{\pi N\to \omega N}|}{|M_{\pi N\to \varphi N}|}=8.7\pm 1.8.$$
(4)
with the dispersion given for a 95% confidence level.
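The fit behind Eq. (4) amounts to minimizing a reduced $`\chi ^2`$ with respect to a single constant, as in Fig. 3b). A generic sketch of such a one-parameter fit is given below; the data arrays are placeholders for amplitude pairs taken at matching excess energies, and the grid scan is our own simple choice:

```python
import numpy as np

def best_constant_ratio(M_omega, M_phi, dM_omega, dM_phi):
    """Reduced chi^2 minimization of a constant R = |M_omega|/|M_phi|
    over amplitude pairs with errors at matching excess energies."""
    ratio = M_omega / M_phi
    d_ratio = ratio * np.hypot(dM_omega / M_omega, dM_phi / M_phi)
    grid = np.linspace(ratio.min(), ratio.max(), 2001)
    chi2 = np.array([np.sum(((ratio - R) / d_ratio) ** 2) for R in grid])
    chi2 /= max(len(ratio) - 1, 1)        # reduced chi^2
    return grid[np.argmin(chi2)], chi2.min()
```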
Furthermore, a visual way to check our estimate for $`<R>`$ is to compare the experimental data directly by multiplying the $`\pi N\to \varphi N`$ amplitude by the factor $`<R>`$, as shown in Fig. 4. We note that the four experimental points for the $`\pi ^{-}p\to \varphi n`$ reaction around $`ϵ=`$1 GeV deviate by a factor of $`\sim `$1.8 from the hypothesis applied. New experimental data with high accuracy are obviously necessary for a final conclusion about the ratio of the $`\pi N\to \omega N`$ and $`\pi N\to \varphi N`$ reaction amplitudes.
## 3 $`\omega `$ and $`\varphi `$ production in $`pp`$ reactions
In our normalization the $`pp\to ppM`$ total cross section for the production of an unstable meson with total width $`\mathrm{\Gamma }`$ is given as
$`\sigma `$ $`=`$ $`{\displaystyle \frac{1}{2^8\pi ^3s\lambda ^{1/2}(s,m_N^2,m_N^2)}}{\displaystyle \int _{m_{min}}^{\sqrt{s}-2m_N}}{\displaystyle \frac{1}{2\pi }}{\displaystyle \frac{\mathrm{\Gamma }dx}{(x-m_V)^2+\mathrm{\Gamma }^2/4}}`$ (5)
$`\times `$ $`{\displaystyle \int _{4m_N^2}^{(\sqrt{s}-x)^2}}|M|^2\lambda ^{1/2}(s,y,x^2)\lambda ^{1/2}(y,m_N^2,m_N^2)`$
$`\times `$ $`C^2(q=0.5\sqrt{y-4m_N^2}){\displaystyle \frac{dy}{y}},`$
where $`m_{min}`$ is the minimal mass of the unstable particle and $`C(q)`$ describes the final state interaction (FSI) between the nucleons Watson ; Migdal ; GellMann ; Taylor .
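Numerically, Eq. (5) is a two-dimensional integral over the spectral mass $`x`$ and the $`pp`$ invariant mass squared $`y`$. The following sketch evaluates it for a constant $`|M|^2`$ and a trivial FSI factor $`C(q)=1`$; both are simplifying assumptions of ours, as is the choice of $`m_{min}`$:

```python
import numpy as np

def sigma_ppV(sqrt_s, M2, m_V, Gamma, m_min, C=lambda q: 1.0,
              m_N=0.938, n=400):
    """Eq.(5) by direct numerical integration: total pp -> pp V cross
    section (GeV^-2) for constant |M|^2 = M2 and FSI factor C(q)
    (trivial by default). All masses and widths in GeV."""
    lam = lambda a, b, c: (a - b - c) ** 2 - 4.0 * b * c
    s = sqrt_s ** 2
    xs = np.linspace(m_min, sqrt_s - 2 * m_N, n)
    dx = xs[1] - xs[0]
    total = 0.0
    for x in xs[:-1]:
        bw = Gamma / ((x - m_V) ** 2 + Gamma ** 2 / 4) / (2 * np.pi)
        ys = np.linspace(4 * m_N ** 2, (sqrt_s - x) ** 2, n)
        q = 0.5 * np.sqrt(np.maximum(ys - 4 * m_N ** 2, 0.0))
        kin = np.sqrt(np.maximum(lam(s, ys, x ** 2)
                                 * lam(ys, m_N ** 2, m_N ** 2), 0.0))
        total += bw * np.trapz(M2 * kin * C(q) ** 2 / ys, ys) * dx
    return total / (2 ** 8 * np.pi ** 3 * s
                    * np.sqrt(lam(s, m_N ** 2, m_N ** 2)))
```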
Fig. 5 shows the average production amplitude for the $`pp\to pp\omega `$ reaction evaluated by Eq. (5) from the data LB ; Hibou using the FSI models from Refs. DKT ; SC1 (a comparison between the different models for the final-state interaction is presented in Refs. SC1 ; SC2 ). We note that the uncertainty in the evaluation of the $`pp\to pp\omega `$ production amplitude due to the different models of the FSI corrections is substantially smaller than the dispersion of the experimental results.
The $`pp\to pp\omega `$ reaction amplitude evaluated from the data LB ; Hibou is approximated by the function (3) with the parameters given in Tab. 1 and is shown in Fig.5 by the solid line. The dashed area in Fig.5 again indicates the uncertainty of the approximation, which was calculated with the error correlation matrix.
Recently the DISTO Collaboration reported an experimental result Balestra on the ratio of the $`pp\to pp\varphi `$ and $`pp\to pp\omega `$ total cross sections at a beam energy of 2.85 GeV. For the further analysis we need the $`\varphi `$-meson production cross section explicitly, which can be obtained by normalization to the available data on $`\omega `$-meson production LB ; Hibou . Our extrapolation of the $`pp\to pp\omega `$ production amplitude to 2.85 GeV is shown in Fig.5 by the star and provides
$`\sigma (pp\to pp\omega )`$ $`=`$ $`45\pm 7\mu b,`$
$`\sigma (pp\to pp\varphi )`$ $`=`$ $`0.17_{-0.06}^{+0.07}\mu b.`$ (6)
Now the DISTO data point Balestra for the $`pp\to pp\varphi `$ total cross section can be used for the evaluation of the reaction amplitude. Fig. 6 shows the experimental results for the average $`pp\to pp\varphi `$ production amplitude as a function of the excess energy. Since there are only three experimental points, we cannot perform a statistical analysis of the $`|M_\omega |`$/$`|M_\varphi |`$ ratio similar to that of the $`\pi N\to VN`$ case. Note that the $`pp\to pp\varphi `$ data are available only for $`ϵ>`$80 MeV, where the FSI enhancement as well as the correction due to the finite $`\varphi `$-meson width play almost no role.
Now, to compare the data one might take the ratio of the $`pp\to pp\omega `$ and $`pp\to pp\varphi `$ amplitudes as a constant. The two experimental points at high energy give a ratio $`R\sim `$8.5. Fig. 6 shows the $`pp\to pp\varphi `$ production amplitude together with the $`pp\to pp\omega `$ experimental results divided by the factor 8.5. To illustrate the $`ϵ`$-dependence, the data are simply connected by upper and lower lines through their error bars. Fig.7, furthermore, shows the data for the $`pp\to pp\varphi `$ production amplitude using the fit (3) for the $`pp\to pp\omega `$ amplitude, again divided by the factor 8.5. Here the DISTO data point sticks out from the error band to some extent. However, it is not clear whether one may take the $`\omega `$/$`\varphi `$ ratio as independent of $`ϵ`$. As we have already demonstrated for the $`\pi N\to \omega N`$ and $`\pi N\to \varphi N`$ reactions, the $`|M_\omega |`$/$`|M_\varphi |`$ ratio depends substantially on the excess energy for $`ϵ>`$ 300 MeV. In this sense, the DISTO result does not strictly contradict the $`pp\to pp\varphi `$ data available at high energy.
Furthermore, since additional experimental results Arenton ; Golovkin are available for the ratio of the $`\varphi `$/$`\omega `$ total or differential cross sections above 8 GeV bombarding energy, we also show this ratio calculated with Eq.(5) in Fig.8 as a function of the incident proton energy.
We have performed a $`\chi ^2`$ fit to the available data on the ratio of the $`pp\to pp\varphi `$ and $`pp\to pp\omega `$ cross sections with a constant ratio of the $`|M_\omega |`$/$`|M_\varphi |`$ production amplitudes and obtained the value 8.5$`\pm `$1.0. Here the error is due to the parent standard deviation. The confidence level of the fit is below 50%. Again the DISTO result is not consistent with the constant ratio $`|M_\omega |`$/$`|M_\varphi |`$=8.5. We mention that the DISTO result on $`\varphi `$-meson production can be reproduced with $`|M_\omega |`$/$`|M_\varphi |`$=5.72$`{}_{-1.17}^{+1.01}`$, with the $`pp\to pp\omega `$ amplitude taken from the approximation (3).
## 4 Theoretical interpretations
In general Ellis the experimental results on the $`\varphi /\omega `$ ratio are compared to a constant as given by Lipkin Lipkin ,
$`R^2(\varphi /\omega )={\displaystyle \frac{g_{\varphi \rho \pi }^2}{g_{\omega \rho \pi }^2}}={\displaystyle \frac{g_{\varphi NN}^2}{g_{\omega NN}^2}}={\displaystyle \frac{\sigma (\pi N\to \varphi X)}{\sigma (\pi N\to \omega X)}}`$
$`={\displaystyle \frac{\sigma (NN\to \varphi X)}{\sigma (NN\to \omega X)}}=tan^2(\mathrm{\Delta }\theta _V)=4.2\times 10^{-3},`$ (7)
where $`\mathrm{\Delta }\theta _V=3.7^0`$ PDG is the deviation from the ideal $`\omega -\varphi `$ mixing angle. It is important to note that Eq. (7) provides the $`\varphi /\omega `$ ratio for hadronic reactions which can be expressed by the diagrams shown in Fig. 9, which contain the $`V\rho \pi `$ and $`VNN`$ vertices.
Furthermore, the ratio of the $`\omega \rho \pi `$ to the $`\varphi \rho \pi `$ coupling constant can be evaluated from the relevant partial decay widths Sakurai1 ; GellMann1 . The $`\varphi \rho \pi `$ coupling constant can be measured (as first proposed by Sakurai Sakurai1 ) via the $`\varphi \to \rho \pi `$ decay through
$`\mathrm{\Gamma }_{\varphi \rho \pi }`$ $`=`$ $`{\displaystyle \frac{g_{\varphi \rho \pi }^2}{16\pi ^2m_\varphi ^5}}{\displaystyle \int _{2m_\pi }^{m_\varphi -m_\pi }}𝑑\mu \lambda ^{3/2}(m_\varphi ^2,\mu ^2,m_\pi ^2)`$ (8)
$`\times `$ $`{\displaystyle \frac{\mu ^2\mathrm{\Gamma }_{\rho 2\pi }(\mu )}{(\mu ^2-m_\rho ^2)^2+\mu ^2\mathrm{\Gamma }_{\rho 2\pi }^2(\mu )}}.`$
Taking into account the energy dependence of the $`\rho `$-meson width and experimental numbers from the PDG PDG we obtain $`g_{\varphi \rho \pi }`$ as shown in Table 2.
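Eq. (8) can be inverted numerically for $`g_{\varphi \rho \pi }`$ once a parameterization of the running $`\rho `$ width is chosen. The sketch below uses a standard $`p`$-wave form $`\mathrm{\Gamma }_{\rho 2\pi }(\mu )=\mathrm{\Gamma }_0(m_\rho /\mu )(p(\mu )/p(m_\rho ))^3`$, which is our own assumption and not necessarily the one used for Table 2:

```python
import numpy as np

def g_phi_rho_pi(Gamma_exp, m_phi=1.0194, m_rho=0.7755, m_pi=0.1396,
                 Gamma0=0.1491, n=4000):
    """Invert Eq.(8) for g_{phi rho pi}, given the measured
    phi -> rho pi partial width Gamma_exp (all quantities in GeV)."""
    lam = lambda a, b, c: (a - b - c) ** 2 - 4.0 * b * c
    p = lambda m: np.sqrt(np.maximum(m ** 2 / 4.0 - m_pi ** 2, 0.0))

    def gamma_rho(mu):               # assumed p-wave running rho width
        return Gamma0 * (m_rho / mu) * (p(mu) / p(m_rho)) ** 3

    mu = np.linspace(2 * m_pi + 1e-6, m_phi - m_pi, n)
    spec = mu ** 2 * gamma_rho(mu) / ((mu ** 2 - m_rho ** 2) ** 2
                                      + mu ** 2 * gamma_rho(mu) ** 2)
    kin = np.maximum(lam(m_phi ** 2, mu ** 2, m_pi ** 2), 0.0) ** 1.5
    integral = np.trapz(kin * spec, mu)
    return np.sqrt(16 * np.pi ** 2 * m_phi ** 5 * Gamma_exp / integral)
```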
The separate $`\omega \to \rho \pi `$ decay is not energetically allowed, and to determine the $`\omega \rho \pi `$ coupling constant Gell-Mann and Zachariasen GellMann1 proposed to study the radiative decays $`\omega \to \pi \gamma `$ and $`\rho \to \pi \gamma `$. In their approach (see also the review of Meißner Meissner ) this process is dominated by the $`\omega \rho \pi `$ vertex with the intermediate vector meson coupled to the photon via vector dominance. The $`\omega \rho \pi `$ coupling constant can be measured via GellMann1 ; Kaymakcalan ,
$$\mathrm{\Gamma }(\omega \to \pi ^0\gamma )=\frac{g_{\omega \rho \pi }^2}{96m_\omega ^5}\frac{\alpha }{\gamma _\rho ^2}\left[m_\omega ^2-m_\pi ^2\right]^3,$$
(9)
where $`\alpha `$ is the fine structure constant. Furthermore, a direct measurement of $`\gamma _\rho `$ is possible by means of the vector meson decay into leptons Nambu
$$\mathrm{\Gamma }(\rho \to l^+l^-)=\frac{\pi }{3}\left[\frac{\alpha }{\gamma _\rho }\right]^2\sqrt{m_\rho ^2-4m_l^2}\left[1+\frac{2m_l^2}{m_\rho ^2}\right],$$
(10)
where $`m_\rho `$ and $`m_l`$ are the masses of the vector meson and lepton, respectively. In a similar way $`g_{\omega \rho \pi }`$ can be determined via the $`\rho \to \pi ^0\gamma `$ decay. The relevant coupling constants obtained with the latest PDG fit to experimental data are listed in Table 2.
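A minimal numerical inversion of Eq. (10) for $`\gamma _\rho `$ might look as follows; the leptonic width and masses are illustrative, roughly PDG-like values, not the numbers behind Table 2.

```python
import numpy as np

alpha = 1.0/137.036
m_rho, m_e = 0.7700, 0.000511            # GeV (illustrative)
Gamma_ee = 6.8e-6                        # Gamma(rho -> e+ e-) ~ 6.8 keV (assumed)

# Eq. (10) with m_l = m_e: solve for gamma_rho
phase = np.sqrt(m_rho**2 - 4*m_e**2) * (1 + 2*m_e**2/m_rho**2)
gamma_rho = alpha * np.sqrt(np.pi * phase / (3*Gamma_ee))
print(f"gamma_rho ~ {gamma_rho:.2f}, gamma_rho^2/4pi ~ {gamma_rho**2/(4*np.pi):.2f}")
```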
On the other hand, Gell-Mann, Sharp and Wagner GellMann2 proposed to determine $`g_{\omega \rho \pi }`$ through the $`\omega \to 3\pi `$ decay, assuming that the $`\omega `$ first converts into $`\rho \pi `$, followed by $`\rho \to 2\pi `$. The relation between $`\mathrm{\Gamma }(\omega \to 3\pi )`$ and the $`\omega \rho \pi `$ coupling constant is given in Ref. Lichard . A more elaborate analysis of the $`\omega \to 3\pi `$ decay includes the four-point contact term due to the direct coupling between the $`\omega `$-meson and three pions Meissner ; Klingl1 ; Kaymakcalan ; however, the contribution from this anomalous coupling to $`\mathrm{\Gamma }(\omega \to 3\pi )`$ is only about 10%. The analysis of Refs. Klingl1 ; Klingl2 provides $`g_{\omega \rho \pi }=10.88`$.
Note that the mixing angle can also be determined from the ratio of the $`\omega \to \pi ^0\gamma `$ and $`\varphi \to \pi ^0\gamma `$ radiative decay widths by applying vector dominance (9), which gives $`g_{\omega \rho \pi }/g_{\varphi \rho \pi }`$ = 12.9$`\pm `$0.4. An alternative model Meissner ; Klingl1 ; Meissner2 proposes a direct $`\omega \pi \gamma `$ coupling instead of vector dominance, where the ratio of $`g_{\omega \rho \gamma }`$ to $`g_{\varphi \rho \gamma }`$ yields $`16.8\pm 1.0`$. Both models predict values close to the mixing angle $`\theta _V=37^0`$ determined from the mass splitting in the vector-meson nonet, but depend on the vector-dominance or direct-coupling assumption. The direct $`\varphi \to \rho \pi `$ decay is a more standard way, although it leads to a rather large uncertainty in the determination of the $`\varphi \rho \pi `$ coupling.
To provide a graphical overview, Fig. 10 illustrates the ratio of the $`\omega \rho \pi `$ and $`\varphi \rho \pi `$ coupling constants evaluated from the partial decay widths. We also show the ratio given by the $`\pi N\to VN`$ and $`pp\to Vpp`$ data, assuming that this ratio is energy independent. The DISTO result is shown separately and – as discussed above – is not consistent with the other data for $`pp`$ reactions. However, within the present uncertainties the experimental results – as evaluated from all the different sources – appear to be compatible; they all disagree with the SU(3) estimate based on the $`\omega \varphi `$ mixing angle as given by the PDG PDG .
We note, furthermore, that any production mechanism different from those in Fig. 9 will invalidate the overall scaling based on the $`R^2(\varphi /\omega )`$ function Nakayama2 ; Nakayama1 . For instance, as found in Refs. Locher ; Buzatu1 ; Mull ; Buzatu2 ; Gortchakov ; Anisovich ; Markushin , two-step processes with intermediate $`K\overline{K}`$, $`K^{*}\overline{K}`$ and $`K^{*}\overline{K}^{*}`$ states may contribute substantially to $`\varphi `$ production in antiproton-proton annihilation. Certainly, such OZI-allowed processes could also have an effect on $`\varphi `$-meson production in $`\pi N`$ and $`NN`$ reactions, but their actual contribution here is so far unknown. In view of Fig. 3a we speculate that their contribution should be rather low for excess energies $`ϵ\lesssim 300`$ MeV.
## 5 Summary
We have analyzed the experimental data available for $`\omega `$- and $`\varphi `$-meson production from $`\pi N`$ and $`pp`$ reactions and have evaluated the ratio of the reaction amplitudes. Indeed, the experimental $`\varphi `$/$`\omega `$ ratio deviates substantially from the SU(3) estimate $`R^2(\varphi /\omega )=4.2\times 10^{-3}`$, which is based on the $`\omega \varphi `$ mixing angle of $`\theta _V=39^0`$.
However, it is important to recall that this SU(3) estimate is given by the ratio of the $`\varphi \rho \pi `$ to $`\omega \rho \pi `$ and $`\varphi NN`$ to $`\omega NN`$ coupling constants and is related only to reaction mechanisms involving the relevant $`V\rho \pi `$ and $`VNN`$ vertices. Obviously, any other production mechanism Locher ; Buzatu1 ; Mull ; Buzatu2 ; Gortchakov ; Anisovich ; Markushin as well as different form factors in the $`V\rho \pi `$ and $`VNN`$ vertices will lead to a deviation of the experimental ratios from the simple scaling $`R^2(\varphi /\omega )=4.2\times 10^{-3}`$.
On the other hand, by fitting the experimental ratio with a constant, our comparison of the $`\pi N`$ and $`pp`$ data with the ratio of the $`\varphi \rho \pi `$ and $`\omega \rho \pi `$ coupling constants (as evaluated from the measured partial decay widths) shows an overall compatibility. The full analysis indicates that – within the experimental uncertainties – the data on the partial decays as well as on the $`\pi N`$ and $`pp`$ reactions provide an average ratio $`R^2(\varphi /\omega )\approx 1.6\times 10^{-2}`$, which is close to the DISTO data point but disagrees with the SU(3) estimate based on the $`\omega \varphi `$ mixing angle of $`\theta _V=39^0`$.
###### Acknowledgements.
We appreciate valuable discussions with W. Kühn and J. Ritman as well as comments and suggestions from C. Hanhart and J. Haidenbauer.
# Coupling of carbon nanotubes to metallic contacts
## I Introduction
Carbon nanotubes represent an intriguing new material that has attracted much attention from both theorists and experimentalists since the early 1990s. Particularly exciting is the possibility of one-dimensional metallic conductors at room temperature that can be used as probes in scanning probe microscopy or as low-resistance ballistic interconnects for electron devices. From a more basic point of view, much can be learnt about the physics of conduction by studying the conductance of such a one-dimensional conductor at low temperatures. To exploit these possibilities it is important to understand the physics of the nanotube-metal contacts and to experimentally demonstrate low-resistance contacts in a reproducible manner. The contact between a carbon nanotube and metal can occur at the end of the tube (end-contact) or along the circumference of the tube (side-contact) . The low contact resistances demonstrated by de Pablo et al. and Soh et al. are due to a strong interaction between metal and carbon atoms at the end of the nanotube, and/or due to a lack of translational symmetry. In comparison, the interaction between metal and carbon atoms in side-contacted nanotubes is weak.
An interesting manifestation of weak distributed coupling is that the contact resistance is inversely proportional to contact length, as observed experimentally in references and . Recently, Tersoff in a perceptive paper qualitatively discussed the importance of k-vector conservation when the coupling between nanotube and metal is weak. The important physical quantities are the diameter and chirality of the nanotube, the Fermi wave vector of the metal, the area of contact, and details of the metal-nanotube contact. In this paper, we study the physics of side-contacted nanotube-metal contacts by addressing how these physical quantities affect the transmission of electrons from the nanotube to the metal contact. For small diameter nanotubes, our conclusions do not fully agree with Ref. . We find that for small diameter armchair tubes, the threshold value of the Fermi wave vector below which the conductance is very small is $`2\pi /3a_0`$ and not $`4\pi /3a_0`$, which is the value for graphene. $`a_0=2.46\AA `$ is the lattice vector length of graphene. In contrast to armchair tubes, the threshold for zigzag tubes is zero. Our calculations also show that the conductance scales with contact length, a phenomenon that has been observed experimentally in the work of Tans et al. and Frank et al.
In the remainder of the introduction, we discuss the salient results using simple arguments. The method is discussed in section II and the numerical results and discussion are presented in section III. We present our conclusions in section IV.
The first Brillouin zone of graphene touches the Fermi surface at six points (Fig. 1). Of these only two points are inequivalent (that is, do not differ by a reciprocal lattice vector). The conduction properties of graphene at low bias are controlled by the nature of the eigenstates around these points. Consider a metal making uniform contact to graphene. The in-plane wave vector should be conserved when an electron tunnels from the metal to graphene. As a result, for good coupling between metal and graphene, the metal Fermi wave vector should be comparable to $`4\pi /3a_0`$, which corresponds to the Fermi wave vector of graphene.
To discuss the case of nanotubes making contact to metal, we consider the scattering rate ($`1/\tau _{cm}`$) from the metal to nanotube within the Born approximation,
$`1/\tau _{cm}\propto <\mathrm{\Psi }_c|H_{cm}|\mathrm{\Psi }_m>\text{ ,}`$ (1)
where, $`\mathrm{\Psi }_m`$ ($`\mathrm{\Psi }_c`$) is the metal (nanotube) wave function and $`H_{cm}`$ represents the nanotube-metal coupling. The wave function of an (n,m) nanotube is $`\mathrm{\Psi }_c\propto e^{ik_tpu}\varphi _c`$, where $`k_t`$ is the axial wave vector, $`u`$ is the 1D unit cell length, $`p`$ is an integer representing the various unit cells and $`\varphi _c`$ is a vector representing the wave function of all atoms in a unit cell. It is assumed that the wave function of the metal is separable in the axial and radial directions of the nanotube, $`|\mathrm{\Psi }_m>\propto e^{ik_mpu}|\varphi _m>`$, where $`k_m`$ is the metal wave vector component along the nanotube axis. When the coupling between the nanotube and metal is uniform, the scattering rate is \[Eq. (1)\],
$`1/\tau _{cm}\propto t_{cm}<\varphi _c|\varphi _m>{\displaystyle \underset{p}{\sum }}e^{i(k_m-k_t)pu}\text{ ,}`$ (2)
where, the summation is performed over all unit cells making contact to metal and $`t_{cm}`$ represents a uniform coupling constant between the metal and nanotube. It is clear from Eq. (2) that, provided the metal and nanotube make contact over several unit cells, wave vector conservation along the axial direction is enforced, as $`\sum _pe^{i(k_m-k_t)pu}\to \frac{1}{u}\delta (k_m-k_t)`$. The axial wave vectors corresponding to $`E=0`$ are $`2\pi /3a_0`$ and $`0`$ for armchair and zigzag tubes, respectively, and the wave vector for other chiralities varies between these two limits. As a result, the threshold value of the Fermi wave vector below which coupling between an armchair (zigzag) nanotube and metal is poor is $`2\pi /3a_0`$ ($`0`$). The threshold value of the metal Fermi wave vector for chiral tubes is in between that of zigzag and armchair tubes. As the diameter of the nanotube increases, wave vector conservation along the circumference becomes increasingly important, as the strip approaches a graphene sheet.
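A few lines of code suffice to see this mechanism at work. The sketch below evaluates the coherent sum of Eq. (2) for an armchair tube (assuming a 1D cell length $`u=a_0=2.46\AA `$, our choice) and shows that it peaks at $`k_m=k_t=2\pi /3a_0\approx 0.85\AA ^{-1}`$, with a width that shrinks as the number of unit cells in contact grows.

```python
import numpy as np

# Illustration of Eq. (2): the coherent sum over unit cells enforces k conservation.
u  = 2.46                                # armchair 1D cell length in Angstrom (assumed)
kt = 2*np.pi/(3*u)                       # ~0.85 1/Angstrom, the armchair threshold
km = np.linspace(0.0, 2.0, 801)          # axial metal wave vector, 1/Angstrom

for N in (5, 20, 80):                    # number of unit cells in contact
    p = np.arange(N)
    S = np.abs(np.exp(1j*np.outer(km - kt, p)*u).sum(axis=1))**2 / N**2
    print(N, km[np.argmax(S)])           # peak sits at km ~ kt; width shrinks ~1/(N*u)
```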
## II Method
The method used to calculate the transmission probability is essentially the same as that in reference , with the only addition being the connection of a metal contact. So in this section, we mainly focus on the connection to the metal contact. The metal contact has a rectangular cross section in the (x,z) plane and is infinitely long along the y-axis, as shown in Fig. 2. The nanotube lies on the metal contact akin to the experiment of Tans et al. . In reference , the nanotube bends over the edge of the metal, and the influence of this on transport has recently been modeled by Rochefort et al. . In this work, the main focus is to model the coupling between the metal and nanotube. So we assume the nanotube to lie rigidly on the metal and neglect the effect of bending (Fig. 2). A perfectly cylindrical nanotube would touch the metal surface only along a line. To simplify modeling this interface, we stretch the entire circumference of the nanotube over the metal surface and assume coupling between carbon atoms in a sector of the circumference and the metal. Finally, charge self-consistency has been neglected.
The transmission and local density of states are calculated in a structure that can be conceptually divided into four parts: a section of the nanotube (D), which lies on the metal electrode (M), and semi-infinite regions of the nanotube L and R \[Fig. 2\]. The Hamiltonian of the system can be written as,
$`H=H_c+H_m+H_{cm}\text{ and}`$ (3)
$`H_c=H_D+H_L+H_R+H_{LD}+H_{RD}`$ (4)
where, $`H_c`$ is the pi-electron tight-binding Hamiltonian of the nanotube, with the on-site potential and hopping parameter between nearest-neighbor carbon atoms equal to 0 and 3.1 eV respectively. $`H_{LD}`$ and $`H_{RD}`$ are terms in the Hamiltonian coupling D to L and R respectively. $`H_m`$ and $`H_{cm}`$ are the free particle and nanotube-metal coupling terms of the Hamiltonian. The Green’s function $`G^r`$ is obtained by solving: $`\left[E-H_D-\mathrm{\Sigma }_L^r-\mathrm{\Sigma }_R^r-\mathrm{\Sigma }_m^r\right]G^r(E)=I`$, where the self energy $`\mathrm{\Sigma }_\alpha =V_{D\alpha }g_\alpha ^rV_{\alpha D}`$ ($`\alpha `$ = L, R or M). $`g_\alpha ^r`$ is the surface Green’s function of lead $`\alpha `$ and $`V_{D\alpha }`$ ($`V_{\alpha D}`$) is the coupling between $`D`$ ($`\alpha `$) and $`\alpha `$ ($`D`$). The transmission probability between leads $`\alpha `$ and $`\beta `$ \[$`T_{\alpha \beta }`$\] is given by,
$`T_{\alpha \beta }(E)=Trace[\mathrm{\Gamma }_\alpha (E)G^r(E)\mathrm{\Gamma }_\beta (E)G^a(E)]\text{ ,}`$ (5)
where $`\mathrm{\Gamma }_\alpha (E)=2\pi V_{D\alpha }\rho _\alpha (E)V_{\alpha D}`$ and $`\rho _\alpha (E)=-\frac{1}{\pi }Im[g_\alpha ^r(E)]`$ is the surface density of states of lead $`\alpha `$.
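To make the formalism concrete, here is a self-contained toy example of Eq. (5): NEGF transmission through a short 1D tight-binding chain coupled to two semi-infinite 1D leads. The toy Hamiltonian is our stand-in, chosen only to illustrate the Green's-function machinery, not the nanotube-metal system itself.

```python
import numpy as np

# Toy NEGF transmission T(E) = Tr[Gamma_L G^r Gamma_R G^a] for an n-site chain.
t, n = 1.0, 8
H = -t*(np.eye(n, k=1) + np.eye(n, k=-1))

def surface_g(E):
    # retarded surface Green's function of a semi-infinite 1D chain (hopping t)
    if abs(E) <= 2*t:
        return (E - 1j*np.sqrt(4*t**2 - E**2))/(2*t**2)
    return (E - np.sign(E)*np.sqrt(E**2 - 4*t**2))/(2*t**2)

def transmission(E):
    SL = np.zeros((n, n), complex); SR = np.zeros((n, n), complex)
    SL[0, 0] = SR[-1, -1] = t**2 * surface_g(E)      # Sigma = V g^r V
    Gr = np.linalg.inv((E + 1e-9j)*np.eye(n) - H - SL - SR)
    GL, GR = 1j*(SL - SL.conj().T), 1j*(SR - SR.conj().T)
    return np.trace(GL @ Gr @ GR @ Gr.conj().T).real

for E in (-1.5, 0.0, 1.5):
    print(E, transmission(E))            # ~1 inside the band for a perfect chain
```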
The Green’s function of the metal contact is calculated within the free electron approximation using the procedure outlined below. The metal contact has a rectangular cross section of dimensions $`L_x`$ and $`L_z`$ in the x and z directions respectively, and is infinitely long in the y direction. While the (y,z)-coordinates are assumed to be continuous, the x-coordinate is assumed to be discrete, with lattice spacing $`a=L_x/(N_x+1)`$, where $`N_x`$ is the number of lattice points. The wave functions ($`\mathrm{\Psi }_{mkn}`$) and eigenvalues ($`E_{mkn}`$) are given by,
$`\mathrm{\Psi }_{mkn}(r)`$ $`=`$ $`X_m(x)Y_k(y)Z_n(z)\text{ , where,}`$ (6)
$`X_m(x)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{L_x}}}sin({\displaystyle \frac{m\pi x}{L_x}})\text{ , }Y_k(y)={\displaystyle \frac{1}{\sqrt{L_y}}}exp(iky)\text{ , }Z_n(z)={\displaystyle \frac{1}{\sqrt{L_z}}}sin({\displaystyle \frac{n\pi z}{L_z}})\text{ and}`$ (7)
$`E_{mkn}`$ $`=`$ $`{\displaystyle \frac{\mathrm{\hbar }^2}{2m_oa^2}}[1-cos({\displaystyle \frac{m\pi }{N_x+1}})]+{\displaystyle \frac{\mathrm{\hbar }^2k^2}{2m_o}}+{\displaystyle \frac{\mathrm{\hbar }^2}{2m_o}}({\displaystyle \frac{n\pi }{L_z}})^2\text{ ,}`$ (8)
where, $`m`$ and $`n`$ are positive integers, and $`m_0`$ is the free electron mass. Using Eqns. (6) and (8) in the following equation for the Green’s function,
$`g(r,r^{\prime },E)={\displaystyle \underset{m,k,n}{\sum }}{\displaystyle \frac{\mathrm{\Psi }_{mkn}^{*}(r)\mathrm{\Psi }_{mkn}(r^{\prime })}{E-E_{mkn}+i\eta }}\text{ ,}`$ (9)
we obtain,
$`g(r,r^{\prime },E)=-{\displaystyle \frac{im_o}{\mathrm{\hbar }^2}}{\displaystyle \frac{1}{L_xL_z}}{\displaystyle \underset{m,n}{\sum }}{\displaystyle \frac{exp[ik_I|y-y^{\prime }|]}{k_I}}sin({\displaystyle \frac{m\pi x}{L_x}})sin({\displaystyle \frac{m\pi x^{\prime }}{L_x}})sin({\displaystyle \frac{n\pi z}{L_z}})sin({\displaystyle \frac{n\pi z^{\prime }}{L_z}})\text{ ,}`$ (10)
where,
$`k_I=\{k^2-({\displaystyle \frac{n\pi }{L_z}})^2-{\displaystyle \frac{1}{a^2}}[1-cos({\displaystyle \frac{m\pi }{N_x+1}})]+i\eta \}^{\frac{1}{2}}\text{ and }k=\sqrt{{\displaystyle \frac{2m_oE}{\mathrm{\hbar }^2}}}\text{.}`$ (11)
For carbon nanotubes, the zero of energy ($`E=0`$) is taken to lie at the band center. On the other hand, in deriving Eq. (10) the zero of energy corresponded to the band bottom of the free electron metal. In the calculations, there should be only one zero of energy, which we take to lie at the band center of the nanotube. We also neglect charging effects and assume the Fermi energy of the metal to lie at the band center of the nanotube. Then, in the coordinate system where $`E=0`$ corresponds to the band center of the nanotube, Eq. (10) can be used by transforming,
$`k=\sqrt{{\displaystyle \frac{2m_oE}{\mathrm{\hbar }^2}}}\text{ to }k=\sqrt{{\displaystyle \frac{2m_oE}{\mathrm{\hbar }^2}}+k_f^2}\text{ in Eq. (}\text{11}\text{),}`$ (12)
where, $`k_f`$ is the Fermi wave vector of the metal.
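A direct mode-sum implementation of Eqs. (10)-(12) at $`x=x^{\prime }=a`$ might look as follows; the geometry, the mode cutoff, and the sample arguments are our illustrative choices.

```python
import numpy as np

# hbar^2/(2 m_o) for a free electron, in eV*Angstrom^2
HBAR2_2M = 3.81

def g_surface(y, z, yp, zp, E, kf, Lx=20.0, Lz=20.0, Nx=10, n_max=40, eta=1e-6):
    """Mode-sum sketch of the retarded metal Green's function, Eqs. (10)-(12)."""
    a  = Lx/(Nx + 1)
    k2 = E/HBAR2_2M + kf**2                  # shifted k^2 of Eq. (12)
    g  = 0.0j
    for m in range(1, Nx + 1):
        sx  = np.sin(m*np.pi*a/Lx)**2        # sin factors at x = x' = a
        lat = (1 - np.cos(m*np.pi/(Nx + 1)))/a**2
        for nn in range(1, n_max + 1):
            kI = np.sqrt(k2 - (nn*np.pi/Lz)**2 - lat + 1j*eta)
            g += (np.exp(1j*kI*abs(y - yp))/kI * sx
                  * np.sin(nn*np.pi*z/Lz)*np.sin(nn*np.pi*zp/Lz))
    return -1j/(2*HBAR2_2M) * g/(Lx*Lz)      # prefactor -i m_o/hbar^2

print(g_surface(0.0, 10.0, 2.46, 10.0, E=0.0, kf=1.2))
```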
The component of the Green’s function that enters the calculation of the density of states and transmission probability corresponds to $`x=x^{}=a`$, the surface of the metal contact on which the nanotube lies. The $`(y,z)`$ coordinates correspond to the atomic location of the stretched out nanotube lying on the metal. For uniform coupling between the metal and nanotube, we take $`V_{DM}=tD_0`$, where $`t`$ is the strength of coupling between the free electron metal and a nanotube atom and $`D_0`$ is a diagonal matrix whose dimension is equal to the number of carbon atoms in $`D`$. The diagonal entry $`D_0(i,i)=1(0)`$ if the carbon atoms make (do not make) contact to the metal.
## III Results and Discussion
We first present results for the dependence of the threshold value of the metal Fermi wave vector on chirality, using armchair and zigzag tubes connected to the metal contact. We then discuss the diameter dependence of the conductance using the case of a zigzag tube as an example. Finally, the case of disorder in the coupling between the nanotube and metal is considered. We consider only weak coupling between the nanotube and metal. The average values of the non-zero diagonal elements of the coupling strength $`\mathrm{\Gamma }_M`$ are tabulated in Table 1 for the various values of the metal Fermi wave vector considered. The main guide for the choice of $`\mathrm{\Gamma }_M`$ is that it be much smaller than the corresponding coupling strength between two carbon atoms of the nanotube (the diagonal component of $`\mathrm{\Gamma }_L`$ is approximately equal to $`0.3eV`$ for a (2,2) nanotube). A larger/smaller value of $`\mathrm{\Gamma }_M`$ results in a larger/smaller value of transmission in Figures 3 to 5. We calculate the transmission versus contact length between nanotube and metal for various Fermi wave vectors in the metal; all atoms around the circumference of the tube are assumed to make uniform contact with the metal. We emphasize that when the metal makes contact to only a sector of the nanotube, such as in Ref. , the results for the Fermi wave vector dependence on chirality and the conductance dependence on contact length are still valid. These features depend on the nanotube-metal coupling along the axial direction, so any change due to the finite sector will not qualitatively change the results.
Experiments typically involve transmission of electrons between two metal contacts. The quantity $`T_{ML}`$ discussed in this section is, however, the transmission probability between a metal contact and a semi-infinite nanotube (Fig. 2). We consider this quantity because a long nanotube section between two metal contacts requires much more numerically intensive calculations. The physics discussed with regard to $`T_{ML}`$ in Figs. 3-5 holds in the case of two metallic contacts also, though a direct numerical comparison is not appropriate.
In the case of armchair tubes, when the metal Fermi wave vector $`k_f`$ is smaller than $`2\pi /3a_0`$ ($`0.85\AA ^{-1}`$), $`T_{ML}`$ does not change significantly with contact length, as shown for $`k_f=0.75\AA ^{-1}`$ in Fig. 3(a). For values of $`k_f`$ above the threshold, the transmission increases monotonically with an increase in contact length. The monotonic increase is due to weak metal-nanotube coupling, in which case an increase in contact length simply results in an increase in the probability to scatter from metal to nanotube. The transmission will eventually saturate with increasing contact length, as there are only two conducting modes at the band center. For the configuration considered, $`T_{ML}`$ can have a maximum value of unity. The second feature of Fig. 3(a) is the increase in transmission with increase in $`k_f`$. This can be understood by noting that electrons with a wave vector component along the nanotube axis that is larger than $`2\pi /3a_0`$ scatter from the metal to the nanotube, and a larger $`k_f`$ implies a larger number of available metal electron states. For the purpose of these calculations we considered a (2,2) armchair tube; the essential physics would in principle hold for the more realistic (10,10) nanotube also.
The case of zigzag tubes is different because the bands at $`E=0`$ cross at $`k=0`$. Then, electrons in the metal electrode with any $`k_f`$ (no threshold) can scatter into a metallic zigzag tube. The results for a (3,0) tube are shown in Fig. 3(b). Here, there are two important points. The first point is that, as there is no threshold metal Fermi wave vector, the transmission increases monotonically with contact length even for $`k_f=0.4\AA ^{-1}`$, which is smaller than the threshold for armchair tubes. The second point is that the transmission for $`k_f`$ equal to $`1.2\AA ^{-1}`$ is much smaller than that for armchair tubes \[Fig. 3(a); note that in Fig. 3(b) the transmission for the three smaller values of $`k_f`$ has been multiplied by a factor of ten\]. This is because the nanotube wave vector around the circumference ($`k_c`$) of a zigzag tube is large; $`k_c=4\pi /3a_0`$ for the crossing bands, and as a result the overlap integral \[Eq. (1)\] is small. As $`k_f=1.75\AA ^{-1}`$ is larger than the threshold for graphene, the transmission probability is larger, and comparable to that for armchair tubes \[Fig. 3(b)\].
What happens when the diameter increases? In the limit of large diameter, a nanotube is akin to graphene, and the threshold $`k_f`$ to couple well with metal should approach $`4\pi /3a_0`$. Numerically, it is difficult to simulate large diameter tubes along with large contact lengths because of the time and memory requirements associated with the calculation of $`g_M^r`$. To convey the main point we consider two simpler cases: the first compares the transmission probability of the two smallest semi-metallic zigzag tubes with varying contact lengths, and the second considers zigzag tubes of varying diameters with a rather small contact length. Fig. 4 compares the transmission probability versus contact length of the (3,0) and (6,0) nanotubes; the (6,0) nanotube has double the diameter of the (3,0) nanotube. The (6,0) nanotube correspondingly has a smaller transmission, and the trend of decreasing transmission will continue with further increase in diameter. The inset is a calculation of transmission probability versus diameter of semi-metallic zigzag tubes for a contact length of 42.6 $`\AA `$ (ten unit cells). $`T_{ML}`$ decreases with increasing diameter because wave vector conservation becomes increasingly important with increasing diameter. Shown also in this figure for comparison are $`1/\text{diameter}`$ and $`1/\sqrt{\text{diameter}}`$.
We now address the role of disorder. Disorder in either the nanotube, the metal or the nanotube-metal coupling will in general result in larger transmission when compared to the disorder-free case. Wave vector conservation is relaxed due to scattering from defects, and transmission will increase with increase in contact length even when the metal $`k_f`$ is below the threshold value. We consider the case of disorder in the nanotube-metal coupling ($`H_{cm}`$). Disorder in all elements of the coupling between the nanotube and metal was introduced randomly. The disordered coupling of atom $`i`$ to the metal contact can be written as $`t_i=\alpha t^{av}+(1-\alpha )t_i^{rand}`$, where $`t^{av}`$ is the average value of $`t_i`$ over all sites connected to the metal and $`\alpha `$ is a fraction between zero and unity. $`t_i^{rand}`$ is the random component, whose average is equal to $`t^{av}`$. In Fig. 5, the two strengths of disorder correspond to $`\alpha =0`$ and $`\alpha =0.5`$ (smaller $`\alpha `$ corresponds to larger disorder), such that $`t^{av}`$ has the same value as in Fig. 3(a). For an armchair tube in contact with a metal with $`k_f=0.75\AA ^{-1}`$, the transmission was very small and, more importantly, did not vary with contact length \[Fig. 3(a)\]. Introducing disorder changes this trend and causes a monotonic increase in transmission with length of contact \[Fig. 5\]. Similarly, for large diameter tubes, in the presence of disorder there should be significant transmission when $`k_f`$ is smaller than the threshold $`4\pi /3a_0`$. The requirement of wave vector conservation is also relaxed when the phase coherence length is small. So we expect the coupling to improve with decrease in phase coherence length.
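A sketch of this disorder model is given below. The paper does not specify the distribution of $`t_i^{rand}`$; a uniform distribution on $`[0,2t^{av}]`$ (so that its average equals $`t^{av}`$) is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def disordered_couplings(t_av, alpha, n_sites):
    # t_i = alpha*t_av + (1 - alpha)*t_i_rand, with <t_rand> = t_av;
    # the uniform distribution on [0, 2*t_av] is our assumption.
    t_rand = rng.uniform(0.0, 2*t_av, n_sites)
    return alpha*t_av + (1 - alpha)*t_rand

for alpha in (1.0, 0.5, 0.0):            # alpha = 0.5 and 0 match Fig. 5
    t = disordered_couplings(0.1, alpha, 1000)
    print(alpha, round(t.mean(), 4), round(t.std(), 4))
```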
## IV Conclusions
In this paper, we addressed some aspects of the physics of a nanotube side-contacted to metal, a problem of current importance. Coupling of carbon nanotubes to metal depends on both chirality and diameter. Wave vector conservation of an electron scattered from the nanotube to metal plays a central role in determining the transport properties. The difference between small and large diameter nanotubes is that while in the former wave vector conservation is important only in the axial direction, in the latter it is important in both the axial and circumferential directions. As a result, small diameter armchair and zigzag tubes have a cut-off value of the metal Fermi wave vector equal to $`2\pi /3a_0`$ and zero, respectively. For chiral tubes, the cut-off value of the metal Fermi wave vector lies in between these two limits, with the value decreasing with increase in chiral angle. A large diameter nanotube is akin to a graphene sheet and the cut-off value of the metal Fermi wave vector in this case approaches $`4\pi /3a_0`$ with increase in diameter. Disorder in the metal, nanotube or their coupling relaxes the requirement of k-vector conservation and in general improves coupling. The groups of references and have shown increase in conductance with contact length. In this paper, we discussed two situations that could lead to this. The first situation requires the metal Fermi wave vector to be larger than the threshold discussed in the text and holds even when there is no disorder. The second situation requires disorder in coupling to the metal but there is no restriction on the value of the Fermi wave vector.
## V Acknowledgements
We acknowledge useful discussion with W. A. de Heer (Georgia Tech), Cees Dekker and Zhen Yao (both of Delft University) and thank J. Tersoff (IBM) for providing us with a preprint of reference . We thank Mario Encinosa (FAMU) for many useful comments on the manuscript, and Alexei Svizhenko (NASA Ames) for help with commands to parallelize code.
Figure Captions:
Fig. 1: First Brillouin zone of graphene. Points $`P`$, $`P^{\prime }`$, $`P^{\prime \prime }`$, $`Q`$, $`Q^{\prime }`$, $`Q^{\prime \prime }`$ touch the Fermi surface. $`a_0`$ is the lattice vector length of graphene. A metal with Fermi wave vector smaller (inner circle) and larger (outer circle) than $`4\pi /3a_0`$ couples poorly and well to graphene, respectively. $`\mathrm{\Delta }E_{NC}`$ is the energy difference between the first non-crossing subband below and above $`E=0`$.
Fig. 2: A metal making contact to a nanotube. The $`(x,z)`$ dimensions of the metal form a rectangular cross section with lengths $`(L_x,L_z)`$. The $`y`$ direction is infinitely long.
Fig. 3: Transmission probability for (a) armchair and (b) zigzag tubes versus contact length. In both cases the largest contact length corresponds to sixty unit cells. The main point of (a) is that for a metal Fermi wave vector smaller than the threshold $`2\pi /3a_0`$, the coupling between the nanotube and metal is small and increasing the contact length does not change the transmission probability. For a metal Fermi wave vector larger than $`2\pi /3a_0`$, the transmission probability increases with increasing contact length, and also with increasing $`k_f`$ for a given contact length. The main point of (b) is that there is no threshold in the metal Fermi wave vector. Even in the case of a small value of the metal Fermi wave vector ($`0.4\AA ^{-1}`$), the transmission increases with increasing contact length, albeit the magnitude of the transmission is small. As in the armchair case, the transmission probability increases with increasing $`k_f`$ for a given contact length. The values of $`T_{ML}`$ in (b) corresponding to $`k_f`$ equal to 0.4, 0.75 and 1.2 $`\AA ^{-1}`$ are multiplied by a factor of ten.
Fig. 4: Comparison of the transmission probability of (3,0) and (6,0) nanotubes versus contact length. The transmission probability decreases with increasing diameter. Inset: the y-axis is $`T_{ML}`$ for metallic zigzag tubes scaled by $`10^4`$. The solid line is the diameter dependence of $`T_{ML}`$ for a contact length of 42.6 $`\AA `$. The upper and lower dashed lines are $`1/\sqrt{\text{diameter}}`$ and $`1/\text{diameter}`$ dependences, shown for comparison.
Fig. 5: Comparison of transmission probability versus contact length for a (2,2) armchair tube, with and without disorder in the nanotube-metal coupling. The metal Fermi wave vector is $`0.75\AA ^{-1}`$. Note that for the case without disorder, the transmission is poor and increasing the contact length does not help. Introducing disorder changes this picture, and the transmission begins to increase with increasing contact length because k-vector conservation is relaxed.
# What QCD Tells Us About Nature – and Why We Should Listen
Keynote talk at PANIC ‘99, Uppsala, Sweden, June 10, 1999. A Turn of the Millennium Conference. IASSNS-HEP/99-64
## 1 QCD is our most perfect physical theory
Here’s why:
### 1.1 It embodies deep and beautiful principles.
These are, first of all, the general principles of quantum mechanics, special relativity, and locality, that lead one to relativistic quantum field theory . In addition, we require invariance under the nonabelian gauge symmetry $`SU(3)`$, the specific matter content of quarks – six spin 1/2 Dirac fermions which are color triplets – and renormalizability. These requirements determine the theory completely, up to a very small number of continuous parameters as discussed below.
Deeper consideration reduces the axioms further. Theoretical physicists have learned the hard way that consistent, non-trivial relativistic quantum field theories are difficult to construct, due to the infinite number of degrees of freedom (per unit volume) needed to construct local fields, which tends to bring in ultraviolet divergences. To construct a relativistic quantum theory, one typically introduces at intermediate stages a cutoff, which spoils the locality or relativistic invariance of the theory. Then one attempts to remove the cutoff, while adjusting the defining parameters, to achieve a finite, cutoff-independent limiting theory. Renormalizable theories are those for which this can be done, order by order in a perturbation expansion around free field theory. This formulation, while convenient for mathematical analysis, obviously begs the question whether the perturbation theory converges (and in practice it never does).
A more straightforward procedure, conceptually, is to regulate the theory as a whole by discretizing it, approximating space-time by a lattice . This spoils the continuous space-time symmetries of the theory. Then one attempts to remove dependence on the discretization by refining it, while if necessary adjusting the defining parameters, to achieve a finite limiting theory that does not depend on the discretization, and does respect the space-time symmetries. The redefinition of parameters is necessary, because in refining the discretization one is introducing new degrees of freedom. The earlier, coarser theory results from integrating out these degrees of freedom, and if it is to represent the same physics it must incorporate their effects, for example in vacuum polarization.
In this procedure, the big question is whether the limit exists. It will do so only if the effects of integrating out the additional short-wavelength modes, that are introduced with each refinement of the lattice, can be captured accurately by a re-definition of parameters already appearing in the theory. This, in turn, will occur only in a straightforward way if these modes are weakly coupled. (Another simple possibility is that short-distance modes of different types cancel in vacuum polarization. This is what occurs in supersymmetric theories. Other types of ultraviolet fixed points are in principle possible, but difficult to imagine or investigate.) But this is true, if and only if the theory is asymptotically free.
One can investigate this question, i.e., whether the couplings decrease to zero with distance, or in other words whether the theory is asymptotically free, within weak coupling perturbation theory . One finds that only nonabelian gauge theories with simple matter content, and no non-renormalizable couplings, satisfy this criterion . Supersymmetric versions of these theories allow more elaborate, but still highly constrained, matter content.
Summarizing the argument, only those relativistic field theories which are asymptotically free can be argued in a straightforward way to exist. And the only asymptotically free theories in four space-time dimensions involve nonabelian gauge symmetry, with highly restricted matter content. So the axioms of gauge symmetry and renormalizability are, in a sense, gratuitous. They are implicit in the mere existence of non-trivial interacting quantum field theories.
Thus QCD is a member of a small aristocracy: the closed, consistent embodiments of relativity, quantum mechanics, and locality. Within this class, it is among the least affected members.
### 1.2 It provides algorithms to answer any physically meaningful question within its scope.
As I just discussed, QCD can be constructed by an explicit, precisely defined discretization and limiting procedure. This provides, in principle, a method to compute any observable, in terms that could be communicated to a Turing machine.
In fact marvelous things can be accomplished, in favorable cases, using this direct method. For a stirring example, see Figure 4 below.
The computational burden of the direct approach is, however, heavy at best. When one cannot use importance sampling, as in addressing such basic questions as calculating scattering amplitudes or finding the ground state energy at finite baryon number density, it becomes totally impractical.
For these reasons various improved perturbation theories continue to play an enormous role in our understanding and use of QCD. The most important and well-developed of these, directly based on asymptotic freedom, applies to hard processes and processes involving heavy quarks . It is what is usually called “perturbative QCD”, and leads to extremely impressive results as exemplified by Figures 1-3, below. The scope of these methods is continually expanding, to include additional “semi-hard” processes, as will be discussed in many talks at the Conference. When combined with some additional ideas, they allow us to address major questions regarding the behavior of the theory at high temperatures and large densities, as I’ll touch on below.
Chiral perturbation theory , which is based on quite different aspects of QCD, is extremely useful in discussing low-energy processes, though it is difficult to improve systematically. A proper discussion of this, or of many other approaches each of which offers some significant insight (traditional nuclear physics, bag model, Reggeism, sum rules, large N, Skyrme model, … ) would not be appropriate here.
I would however like to mention a perturbation theory which I think is considerably underrated, that is strong coupling perturbation theory . It leads to a simple, appealing, and correct understanding of confinement , and even its existing, crude implementations provide a remarkably good caricature of the low-lying hadron spectrum. It may be time to revisit this approach, using modern algorithms and computer resources.
### 1.3 Its scope is wide.
There are significant applications of QCD to nuclear physics, accelerator physics, cosmology, extreme astrophysics, unification, and natural philosophy. I’ll say just a few words about each, in turn.
Nuclear physics : Understanding atomic nuclei was of course the original goal of strong interaction physics. In principle QCD provides answers to all its questions. But in practice QCD has not superseded traditional nuclear physics within its customary domain. The relationship between QCD and traditional nuclear physics is in some respects similar to the relationship between QED and chemistry. The older disciplines retain their integrity and independence, because they tackle questions that are exceedingly refined from the point of view of the microscopic theories, involving delicate cancellations and competitions that manufacture small net energy scales out of much larger gross ones. QCD offers many insights and suggestions, however, of which we will hear much at this Conference. There is also an emerging field of extreme nuclear physics, including the study of nuclei with hard probes and heavy-ion collisions, where the influence of QCD is decisive.
Accelerator physics : Most of what goes on at high-energy accelerators is described by QCD. This application has been so successful, that experimenters no longer speak of “tests of QCD”, but of “QCD backgrounds”! Two- and even three-loop calculations of such “backgrounds” are in urgent demand. What can one add to that sincere testimonial?
Cosmology : Because of asymptotic freedom, hadronic matter becomes not impenetrably complex, but rather profoundly simpler, under the extreme conditions predicted for the early moments of the Big Bang. This stunning simplification has opened up a large and fruitful area of investigation.
Extreme astrophysics : The physics of neutron star interiors, neutron star collisions, and collapse of very massive stars involves extreme nuclear physics. It should be, and I believe that in the foreseeable future it will be, firmly based on microscopic QCD.
Unification and Natural philosophy : See below.
### 1.4 It contains a wealth of phenomena.
Let me enumerate some major ones: radiative corrections, running couplings, confinement, spontaneous (chiral) symmetry breaking, anomalies, instantons. Much could be said about each of these, but I will just add a few words about the first. The Lamb shift in QED is rightly celebrated as a triumph of quantum field theory, because it shows quantitatively, and beyond reasonable doubt, that loop effects of virtual particles are described by the precise, intricate rules of that discipline. But in QCD, we probably have by now 50 or so cases where two- and even three-loop effects are needed to do justice to experimental results – and the rules are considerably more intricate!
### 1.5 It has few parameters …
A straightforward accounting of the parameters in QCD would suggest 8: the masses of six quarks, the value of the strong coupling, and the value of the P and T violating $`\theta `$ parameter. The fact that there are only a small finite number of parameters is quite profound. It is a consequence of the constraints of gauge invariance and renormalizability (or alternatively, as we saw, existence, by way of asymptotic freedom).
In reality there are not 8 parameters, but only 6. $`g`$ is eliminated by dimensional transmutation . This means, roughly stated, that because the coupling runs as a function of distance, one cannot specify a unique numerical value for it. It will take any value, at some distance or other. One can put (say) $`g(l)=1`$, thereby determining a length scale $`l`$. What appeared to be a choice of dimensionless coupling, is revealed instead to be a choice of unit of length, or equivalently (with $`\mathrm{\hbar }=c=1`$) of mass. Only the dimensionless ratio of this mass to quark masses can enter into predictions for dimensionless quantities. So what appeared to be a one-parameter family of theories, with different couplings, turns out to be a single theory measured using differently calibrated meter-sticks.
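A few lines of code make the point concrete: at one loop the running coupling is determined entirely by the scale $`\mathrm{\Lambda }`$ at which it blows up, so specifying the coupling at any scale is equivalent to choosing $`\mathrm{\Lambda }`$. The values $`\mathrm{\Lambda }=0.2`$ GeV and $`n_f=5`$ below are illustrative choices, not fitted numbers.

```python
import numpy as np

def alpha_s(Q, Lambda=0.2, nf=5):
    # one-loop running coupling; Lambda (GeV) is the transmutation scale
    b0 = (33 - 2*nf)/(12*np.pi)
    return 1.0/(b0*np.log(Q**2/Lambda**2))

# Choosing "g(l) = 1" at some length scale is the same as choosing Lambda:
# only a scale enters, not an independent dimensionless coupling.
for Q in (5.0, 91.2, 1000.0):            # GeV
    print(Q, round(alpha_s(Q), 4))
```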
The $`\theta `$ term is eliminated, presumably, by the Peccei-Quinn mechanism . To assure us of this, it would be very nice to observe the quanta associated with this mechanism, namely axions . In any case, we know for sure that the $`\theta `$ term is very small. For purposes of strong interaction physics, within QCD itself, we can safely set it to zero, invoking P or T symmetry, and be done with it.
### 1.6 … or none.
This economy of parameters would already be quite impressive, given the wealth of phenomena described. However if we left it at that we would be doing a gross injustice to QCD, and missing one of its most striking features.
To make my point, let me call your attention to a simplified version of QCD, that I call “QCD Lite”. QCD Lite is simply QCD truncated to contain just two flavors of quarks, both of which are strictly massless, and with the $`\theta `$ parameter set to zero. These choices are natural in the technical sense, since they can be replaced by symmetry postulates. Indeed, assuming masslessness of the quarks is tantamount to assuming exact $`SU(2)\times SU(2)`$ chiral symmetry, and $`\theta =0`$ is tantamount to assuming the discrete symmetries P or T. (Actually, once we have set the quark masses to zero, we can dial away $`\theta `$ by a field redefinition.)
Now there are two especially remarkable things about QCD Lite . The first is that it is a theory which contains no continuous free parameters at all. Its only inputs are the numbers 3 (colors) and 2 (flavors). The second is that it provides an excellent semi-quantitative theory of hadronic matter.
Indeed, in reality the strange and heavier quarks have very little influence on the structure or masses of protons, neutrons, atomic nuclei, pions, rho mesons, … . Leaving them out would require us to abridge, but not to radically revise, the Rosenfeld Tables. This is proved by the remarkable quantitative success, at the 5-10% level, of lattice gauge theory in the ‘quenched’ approximation . For in this approximation, the influence of the heavier quarks on the lighter ones is systematically ignored.
The only major effect of putting the u and d masses to zero is to make the pions, which are already quite light by hadronic standards, strictly massless. The perturbation to the proton mass, for instance, can be related using chiral perturbation theory to the value of the so-called $`\sigma `$ term, a directly measurable quantity . When this is done, one finds that the u and d quark masses are responsible for only about 5% of the proton mass.
Thus QCD Lite provides a truly remarkable realization of John Wheeler’s program, “Getting Its From Bits”. For here we encounter an extremely rich and complex class of physical phenomena – including, in principle, nuclear and particle spectra – that can be calculated, accurately and without ambiguity, using as sole inputs the numbers 3 and 2.
### 1.7 It is true.
I will not waste a lot of words on this, showing instead a few pictures, each worth many thousands of words.
Figure 1 displays graphically that many independent types of experiments at different energy scales have yielded determinations of the strong coupling constant, all consistent with the predicted running. The overall accuracy and consistency of this phenomenology is reflected in the precision with which this coupling is determined, to wit 5% (at the Z mass). A remarkable feature of the theory is that a wide range of possible values for the coupling at relatively small energies focuses down to quite a narrow range at the highest accessible energies. Thus any “reasonable” choice of the scale at which the coupling becomes numerically large leads, within a few per cent, to a unique value of the coupling at the Z mass. Our successful QCD predictions for high energy experiments have no wiggle-room!
While Figure 1 is impressive, it does not do complete justice to the situation. For several of the experimental ‘points’ each represents a summary of hundreds of independent measurements, any one of which might have invalidated the theory, and which display many interesting features. Figures 2 and 3 partially ameliorate the omission. Figure 2 shows some of the experimental data on deep inelastic scattering – all subsumed within the ‘DIS’ point in Figure 1 – unfolded to show the complete $`Q^2`$ and $`x`$ dependence. Our predictions for the pattern of evolution of structure functions with $`Q^2`$ –decrease at large $`x`$, increase at small $`x`$ – are now confirmed in great detail, and with considerable precision. Particularly spectacular is the rapid growth at small $`x`$. This was predicted , in the form now observed, very early on. However, even at the time we realized this rise could not continue forever. The proliferating partons begin to form a dense system, and eventually one must cease to regard them as independent. There is a very interesting many-body problem developing here, which seems ripe for experimental and theoretical investigation, and may finally allow us to make contact between microscopic QCD and the remarkably successful Regge-pole phenomenology.
Figure 3 displays a comparison of the experimental distribution of jet energies, in 3-jet events, with the QCD prediction. Shown is the energy fraction of the second hardest jet, compared to its kinematic maximum. For a detailed explanation, see . The rise at $`x_2\to 1`$ reflects the singularity of soft gluon bremsstrahlung, and matches the prediction of QCD (solid line). For comparison, the predictions for hypothetical scalar gluons are shown by the dotted line. This is as close to a direct measurement of the core interaction of QCD, the basic quark-gluon vertex, as you could hope to see. The other piece of this Figure displays a related but more sophisticated comparison, using the Ellis-Karliner angle.
Finally Figure 4 shows the comparison of the QCD predictions to the spectrum of low-lying hadrons. Unlike what was shown in the previous two Figures, and most of the points in Figure 1, this tests the whole structure of the theory, not only its perturbative aspect. The quality of the fit is remarkable. Note that only one adjustable parameter (the strange quark mass) and one overall choice of normalization go into the calculation. Otherwise it’s pure “Its From Bits”. Improvements due to enhanced computing power and to the use of domain wall quarks , that more nearly respect chiral symmetry, are on the horizon.
Since there seems to be much confusion (and obfuscation) on the point, let me emphasize an aspect of Figure 4 that ought to be blindingly clear: what you don’t see in it. You don’t see massless degrees of freedom with long-range gauge interactions, nor parity doublets. That is, confinement and chiral symmetry breaking are simply true facts about the solution of QCD, that emerge by direct calculation. The numerical work has taken us way beyond abstract discussion of these features.
### 1.8 It lacks flaws.
Finally, to justify the adverb in “most perfect” I must briefly recall for you some prominent flaws in our other best theories of physics, which QCD does not share.
Quantum electrodynamics is of course extremely useful – incomparably more useful than QCD – and successful in practice. But there is a worm in its bud. It is not asymptotically free. Treated outside of perturbation theory, or extrapolated to extremely high energy, QED becomes internally inconsistent. Modern electroweak theory shares many of the virtues of QED, but it harbors the same worm, and in addition contains many loose ends and continuous free parameters. General relativity is the deepest and most beautiful theory of all, but it breaks down in several known circumstances, producing singularities that have no meaningful interpretation within the theory. Nor does it mesh seamlessly with the considerably better tested and established framework we use for understanding the remainder of physics. Specifically, general relativity is notoriously difficult to quantize. Finally, it begs the question of why the cosmological term is zero, or at least fantastically small when measured in its natural units. Superstring theory promises to solve some and conceivably all of these difficulties, and to provide a fully integrated theory of Nature, but I think it is fair to say that in its present form superstring theory is not defined by clear principles, nor does it provide definite algorithms to answer questions within its claimed scope, so there remains a big gap between promise and delivery.
## 2 Breaking New Ground in QCD, 1: High Temperature
The behavior of QCD at high temperature, and low baryon number density, is relevant to cosmology – indeed, it describes the bulk of matter filling the Universe, during the first few seconds of the Big Bang – and to the description of both numerical and physical experiments. There are ambitious experimental programs planned for RHIC, and eventually LHC, to probe this physics. It is also, I think, intrinsically fascinating to ask – what happens to empty space, if you keep adding heat?
The equilibrium thermodynamics of QCD at finite temperature (and zero chemical potential) is amenable to direct simulation, using the techniques of lattice gauge theory. Figure 5 does not quite represent the current, rapidly evolving, state of the art, but it does already demonstrate some major qualitative points.
The chiral symmetry breaking condensate, clearly present (as previously advertised) at zero temperature, weakens and seems to be gone by $`T\approx 150\mathrm{MeV}`$. Likewise at these temperatures there is a sizable increase in the value of the Polyakov loop, indicating that the force between distant color sources has considerably weakened. Furthermore the energy density increases rapidly, approaching the value one would calculate for an ideal gas of quarks and gluons. The pressure likewise increases rapidly, but lags somewhat behind the ideal gas value.
All these phenomena indicate that at these temperatures and above a description using quarks and gluons as the degrees of freedom is much simpler and more appropriate than a description involving ordinary hadrons. Indeed, the quarks and gluons appear to be quasi-free. That is what one expects, from asymptotic freedom, for the high-energy modes that dominate the thermodynamics.
It is an interesting challenge to reproduce the pressure analytically . Since the only scale in the problem is the large temperature, if one can organize the calculation so as to avoid infrared divergences, asymptotic freedom will legitimize a weak coupling treatment. Even more interesting would be to do this by a method that also works at finite density, since the equation of state is of great interest for astrophysics and is not accessible numerically.
There is no doubt, in any case, that QCD predicts the existence of a quark-gluon plasma phase, wherein its basic degrees of freedom, normally hidden, come to occupy center stage.
While the transition to a quark-gluon plasma at asymptotically high temperatures is not unexpected, the abruptness and especially the precocity of the change are startling. Below 150 MeV the only important hadronic degrees of freedom are the pions. Why does this rather dilute pion gas suddenly go berserk?
The change is enormous, quantitatively. The pions represent precisely 3 degrees of freedom. The free quarks and gluons, with all their colors, spins, and antiparticles, represent 52 degrees of freedom.
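The counting behind these two numbers (our arithmetic; the quark count evidently includes the strange quark among the light flavors) is

$$3_{\text{pions}}\text{ versus }\underset{\text{gluons}}{\underbrace{8\times 2}}+\underset{\text{flavors}\times \text{colors}\times \text{spins}\times (q,\overline{q})}{\underbrace{3\times 3\times 2\times 2}}=16+36=52.$$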
There are many ideas for detecting signals of quark-gluon plasma formation in heavy ion collisions, which you will be hearing much of. I would like discuss briefly a related but more focused question, on which there has been dramatic progress recently . This is the question, whether there is a true phase transition in QCD accessible to experiment.
One might think that the answer is obviously “yes”, since there are striking differences between ordinary hadronic matter and the quark-gluon plasma. This is not decisive, however. Let me remind you that the dissociation of ordinary atomic gases into plasmas is not accompanied by a phase transition, even though these states of matter are very different (so different, that at Princeton they are studied on separate campuses). Similarly, confinement of quarks is believed to go over continuously, at high temperature, into screening – certainly, no one has demonstrated the existence of an order parameter to distinguish between them.
What about chiral symmetry restoration? For massless quarks, there is a definite difference between the low-temperature phase of broken chiral symmetry and the quasi-free phase with chiral symmetry restored, so there must be a phase transition. A rather subtle analysis using the renormalization group indicated that for two massless quarks one might have a second-order phase transition, while for three massless quarks it must be first-order . This is the pattern observed in numerical simulations.
The real world has two very light quarks and one (the strange quark) whose mass is neither clearly small nor clearly large compared to basic QCD scales. Here, then, a sharp question emerges. If the strange quark is effectively heavy, and the other quarks are taken strictly massless, we should have a second-order transition. If the strange quark is effectively light, we should have a first-order transition. Which is the case, for the physical value of the strange quark mass? Although this has been a controversial question, there seems to be an emerging consensus among lattice gauge theorists that it is second-order.
Unfortunately, this means that with small but finite $`u`$ and $`d`$ quark masses we will not have a sharp phase transition at all, but only a crossover. And not a particularly sharp one, at that. For while we are ordinarily encouraged to treat these masses as small perturbations, they are responsible for the pion masses, which are far from negligible at the temperatures under discussion. So the relevant correlation length never gets very large.
Nevertheless it was interesting to point out that in the $`m_s`$–$`T`$ plane one could naturally connect the first- and second-order behaviors, as in Figure 6. The line of first-order transitions ends at a tricritical point. This is a true critical point, with diverging correlation lengths and large fluctuations. The first-order line, since it is a locus of discontinuities, and therefore the existence of a tricritical point where it terminates, are features which survive the small perturbation due to non-zero light quark masses. All these statements can be tested against numerical simulation.
Stephanov, Rajagopal and Shuryak have brought the subject to a new level of interest, taking off from the simple but brilliant observation that one expects similar behavior in the $`\mu `$–$`T`$ plane. The big advantage of this is that, while $`m_s`$ is not a control parameter one can vary experimentally, the chemical potential $`\mu `$ is. They have proposed quite specific, characteristic signatures for passage near this transition in the thermal history of a fireball, such as might be obtained in heavy ion collisions . The signatures involve enhanced fluctuations and excess, non-thermal production of low-energy pions.
I believe it ought to be possible to refine the prediction, by locating the tricritical point theoretically. For while it is notoriously difficult to deal with large chemical potentials at small temperature numerically, there are good reasons to be optimistic about high temperatures and relatively small chemical potentials, which is our concern here .
If all these strands can be brought together, it will be a wonderful interweaving of theory, experiment, and numerics.
## 3 Breaking New Ground in QCD, 2: High Density
The behavior of QCD at high baryon number density, and low temperature, is of direct interest for describing neutron star interiors, neutron star collisions, and events near the core of collapsing stars. Unfortunately, it has proved quite difficult to calculate this behavior directly numerically using lattice gauge theory techniques. This is because in the presence of a chemical potential the functional integral for the partition function is no longer positive definite (or even real) configuration by configuration, so importance sampling fails, and the calculation converges only very slowly.
On the other hand, there has been remarkable progress on this problem over the last year or two using analytical techniques. This has shed considerable new light on many aspects of QCD. We have new, fully calculable mechanisms for confinement and chiral symmetry breaking . Amazingly, we find that two famous, historically influential “mistakes” from the prehistory of QCD – the Han-Nambu assignment of integer charge to quarks, and the Sakurai model of vector mesons as Yang-Mills fields – emerge from the microscopic theory at high density. And we find that in the slightly idealized version of QCD with three degenerate light quarks, there need be no phase transition separating the calculable high density phase from (the appropriate version of) nuclear matter !
At the request of the organizers I will be giving a separate seminar on these developments , so here I will be telegraphic.
Why might we expect QCD to become analytically tractable at high density? At the crudest heuristic level, it is a case of asymptotic freedom meets the Fermi surface.
Let us suppose, optimistically, that a weak coupling treatment is going to be appropriate, and see where it leads.
If the coupling is weak and the density large, our first approximation to the ground state is large Fermi balls for all the quarks. Due to the Pauli exclusion principle, the modes deep within the ball will be energetically costly to excite, and the important low-energy degrees of freedom will be the modes close to the Fermi surface. But these modes will have large momentum. Thus their interactions, generically, will either hardly deflect them, or will involve large momentum transfer. In the first case we don’t care, while the second involves a weak coupling, due to asymptotic freedom.
On reflection, one perceives two big holes in this argument. First, it doesn’t touch the gluons. They remain massless, with singular interactions and strong couplings in the infrared that do not appear to be under control. Second, as we learn in the theory of superconductivity, the Fermi surface is generically unstable, even at weak coupling. This is because pairs of particles (or holes) of equal and opposite momenta are low-energy excitations which can all scatter into one another. Thus one is doing highly degenerate perturbation theory, and in that circumstance even a small coupling can have large qualitative effects.
Fortunately, our brethren in condensed matter physics have taught us how to deal with the second problem, and its proper treatment also cures the first. There is an attractive interaction between quarks on opposite sides of the Fermi surface, and they pair up and condense. In favorable cases – and in particular, for three degenerate or nearly degenerate flavors – this color superconductivity produces a gap for all the fermion excitations, and also gives mass to all the gluons.
Thus a proper weak-coupling treatment automatically avoids all potential infrared divergences, and our optimistic invocation of asymptotic freedom provides, at asymptotically high density, its own justification.
## 4 Breaking New Ground in QCD, 3: Unification
The different components of the standard model have a similar mathematical structure, all being gauge theories. Their common structure encourages the speculation that they are different facets of a more encompassing gauge symmetry, in which the different strong and weak color charges, as well as electromagnetic charge, would all appear on the same footing. The multiplet structure of the quarks and leptons in the standard model fits beautifully into small representations of unification groups such as $`SU(5)`$ or $`SO(10)`$. There is the apparent difficulty, however, that the coupling strengths of the different standard model interactions are widely different, whereas unification requires that they share a common value.
The running of couplings suggests an escape from this impasse . Since the strong, weak, and electromagnetic couplings run at different rates, their inequality at currently accessible scales need not reflect the ultimate state of affairs. We can imagine that spontaneous symmetry breaking – a soft effect – has hidden the full symmetry of the unified interaction. What is really required is that the fundamental, bare couplings be equal, or in more prosaic terms, that the running couplings of the different interactions should become equal beyond some large scale.
Using simple generalizations of the formulas derived and tested in QCD, which are none other than the ones experimentally validated in Figure 1, we can calculate the running of all the couplings, to see whether this requirement is met. In doing so one must make some hypothesis about the spectrum of virtual particles. If there are additional massive particles (or, better, fields) that have not yet been observed, they will contribute significantly to the running of couplings once the scale exceeds their mass.
Let us first consider the default assumption, that there are no new fields beyond those that occur in the standard model. The results of this calculation are displayed in Figure 7 . Considering the enormity of the extrapolation this works remarkably well, but the accurate experimental data indicates unequivocally that something is wrong.
There is one particularly attractive way to extend the standard model, by including supersymmetry. Supersymmetry cannot be exact, but if it is only mildly broken (so that the superpartners have masses $`<`$ 1 TeV) it can help explain why radiative corrections to the Higgs mass parameter, and thus to the scale of weak symmetry breaking, are not enormously large. In the absence of supersymmetry power counting would indicate a hard, quadratic dependence of this parameter on the cutoff. Supersymmetry removes the most divergent contribution, by canceling boson against fermion loops. If the masses of the superpartners are not too heavy, the residual finite contributions due to supersymmetry breaking will not be too large. The minimal supersymmetric extension of the standard model, then, makes definite predictions for the spectrum of virtual particles starting at 1 TeV or so. Since the running of couplings is logarithmic, it is not extremely sensitive to the unknown details of the supersymmetric mass spectrum, and we can assess the impact of supersymmetry on the unification hypothesis quantitatively. The results, as shown in Figure 8 , are quite encouraging.
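To make the logic concrete, here is a minimal one-loop sketch of the calculation, not the full analysis behind Figures 7 and 8. It assumes the textbook one-loop coefficients in GUT normalization (standard model: $`b=(41/10,-19/6,-7)`$; MSSM: $`b=(33/5,1,-3)`$), uses rounded inputs at $`M_Z`$, ignores two-loop terms and mass thresholds, and for simplicity applies the MSSM coefficients from $`M_Z`$ upward rather than from the superpartner scale.

```python
import numpy as np

# One-loop running of the inverse couplings:
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i / 2pi) * ln(mu / M_Z)
M_Z = 91.19  # GeV
alpha_em_inv, sin2_thetaW, alpha_s = 127.9, 0.2312, 0.118  # rounded inputs
alpha_inv_MZ = np.array([
    (3.0 / 5.0) * (1.0 - sin2_thetaW) * alpha_em_inv,  # alpha_1^{-1}, GUT-normalized U(1)
    sin2_thetaW * alpha_em_inv,                        # alpha_2^{-1}
    1.0 / alpha_s,                                     # alpha_3^{-1}
])
b_SM   = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])
b_MSSM = np.array([33.0 / 5.0, 1.0, -3.0])

def alpha_inv(mu, b):
    return alpha_inv_MZ - b / (2.0 * np.pi) * np.log(mu / M_Z)

for mu in (1e3, 1e10, 2e16):
    print(f"{mu:9.1e} GeV  SM {alpha_inv(mu, b_SM).round(1)}  "
          f"MSSM {alpha_inv(mu, b_MSSM).round(1)}")
```

Run as is, the three MSSM inverse couplings come together near $`2\times 10^{16}`$ GeV (at $`\alpha ^{-1}\simeq 24`$ in this crude approximation), while the standard-model set never meets at a common point, reproducing the qualitative contrast between Figures 7 and 8.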
A notable result of the unification of couplings calculation, especially in its supersymmetric form, is that the unification occurs at an energy scale which is enormously large by the standards of traditional particle physics, perhaps approaching $`10^{16}`$–$`10^{17}`$ GeV. From a phenomenological viewpoint, this is fortunate. The most compelling unification schemes merge quarks, antiquarks, leptons, and antileptons into common multiplets, and have gauge bosons mediating transitions among all these particle types. Baryon number violating processes almost inevitably result, whose rate is inversely proportional to the fourth power of the gauge boson masses, and thus to the fourth power of the unification scale. Only for such large values of the scale is one safe from experimental limits on nucleon instability.
From a theoretical point of view the large scale is fascinating because it brings us, from the internal logic of particle physics, to the threshold of quantum gravity.
I find it quite remarkable that the logarithmic running of couplings, discovered theoretically and now amply verified within QCD, permits a meaningful quantitative discussion of these extremely ambitious and otherwise thinly rooted ideas, and even allows us to discriminate between different possibilities (especially, SUSY vs. non-SUSY).
## 5 Lessons: The Nature of Nature
Since QCD is our most perfect example of a fundamental theory of Nature, it is appropriate to use it as a basis for drawing broad conclusions about how Nature works, or, in other words, for “natural philosophy”.
Let me do this by listing some adjectives we might use to describe the theory; the implication being, that these adjectives therefore describe Nature herself:
alien : As has been the case for all fundamental physical theories since Galileo, QCD is formulated in abstract mathematical terms. In particular, there are no hints of moral concepts or purposes. Nor, in thousands of rigorous experiments, have we encountered any signs of active intervention in the unfolding of the equations according to permanent laws.
simple : In its appropriate, natural language, QCD can be written in one line, using only symbols that cleanly embody its conceptual basis.
beautiful : That she achieves so much with such economy of means marks Nature as a skillful artist. She plays with symmetries, creating and destroying them in varied, fascinating ways.
weird : Quantum mechanics is notoriously weird, and QCD incorporates it in its marrow. Less remarked, but to me no less weird, is the need to define QCD through a limiting procedure. This would seem to be a rather difficult and inefficient way to run a Universe.
comprehensible : QCD wonderfully illustrates Einstein’s remark, “The most incomprehensible thing about Nature is that it is comprehensible.” One hundred years ago people did not know there were such things as atomic nuclei and a strong interaction; just over fifty years ago the pion and kaon were discovered; just over twenty-five years ago the strong interaction problem still seemed hopelessly intractable. That we, collectively, have got from there to here so quickly, against overwhelming odds, is an extraordinary achievement. It is a tribute to our culture, and to the glory of the human mind. And it is there that we must locate, for now, the most incomprehensible thing about Nature.
Figure Captions
Figure 1 Experimental verification of the running of the coupling, as predicted in . The determinations, running from left to right, are from: corrections to the Bjorken sum rule, corrections to the Gross–Llewellyn Smith sum rule, hadronic width of the $`\tau `$ lepton, b$`\overline{\mathrm{b}}`$ threshold production, prompt photon production in pp and p$`\overline{\mathrm{p}}`$ collisions, scaling violation in deep inelastic scattering, lattice gauge theory calculations for heavy quark spectra, heavy quarkonium decays, shape variables characterizing jets at different energies (white dots), total $`e^+e^{-}`$ annihilation cross section, jet production in semileptonic and hadronic processes, energy dependence of photons in Z decay, W production, and electroweak radiative corrections. From .
Figure 2 Evolution of the structure function F<sub>2</sub>, as measured and compared with QCD predictions (solid lines). From .
Figure 3 Energy and angular characterization of three-jet events, testing the basic quark-gluon vertex very directly. From .
Figure 4 Comparison of the hadronic spectrum with first-principles calculations from QCD, using techniques of lattice gauge theory. From .
Figure 5 Top part: Evolution of the energy and of the pressure of 2-flavor QCD as a function of temperature, showing precocious and rapid approach to a quasi-free quark-gluon plasma. Bottom part: Evolution of the chiral condensation order parameter and of the Polyakov loop, which is a measure of the inverse induced mass of an inserted color source, and vanishes in a confined phase. One sees clear signals of chiral symmetry restoration and deconfinement .
Figure 6 Connecting the second- and first- order chiral symmetry restoration transitions predicted for two, respectively three, light quark flavors. The end of the first-order line is a tricritical point. A similar diagram may be valid for fixed strange quark mass and varying chemical potential.
Figure 7 Running of the couplings extrapolated toward very high scales, using just the fields of the standard model. The couplings do not quite meet. Experimental uncertainties in the extrapolation are indicated by the width of the lines .
Figure 8 Running of the couplings extrapolated to high scales, including the effects of supersymmetric particles starting at 1 TeV. Within experimental and theoretical uncertainties, the couplings do meet .
# Distribution of fractal dimensions at the Anderson transition
## Abstract
We investigated numerically the distribution of participation numbers in the 3d Anderson tight-binding model at the localization-delocalization threshold. These numbers in one disordered system experience strong level-to-level fluctuations in a wide energy range. The fluctuations grow substantially with increasing size of the system. We argue that the fluctuations of the correlation dimension $`D_2`$ of the wave functions are the main reason for this. The distribution of these correlation dimensions at the transition is calculated. In the thermodynamic limit ($`L\to \mathrm{\infty }`$) it does not depend on the system size $`L`$. An interesting feature of this limiting distribution is that it vanishes exactly at $`D_{2\mathrm{max}}=1.83`$, the highest possible value of the correlation dimension at the Anderson threshold in this model.
The localization-delocalization Anderson transition has posed for a long time a fascinating problem. In systems with short range interaction, purely diagonal disorder and unbroken time-reversal symmetry, without spin-orbit interaction, it occurs for dimensions $`d>2`$ . In the thermodynamic limit (i.e. infinite system size) the transition point separates the systems where all wave functions are localized from the systems where some part of them is extended. Exactly at the transition one finds extended wave functions in the center of the band. Due to the proximity of the unavoidable localization on one side of the transition, they have a self-similar fractal (actually multifractal) structure. This is a direct consequence of quantum critical fluctuations at the transition point.
This multifractality of the wave functions at the localization threshold is one of the most important features discovered since the pioneering work of Anderson . This fruitful idea has become widely recognized (see e.g. Refs. ) and has considerably helped our understanding of different phenomena in mesoscopic systems related to electron localization. For instance the well-known log-normal distribution of the conductance in disordered metals can be taken as a fingerprint of multifractal wave functions which survive in the weakly disordered state (so-called pre-localized states ).
Usually the multifractal (as well as fractal) structure of a wave function manifests itself in the size dependence of the participation number (PN)
$$𝒩=\left(\int \left|\psi (𝐫)\right|^4𝑑𝐫\right)^{-1}\propto L^{D_2},$$
(1)
where $`L`$ is the system size and $`D_2<d`$ is the correlation dimension of the wave function $`\psi (𝐫)`$. For a localized state $`D_2=0`$ and $`𝒩`$ does not depend on $`L`$. On the other hand $`D_2=d`$ for a delocalized wave function which extends uniformly over the sample. The inequality $`0<D_2<d`$ means that a multifractal wave function is delocalized but, in the thermodynamic limit, nevertheless occupies only an infinitesimal fraction of the sample.
Due to strong level-to-level fluctuations it is very hard to verify the size dependence of $`𝒩`$ for a particular state in a computer experiment. At the transition these fluctuations increase with increasing system size. Therefore, one has to study the size dependence of the averages of $`𝒩`$ or, preferably, the distribution function. The size dependence of the fluctuations of $`𝒩`$ is then converted to the size dependence of this distribution function. Choosing suitable size dependent variables one can collapse these distributions (for different system sizes) to one universal curve. However, to do this systematically one needs to understand the origin of these fluctuations. In our opinion the main source of above mentioned giant fluctuations of PN at the transition are the fluctuations of the fractal dimension $`D_2`$ of the wave functions.
In a disordered system, exactly at the transition , critical wave functions with very different degrees of delocalization (participation numbers) should coexist independently of their energy. This is a direct consequence of the infinitesimal proximity of localized behavior on one side of the transition and delocalized behavior on the other. Accordingly, each delocalized wave function should have a different $`D_2`$. If the distribution function $`𝒫(D_2)`$ becomes size-independent in the thermodynamic limit, our hypothesis will be correct.
In the present paper we present numerical results for this distribution (more exactly for the distribution of logarithms of the PN) and show that it is indeed universal, i.e. it does not depend on the system size in the thermodynamic limit. This is somewhat reminiscent of the previous idea of Shapiro about the existence of universal distributions at the Anderson transition. This problem has recently been addressed analytically in a couple of papers, but far from the transition point.
In Ref. the variance of the inverse participation ratio (IPR) was calculated for disordered metallic samples in leading order in the small parameter $`1/g^2`$, where $`g\gg 1`$ is the dimensionless conductance of the system. It was suggested that the relative value of the IPR-fluctuations should be of order unity at the transition point and that their distribution should be universal. A similar question was raised recently in Ref. where the distribution of the IPR was calculated for a large but finite conductance $`g`$ of small metallic grains.
To investigate this problem at the transition we numerically solve the standard Anderson tight-binding model with diagonal disorder on a 3d simple cubic lattice with the Hamiltonian
$$\mathcal{H}=\underset{i}{\sum }\epsilon _i|i\rangle \langle i|+\underset{ij}{\sum }t_{ij}|i\rangle \langle j|.$$
(2)
To model the disorder we distribute the site energies $`\epsilon _i`$ uniformly in the interval $`-W/2<\epsilon _i<W/2`$. For the off-diagonal elements we took $`t_{ij}=1`$ for nearest neighbors and otherwise zero. In this model the Anderson transition occurs at the critical value $`W_c=16.5`$ (see, e.g. Ref. and references therein).
Diagonalizing the matrix $`\mathcal{H}_{ij}`$ for a cubic sample with $`N=L^3`$ sites and open boundary conditions we find a set of $`N`$ orthonormal eigenvectors $`e_s(j)`$ and corresponding eigenvalues $`E_j`$. The participation number for a state $`j`$ is defined as usual by
$$𝒩_j=\left(\underset{s=1}{\overset{N}{\sum }}e_s^4(j)\right)^{-1}.$$
(3)
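For orientation, the following is a minimal sketch of this numerical procedure (our own illustration, not the production code behind the results below): it builds the Hamiltonian of Eq. (2) as a sparse matrix, extracts the eigenstates nearest the band centre by shift-invert Lanczos, and evaluates Eq. (3). System size and disorder averaging are deliberately modest here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def participation_numbers(L=10, W=16.5, n_states=50, seed=0):
    """PN of eigenstates near E = 0 for the 3d Anderson model, Eqs. (2)-(3)."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-W / 2, W / 2, L ** 3)     # diagonal disorder
    ones = np.ones(L - 1)
    hop1d = sp.diags([ones, ones], [1, -1])      # t_ij = 1, open boundaries
    eye = sp.identity(L)
    H = sp.diags(eps) \
        + sp.kron(sp.kron(hop1d, eye), eye) \
        + sp.kron(sp.kron(eye, hop1d), eye) \
        + sp.kron(sp.kron(eye, eye), hop1d)
    # shift-invert Lanczos: eigenpairs closest to the band centre E = 0
    E, vecs = eigsh(H.tocsc(), k=n_states, sigma=0.0, which='LM')
    return E, 1.0 / np.sum(vecs ** 4, axis=0)    # Eq. (3)

E, pn = participation_numbers()
print(pn.min(), pn.max())   # spans the strong level-to-level fluctuations
```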
Fig. 1 shows the participation numbers $`𝒩_j`$ versus energy $`E_j`$ for a 3d system with $`N=2744`$ sites. At the transition there is a strong level-to-level fluctuation of the PN. In the whole energy range states are found which are very close in energy but whose $`𝒩_j`$ differ by up to two orders of magnitude. This means that the energy of a particular state is not indicative of the spatial behavior of the wave function. In the next step we therefore calculate the distribution function of the PN in an energy band around zero energy (where the Anderson transition takes place). We take the width of this band equal to $`10`$, i.e. we include all states with $`|E_j|<5`$. The distribution remains practically the same if we take a much narrower strip, $`|E_j|<1`$, but the computation time increases significantly.
The normalized distribution functions of the participation numbers for different system sizes are shown in Figure 2. As expected the distributions are strongly size dependent. With increasing system size the position of the maximum shifts to higher values, approximately as $`N^{0.3}`$, and the amplitude of the maximum decreases as $`N^{-0.46}`$.
Let us suppose now that the fluctuation of the PN at the transition is due to fluctuations of $`D_2`$ of the wave functions. According to Eq. (1), without loss of generality, we can take this relation in the form $`𝒩N^{D_2/3}`$. Then if $`F_N(𝒩)`$ is the normalized distribution function of PN for an ensemble of systems with $`N`$ sites and different disorder, we can extract the distribution function of the correlation dimensions, $`D_2`$, in this ensemble
$$𝒫_N(D_2)=(1/3)F_N\left(N^{D_2/3}\right)N^{D_2/3}\mathrm{ln}N.$$
(4)
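In practice the change of variables of Eq. (4) is equivalent to histogramming $`D_2=3\mathrm{ln}𝒩_j/\mathrm{ln}N`$ directly; a minimal sketch (the bin count is illustrative):

```python
import numpy as np

def d2_distribution(pn_samples, N, bins=40):
    """P_N(D2) from participation-number samples, via PN ~ N**(D2/3);
    this is the change of variables of Eq. (4) in histogram form."""
    d2 = 3.0 * np.log(pn_samples) / np.log(N)
    density, edges = np.histogram(d2, bins=bins, density=True)
    return 0.5 * (edges[1:] + edges[:-1]), density
```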
Fig. 3 shows that this distribution of correlation dimensions is much less sensitive to the system size than the one of the PN. Moreover, with increasing $`N`$ it obviously approaches some size-independent function, $`𝒫_{\mathrm{\infty }}(D_2)`$, i.e. in the thermodynamic limit a universal distribution function of correlation dimensions does exist at the Anderson transition. For a given model of disorder it should be the same for different realizations. In other words we believe that the distribution function $`𝒫_{\mathrm{\infty }}(D_2)`$ is a self-averaged quantity and can be obtained by analyzing one sufficiently large system.
Fig. 3 shows an additional interesting feature of this distribution: with increasing argument the function $`𝒫_N(D_2)`$ rapidly decays to zero. The drop gets steeper with increasing size $`N`$. This is seen more clearly in a logarithmic plot of $`𝒫_N(D_2)`$. For systems with $`N\ge 512`$, $`𝒫_N(D_2)`$ drops four orders of magnitude in the small interval $`1.5<D_2<2`$. We think, therefore, that in the thermodynamic limit there is a point, $`D_{2\mathrm{max}}`$, where the function $`𝒫_{\mathrm{\infty }}(D_2)`$ approaches exactly zero. In such a case this would be the highest possible value of the correlation dimension $`D_2`$ at the Anderson transition. In the following we present additional evidence in support of this idea.
For that purpose we calculate the size dependence of the average PN and of the PN of the most extended state in the system, i.e. for a given realization of disorder the state with maximal participation number, $`𝒩_{\mathrm{max}}`$. The ensemble averages of both quantities (denoted by aapn and ampn, respectively) against system size $`N`$ are plotted in Fig. 4. First, both quantities scale with $`N`$ as a power law, $`\propto N^{D_2/3}`$. Second, the value $`D_2=1.26`$ for the average participation number (aapn) is very close to the position of the maximum in the distribution function of correlation dimensions shown in Figure 3. It is surprising that averaging over all states does not affect the power law behavior given by Eq. (1) for a single state. For the ensemble average of the maximum participation number, $`𝒩_{\mathrm{max}}`$, we find $`D_2=1.83`$. This is the correlation dimension of the most extended state in the system at the transition. To answer the question whether it really is the maximal correlation dimension, $`D_{2\mathrm{max}}`$, we should study the fluctuations of this quantity. A maximal correlation dimension $`D_{2\mathrm{max}}`$ does exist if the fluctuation of this quantity goes to zero with $`N\to \mathrm{\infty }`$.
Fig. 5 shows the relative fluctuations of the maximum participation number versus system size. Firstly the fluctuations are rather small and decrease slowly with increasing $`N`$. According to Eq. (1) we can relate them to the fluctuations of the correlation dimension $`\delta D_{2\mathrm{m}\mathrm{a}\mathrm{x}}`$
$$\delta _{\mathrm{mpn}}\equiv \delta 𝒩_{\mathrm{max}}/𝒩_{\mathrm{max}}=(\delta D_{2\mathrm{max}}/3)\mathrm{ln}N.$$
(5)
We conclude from the above that, in the thermodynamic limit ($`N\to \mathrm{\infty }`$), the fluctuation of the correlation dimension for the most extended state $`\delta D_{2\mathrm{max}}\to 0`$ at least as fast as $`1/\mathrm{ln}N`$. Therefore $`D_{2\mathrm{max}}=1.83`$ is indeed the highest possible value of the wave function correlation dimension at the Anderson transition. The distribution function $`𝒫_{\mathrm{\infty }}(D_2)`$ should approach exactly zero at this value.
There are numerous previous calculations of the correlation dimension $`D_2`$ at the 3d Anderson transition. From the scaling with system size of the density-density correlation of the wave functions, the correlation dimension was estimated to be $`D_2=1.7\pm 0.3`$ (for $`W=16`$). From the size dependence of the participation number averaged, both over different wave functions in a small energy interval $`0.25`$ around zero and over disorder (with a Gaussian distribution of site energies), the correlation dimension $`D_2=1.6\pm 0.1`$ was estimated. In Ref. , from the spectral compressibility $`\chi `$ of the levels, using $`\chi =(1-D_2/d)/2`$ derived in Ref. , the correlation dimension was estimated as $`D_2=1.4\pm 0.2`$. From the time decay of the temporal autocorrelation function, $`C(t)\propto t^{-D_2/3}`$, the correlation dimension $`D_2=1.5\pm 0.2`$ was obtained. Using box-counting procedures $`D_2=1.7\pm 0.2`$ , $`D_2=1.52\pm 0.11`$ , and $`D_2=1.68`$ was calculated. Again using box counting and averaging the results over wave functions in a small energy interval, $`\mathrm{\Delta }E=0.01`$, around zero and over five different realizations of disorder $`D_2=1.46`$ was found . Again using box-counting techniques in Ref. , $`D_2=1.33\pm 0.02`$ was obtained.
This shows clearly the quite large uncertainty, in the literature, in the existing values of $`D_2`$ at the transition. The reason is obviously that one deals with a distribution of this quantity. Among different characteristics of this distribution one can consider for example the position of the maximum, at about $`1.3`$, the average value, $`\overline{D_2}=1.26`$, and the correlation dimension of the most extended state at the transition, $`D_{2\mathrm{m}\mathrm{a}\mathrm{x}}=1.83`$.
Similarly the distribution function of the information dimension $`D_1`$ as well as of the other generalized dimensions $`D_q`$ at the transition can be obtained. The results of this analysis will be published elsewhere. In the thermodynamic limit all these distribution functions are expected to be universal and self-averaged quantities. Using the Legendre transformation the multifractal spectrum $`f_j(\alpha )`$ for each wave function with energy $`E_j`$ can be calculated (compare Ref ). In other words critical wave functions at the transition should be characterized by a whole distribution of $`f(\alpha )`$’s which is already a functional. In particular, the positions of the maxima, $`\alpha _0`$’s of these functions should be distributed.
In conclusion, we investigated numerically the distribution of the correlation fractal dimension $`D_2`$ for the critical wave functions at the 3d Anderson transition. This distribution appears to be universal, i.e. it no longer depends on the system size in the thermodynamic limit. Extrapolating these results to other fractal dimensions we conclude that each critical wave function should possess its own (infinite) set of generalized fractal dimensions $`D_q`$ at the transition. And one should rather speak about the distribution functional of these sets.
The authors are very grateful to B.I. Shklovskii for fruitful discussions. One of us (D.A.P.) gratefully acknowledges the financial support and hospitality of the University of Minnesota where part of this work was done.
# Detection of Coulomb Charging around an Antidot in the Quantum Hall Regime
## Abstract
We have detected oscillations of the charge around a potential hill (antidot) in a two-dimensional electron gas as a function of a large magnetic field $`B`$. The field confines electrons around the antidot in closed orbits, the areas of which are quantised through the Aharonov-Bohm effect. Increasing $`B`$ reduces each state’s area, pushing electrons closer to the centre, until enough charge builds up for an electron to tunnel out. This is a new form of the Coulomb blockade seen in electrostatically confined dots. Addition and excitation spectra in DC bias confirm the Coulomb blockade of tunneling.
This paper addresses the fundamental question of whether charging can occur in an open system. Coulomb blockade (CB) of tunnelling is generally only observed in electrostatically confined “dots” where there is only partial transmission through the entrance and exit constrictions. It has recently been seen when one constriction is open , when both constrictions transmit exactly one one-dimensional (1D) channel , or when some transmitted channels are decoupled from trapped states . However, an unambiguous demonstration requires a completely open system, such as an antidot, which is a potential hill in a two-dimensional electron gas (2DEG). When a magnetic field $`B`$ is applied perpendicular to the 2DEG, a set of states, discrete in position and energy, is formed around the antidot, for each Landau level (LL). Aharonov-Bohm (AB) conductance oscillations arising from resonances through such states have been studied extensively in the integer and fractional quantum Hall (QH) regimes . It has often been assumed that CB does not occur with antidot states because, as charge tries to build up, the system must immediately respond to screen it. However, pairs of AB oscillations from the two spins of the lowest LL were found to lock in antiphase, and this was attributed to charging . In a dot system, it was suggested that the charging of edge channels is responsible for a similar regularity of the magnetoconductance peaks .
The aim of the present work was to detect such charge oscillations of an antidot, utilising a non-invasive voltage probe similar to that employed by Field et al. . They fabricated a 1D constriction as a charge detector next to a dot but in a different circuit separated from it by a narrow gate. When the constriction was nearly pinched off, its resistance was very sensitive to potential variations nearby, and hence it could detect charge oscillations in the dot. We have fabricated a similar device with an antidot instead of a dot (see inset to Fig. 1(b)). A charging signal with the same period as the AB oscillations in the conductance $`G_{\mathrm{ad}}`$ is clearly visible. The lineshape and phase show that CB of tunnelling through the antidot is occurring. DC-bias measurements are used to measure addition and excitation spectra, confirming this interpretation. The charging energy saturates at high $`B`$ and the single-particle (SP) energy spacing varies as $`1/B`$.
The devices were fabricated from a GaAs/AlGaAs heterostructure with a 2DEG of sheet carrier density $`2.2\times 10^{15}`$ m<sup>-2</sup> and mobility 370 m<sup>2</sup>/Vs after illumination by a red LED. An SEM micrograph of a device is shown in the inset to Fig. 1(b). A square dot gate (G<sub>dot</sub>), 0.3 $`\mu `$m on a side, was contacted by a second metal layer evaporated on top of an insulator (not shown), to allow independent control of gate voltages. The lithographic widths of the antidot and detector constrictions were 0.45 $`\mu `$m and 0.3 $`\mu `$m respectively. All constrictions showed good 1D ballistic quantisation at $`B=0`$. A voltage of $`-4.5`$ V on the separation gate (G<sub>sep</sub>), of width 0.1 $`\mu `$m, divided the 2DEG into separate antidot and detector circuits. The detector gate (G<sub>det</sub>) squeezed the detector constriction to a resistance between 0.1 and 5 M$`\mathrm{\Omega }`$ to make it very sensitive to nearby charge. To maximise the sensitivity transresistance measurements were made by modulating the dot-gate voltage (or the voltage on the side-gate G<sub>side</sub>) at 10 Hz with 0.5 mV rms and applying a DC current of 1 nA through the detector constriction. Simultaneously, the transconductance of the antidot circuit was measured with a 10 $`\mu `$V DC source-drain bias, when necessary. The experiments were performed in a dilution refrigerator with a base temperature below 50 mK.
Figure 1 shows the transresistance $`dR_{\mathrm{det}}/dV_{\mathrm{G}\mathrm{side}}`$ (transconductance $`dG_{\mathrm{ad}}/dV_{\mathrm{G}\mathrm{side}}`$) vs $`B`$ of the detector (antidot) circuit in two different field regions: (a) $`\nu _\mathrm{c}=2`$ and (b) $`\nu _\mathrm{c}<1`$, where $`\nu _\mathrm{c}`$ is the filling factor in both antidot constrictions, which were determined from $`G_{\mathrm{ad}}`$. The filling factors in the bulk 2DEG were $`\nu _\mathrm{b}=7`$ and 2, respectively. The oscillations in $`G_{\mathrm{ad}}`$ occur as SP states around the antidot rise up through the Fermi energy $`E_\mathrm{F}`$. The AB effect causes the overall period $`\mathrm{\Delta }B`$ to be $`h/eS`$, where $`S`$ is the area enclosed by the state at $`E_\mathrm{F}`$. The curve in (a) has pairs of spin-split peaks, whereas in (b) only one spin of the lowest LL is present. The dips in $`dR_{\mathrm{det}}/dV_{\mathrm{G}\mathrm{side}}`$ correspond to a saw-tooth in the change $`\mathrm{\Delta }R_{\mathrm{det}}`$ from the background resistance (see Fig. 1(c)). Thus the net charge $`\mathrm{\Delta }q`$ nearby suddenly becomes more positive (making the effective gate voltage less negative) whenever the antidot comes on to resonance (since the dips line up with the zeros in $`dG_{\mathrm{ad}}/dV_{\mathrm{G}\mathrm{side}}`$). Hence we conclude that this charge oscillation is associated with states near the antidot. A second sample showed very similar results.
We explain the charging as follows. As $`B`$ increases, all the states encircling the antidot move inwards, reducing their areas to keep the flux enclosed constant, and hence a net charge $`\mathrm{\Delta }q`$ builds up in the region around the antidot. This resembles CB in a dot . At low bias, the electron in the highest occupied state cannot escape until $`\mathrm{\Delta }q`$ reaches $`-e/2`$, then it tunnels out to a nearby lead or into a localised state in the bulk, and $`\mathrm{\Delta }q`$ becomes $`+e/2`$. At this point charge can move easily through the antidot, and so each dip in the detector signal lines up with such a conductance resonance, as found experimentally (Fig. 1). There is no electrostatically confined region around the antidot, so charging seems impossible . However, electrons are magnetically confined to the antidot and the rigidity of the quantum-mechanical orbitals prevents charge relaxation. Other states further away from the antidot might try to screen the charge build-up. However, those in the same LL have a fixed density once it is full, and so cannot screen. Also, due to the discreteness of the SP states, rearrangement of charge below $`E_\mathrm{F}`$ within the partially filled region near the antidot can only cause discrete changes in the charge, and would probably cost too much interaction energy. One might speculate that the detector would pick up not the charging of the antidot but the change in screening by SP states near $`E_\mathrm{F}`$ because they could adjust their areas or the wavefunction could leak out to the other edges on resonance . However, such screening should be symmetric around the resonances. Therefore the transresistance would be the derivative of periodic dips or peaks, not of a saw-tooth as seen in our measurements.
The charging of the antidot is not dependent on the presence of conductance oscillations in the antidot circuit. Thus it is still possible to observe the signal with no applied bias in the antidot circuit, or when the side-gate voltage is zero so that there is no tunnelling between that edge and the antidot. Indeed, as shown in Fig. 2, the dips in the detector signal become large and sharp when the antidot constrictions are set to a QH plateau ($`\nu _\mathrm{c}=2`$ in this case), where the antidot states are decoupled from the extended edge states. Away from the QH plateau, since the states are coupled to the current leads, electrons’ wavefunctions penetrate into the leads, reducing the effective maximum charge on the antidot and leading to weaker charging, i.e. smaller charging energy.
Around $`B=3`$ T the spin-splitting of the peaks becomes exactly half the period, and the amplitudes of the two peaks in each pair become identical, giving what appear to be $`h/2e`$ AB oscillations (see Fig. 2) . We have investigated the temperature dependence of both the charging and conductance signals in this regime. The Fourier transforms of the charging signal appearing at around 2.5 T in Fig. 2 and the $`G_{\mathrm{ad}}`$ oscillations at around 2.8 T, measured separately, decrease at different rates (see Fig. 3(a)). Thermally broadened Fermi-liquid theory for sinusoidal oscillations gives a good fit for $`G_{\mathrm{ad}}`$ with an energy level spacing of $`70\mu `$eV. The conductance oscillations are suppressed at high temperature because of thermal broadening of the edge channels around the side gates at $`E_\mathrm{F}`$ when the thermal energy becomes comparable to the sum of the SP energy spacing and the charging energy $`e^2/C`$ (if CB occurs), where $`C`$ is the total capacitance of the antidot. For the charging signal, since the oscillations are not sinusoidal, a more detailed model is required than that used above. Here, we assume that the detector is only sensitive to thermal excitation which adds or removes electrons around the antidot, but not to excitation between SP states.
The electrochemical potential of the antidot $`\mu _{\mathrm{ad}}(N,B)`$ is the energy required to add an electron to the lowest unoccupied state, which encloses, say, $`N`$ flux quanta $`h/e`$. Then the probability that thermal excitation moves an electron from a lead at chemical potential $`\mu `$ to that state is given by the Fermi function $`f\left(\mu _{\mathrm{ad}}(N,B)-\mu \right)`$. For one period $`-\frac{\mathrm{\Delta }B}{2}<B<\frac{\mathrm{\Delta }B}{2}`$, where the centre of the charge transition is at $`B=0`$, the blurred saw-tooth charge oscillation can be written as $`\mathrm{\Delta }q(B)=-e\left(B/\mathrm{\Delta }B+f\left(\mu _{\mathrm{ad}}(N,B)-\mu \right)-\frac{1}{2}\right)`$. Since the charging energy is parabolic in the net charge, and hence varies as $`(B\pm \frac{\mathrm{\Delta }B}{2})^2`$ depending on which state is occupied, it can be shown simply that $`\mu _{\mathrm{ad}}(N,B)-\mu =\mathrm{\Delta }E_{\mathrm{tot}}B/\mathrm{\Delta }B`$, where $`\mathrm{\Delta }E_{\mathrm{tot}}=\mathrm{\Delta }E+e^2/C`$. Here, $`\mathrm{\Delta }E`$ is the average energy spacing of adjacent states (of whichever spin), equal to $`\mathrm{\Delta }E_{\mathrm{sp}}/2`$ when both spins encircle the antidot; $`\mathrm{\Delta }E_{\mathrm{sp}}`$ is the energy spacing of adjacent SP states of the same spin. For $`h/2e`$ oscillations, we assume that a spin-down state lies midway in energy between spin-up states. For $`\nu _\mathrm{c}\le 1`$, $`\mathrm{\Delta }E=\mathrm{\Delta }E_{\mathrm{sp}}`$. Thus $`f\left(\mu _{\mathrm{ad}}(N,B)-\mu \right)=\left(1+\mathrm{exp}\left(\beta B/\mathrm{\Delta }B\right)\right)^{-1}`$ where $`\beta =\mathrm{\Delta }E_{\mathrm{tot}}/k_\mathrm{B}T^{*}`$. Here $`T^{*}=\sqrt{T^2+\mathrm{\Gamma }^2}`$ is the effective temperature, to account for an intrinsic broadening $`\mathrm{\Gamma }`$ at low temperatures due to the AC excitation voltage and the finite lifetime of the states. The integral of the detector signal with respect to $`B`$ (approximately equivalent to the integral with respect to $`V_{\mathrm{G}\mathrm{dot}}`$) was fitted to $`\mathrm{\Delta }q(B)`$, after subtracting the background slope (inset to Fig. 3(c)). From the fit at various temperatures (Fig. 3(b), circles), we obtained $`\mathrm{\Delta }E_{\mathrm{tot}}=160\mu `$eV. We could not measure the temperature dependence in the region $`B\simeq 2.8`$ T in Fig. 2 due to the small charging signal. However, a fit to the data at $`T\simeq 50`$ mK gave $`\mathrm{\Delta }E_{\mathrm{tot}}=90\mu `$eV, assuming that $`\mathrm{\Gamma }`$ does not change. The temperature dependence of $`h/e`$ oscillations where $`\nu _\mathrm{c}`$ was just less than one ($`B\simeq 4.1`$ T, diamonds in Fig. 3(b)) gave $`\mathrm{\Delta }E_{\mathrm{tot}}=140\mu `$eV. These energies are plotted in Fig. 3(c) and will be discussed below.
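A sketch of such a fit follows, assuming the integrated detector signal has already been converted to units of $`e`$ and the background slope removed; the variable names and starting values are illustrative, not taken from the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def delta_q(B, beta, dB, offset):
    """Thermally blurred saw-tooth, one period centred on the transition:
    Delta q / e = -(B/dB + f - 1/2) + offset, f = 1/(1 + exp(beta*B/dB))."""
    f = 1.0 / (1.0 + np.exp(beta * B / dB))
    return -(B / dB + f - 0.5) + offset

# B, q: field values and integrated signal (units of e) within one period
# (beta, dB, offset), _ = curve_fit(delta_q, B, q, p0=(20.0, 0.02, 0.0))
# dE_tot = beta * 8.617e-5 * T_eff   # in eV, with k_B in eV/K and T* = T_eff in K
```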
A further way of measuring the energy spacing is to apply a DC bias . Fig. 4 shows greyscale plots of the DC-bias dependence of AB oscillations in the differential conductance (measured with a 5 $`\mu `$V rms AC (10 Hz) source-drain voltage in addition to the DC bias), for the values of $`\nu _\mathrm{c}`$ shown. In (a) and (b), peaks are shown in black, since resonant transmission occurs due to inter-LL scattering . This is not present at higher $`B`$; instead, resonant backscattering gives dips (shown in black in (c) and (d)). (a)–(c) show sets of spin-split resonances. In (a), where spin splitting is poorly resolved, adjacent peaks cross at $`250\mu `$V or $`50\mu `$V. As energies, since this is an addition spectrum, these correspond to $`e^2/C+\mathrm{\Delta }E_{\mathrm{sp}}-E_\mathrm{Z}`$ and $`e^2/C+E_\mathrm{Z}`$ respectively, where $`E_\mathrm{Z}`$ is the Zeeman splitting. Thus the average energy is just $`\mathrm{\Delta }E_{\mathrm{tot}}`$. This enables a comparison of energies at various $`B`$ (see Fig. 3(c)). At higher $`B`$, spin-splitting becomes obvious (Figs. 4(b) and (c)), but the crossings give similar $`\mathrm{\Delta }E_{\mathrm{tot}}`$.
The DC bias at which states of different spin cross gives an upper limit for $`e^2/C`$, and this limit increases with $`B`$, as does $`E_\mathrm{Z}`$. It is likely that the charging energy is small at low $`B`$, since the magnetic confinement is weak; indeed, the charging signal is hard to see for $`B<0.6`$ T. However, at $`B=1.4`$ T, in the middle of Fig. 4(c), $`\mathrm{\Delta }E_{\mathrm{tot}}`$ drops rapidly by 30% (open symbol in Fig. 3(c)). This corresponds to the field at which the conductance falls off the $`\nu _\mathrm{c}=2`$ plateau (for these particular gate voltages). The figure shows a similar drop (for the gate voltages used in the temperature dependence described above), around 2.6 T, also corresponding to moving off the $`\nu _\mathrm{c}=2`$ plateau. Temperature dependences of the conductance and charging oscillations there confirm the DC bias result. There is no reason why $`\mathrm{\Delta }E_{\mathrm{sp}}`$ should change so suddenly. These drops occur when the coupling of the antidot to the leads increases, reducing the charging energy, as described above.
In Fig. 4(c) additional parallel lines appear around the smaller diamonds, offset by $`60\mu `$V in DC bias. We interpret these as arising from tunnelling via the first excited state of the antidot, which is $`\mathrm{\Delta }E_{\mathrm{sp}}-E_\mathrm{Z}`$ higher in energy. Similar lines are not resolved around the larger diamonds since the spacing is just $`E_\mathrm{Z}`$. The observation of this excitation spectrum confirms that there is a Coulomb blockade of tunnelling through the antidot.
For a constant potential slope, $`\mathrm{\Delta }E_{\mathrm{sp}}`$ should vary as $`1/B`$. At $`B=0.35`$ T, $`\mathrm{\Delta }E_{\mathrm{tot}}=150\mu `$eV and $`e^2/C<50\mu `$eV (the upper limit from the DC-bias measurements), so $`200\mu `$eV$`<\mathrm{\Delta }E_{\mathrm{sp}}<300\mu `$eV. Thus at $`B=1.4`$ T we expect $`50\mu `$eV$`<\mathrm{\Delta }E_{\mathrm{sp}}<80\mu `$eV. This is close to the value $`\mathrm{\Delta }E_{\mathrm{sp}}\simeq 100\mu `$eV obtained from the addition and excitation spectra at 1.4 T, which also give $`E_\mathrm{Z}\simeq 35\mu `$eV, in good agreement with $`g\mu _\mathrm{B}B`$ with $`g=0.44`$ for electrons in GaAs. From Fig. 4(c), $`e^2/C=\mathrm{\Delta }E_{\mathrm{tot}}-\mathrm{\Delta }E_{\mathrm{sp}}/2`$ falls from $`100\mu `$eV on the plateau to $`65\mu `$eV when the antidot is coupled to the leads. When on the plateau, $`e^2/C`$ appears to saturate at $`150\mu `$eV above $`B\simeq 2`$ T, since by then the states around the antidot are well defined and so the full $`\pm e/2`$ charge can build up, with the capacitance fairly constant. Maasilta and Goldman found from the lineshapes of individual peaks at $`\nu =1`$ and $`\frac{1}{3}`$ that $`\mathrm{\Delta }E_{\mathrm{tot}}`$ was almost constant, but interpreted this as a self-consistent variation of the potential slope, with no CB. In our picture, the constancy of $`\mathrm{\Delta }E_{\mathrm{tot}}`$ comes from the interplay of $`e^2/C`$ and $`\mathrm{\Delta }E_{\mathrm{sp}}`$.
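As a quick arithmetic cross-check of these numbers (values read off the text; purely illustrative):

```python
mu_B = 57.88                   # Bohr magneton in micro-eV per tesla
g, B = 0.44, 1.4
print(g * mu_B * B)            # Zeeman splitting ~ 35.7 micro-eV at 1.4 T
dE_tot, dE_sp = 150.0, 100.0   # micro-eV, plateau values quoted above
print(dE_tot - dE_sp / 2.0)    # charging energy e^2/C ~ 100 micro-eV
```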
In summary, we have fabricated a charge detector in close proximity to an antidot. The antidot is seen to discharge each time a state around the antidot comes on to resonance, showing that there is a Coulomb blockade of tunnelling via the antidot. We have measured addition and excitation spectra, confirming this interpretation. The charging energy drops whenever there is coupling to the leads, as the charge becomes delocalised. This is the first conclusive demonstration of charging in an open system. It arises from the rigidity of the quantum-mechanical wavefunction, as for an electron in an atom. It must form part of the explanation for the pure $`h/2e`$ AB oscillations .
This work was funded by the UK EPSRC. We thank C. H. W. Barnes and C. G. Smith for useful discussions. M. K. acknowledges financial support from Cambridge Overseas Trust.
Present address: The Technology Partnership PLC, Melbourn Science Park, Melbourn, SG8 6EE, UK.
# June 1999 UM-P-99/17 RCHEP-99/06 Matter-affected neutrino oscillations in ordinary and mirror stars and their implications for gamma-ray bursts
## I Introduction
The ongoing quest for a complete understanding of gamma-ray bursts (GRBs) dates back to their fortuitous discovery in the 1960s . Over the years, a class of models called the “fireball model”, which seeks to explain the temporal structure of the bursts and the non-thermal nature of their spectra, has emerged . In essence, the model consists of a sudden injection of radiative energy into a compact region with relatively few baryons ($`10^{-5}M_{\odot }`$), where an opaque $`e^{-}e^{+}\gamma `$ plasma is consequently formed. This “fireball” expands relativistically, until internal processes such as collisions within the now optically thin outflow trigger the reconversion of its kinetic energy to radiation. The photons then escape to infinity, to be hailed by us earthly inhabitants as a GRB. Subsequent deceleration of the relativistic fireball through interactions with the ambient interstellar medium gives rise to further emissions at longer wavelengths: the so-called afterglow .
Although the discovery of afterglows in the x-ray, optical and/or radio domains for several bursts in 1997 is not greeted unanimously by the GRB community as a signature of the classical fireball model , the observations have, nonetheless, established firmly for the first time the cosmological origin of GRBs. Redshifts of $`z=0.835`$ and $`z=3.418`$ were reported for two of the earliest observations respectively , and the recent measurement of $`z=1.600`$ in the afterglow of GRB 990123 was performed with unprecedented accuracy . The central engines that power each fireball have not yet been identified, though, at cosmological distances, these GRB progenitors must be capable of generating some $`10^{51}`$–$`10^{53}`$ ergs of energy in a short period of time. Prime candidates include binary neutron star merger and its variants , and core collapse , in which the concomitant release of gravitational binding energy is almost entirely in the form of neutrinos and antineutrinos. In principle, about one thousandth of these neutrinos annihilate to form electron–positron pairs, and ultimately photons via $`\nu \overline{\nu }\to e^{-}e^{+}\to \gamma \gamma `$, creating a plasma that is the fireball.<sup>*</sup><sup>*</sup>*Of course, the merger/collapse may possess sufficient rotational energy to power the fireball through the coupling of its angular momentum to a strong magnetic field . This, however, is not the topic of this paper. The major flaw, however, is that the nascent compact object is inevitably surrounded by an excess of baryons, which, if neutrino annihilation is to occur in this environment, would lead to the formation of a nonrelativistic fireball that is inconsistent with observations. This is the “baryon-loading problem”.
Following the recent announcement of very strong evidence for atmospheric neutrino oscillations at SuperKamiokande , a novel solution to the baryon-loading problem that exploits this simple quantum mechanical phenomenon has been put forward . By invoking large amplitude oscillations between the muon neutrino $`\nu _\mu `$ and a “sterile” neutrino $`\nu _s`$ with an oscillation length comparable to the width of the baryonic region, it has been proposed that a neutrino that begins as a $`\nu _\mu `$ traverses the region largely as a $`\nu _s`$, and converts back to $`\nu _\mu `$ upon exit. The sterile neutrino is, by definition, inert. Thus annihilation does not take place inside the baryon-contaminated region, thereby preventing the formation of a dirty, nonrelativistic fireball. The ostensible efficiency of energy deposition is $`10^{-2}`$–$`10^{0}`$ relative to direct annihilation in the absence of baryons and oscillations, subject to the geometry of the GRB progenitor.
Although an attractive idea, the analysis in Ref. fails to address two crucial issues: (i) If the central engines are indeed mergers and/or collapses, there is no reason to assume that only $`\mu `$-type neutrinos are (thermally) emitted. Thus all neutrino flavours must individually oscillate into a sterile neutrino to substantially eliminate $`\nu \overline{\nu }`$ annihilation in the baryonic region. The conversion of $`\nu _\mu `$ to $`\nu _s`$ (and their antiparticles) alone will not solve the baryon-loading problem. (The only way around this would be to hypothesise a different type of central engine that produced $`\mu `$-type neutrinos entirely from pion decay.) (ii) Matter effects may significantly alter the oscillation pattern. Modifications to the effective neutrino masses and mixings due to interactions with the medium have, in the past, been studied extensively in various astrophysical and cosmological contexts. A notable example is the proposed MSW solution to the solar neutrino problem , in which matter effects inside the Sun are largely responsible for the depletion of the $`\nu _e`$ flux that impinges on Earth. Matter-affected oscillations may lay further claims on the generation of neutrino asymmetries in the early universe , and the energetics of, and $`r`$-process nucleosynthesis in, Type II supernovae . Gamma-ray burst progenitors are necessarily dense objects; it is thus our purpose to reassess the scenario presented in the said analysis in the light of matter-affected oscillations, and, more generally, to demonstrate the latter’s importance in influencing the energetics of a neutrino-driven GRB.
Yet, the story does not end here. It has been suggested that GRBs may be attributed to the mergers/collapses of “mirror” stars composed of matter that is blind to ordinary interactions . The accompanying mirror neutrinos may oscillate into ordinary neutrinos, whose subsequent annihilation will occur in regions with few ordinary baryons, thereby easily eliminating the baryon-loading problem. In this paper, we shall also examine this possibility more closely, taking into account the role of matter effects.
Before proceeding, we should note that one ought to have an open mind at this stage as to the mechanism by which GRBs are energised. Neutrino–antineutrino annihilation may well not be the sole means (or even a means) of achieving this end. Other forms of energy and energy extraction mechanisms, notably the exploitation of the compact object’s spin energy through coupling to a strong magnetic field , have been proposed which may be complementary or alternative to the annihilation process. Be that as it may, neutrino kinetic energy is certainly a very important source to consider, given its function as dissipator of gravitational binding energy in mergers and collapses. If the neutrino is to play a role, its properties must be properly understood and its activities incorporated into prospective GRB models. These form the basis of the present work.
## II Neutrino oscillations and matter effects
Neutrino oscillations follow directly from non-degenerate neutrino masses and non-trivial mixing amongst the flavours. The former criterion ensures that each propagation eigenstate evolves with a distinct phase governed by its energy (and thus squared mass by $`E=\sqrt{p^2+m^2}\simeq p+\frac{m^2}{2p}`$). Subsequent development of phase differences gives rise to the periodicity of the oscillation phenomenon, which, for neutrinos of momentum $`p`$ in vacuum, is determined by the ratio $`2\pi \frac{2p}{\mathrm{\Delta }m_{ij}^2}`$, where $`\mathrm{\Delta }m_{ij}^2=m_i^2-m_j^2`$ is the squared mass difference between the $`i`$th and $`j`$th mass (propagation) eigenstates. The oscillation amplitude scales with the amount of mixing between the states. For a two-neutrino system, this is characterised by one mixing angle $`\theta `$, such that
$`|\nu _\alpha \rangle `$ $`=`$ $`\mathrm{cos}\theta |\nu _1\rangle +\mathrm{sin}\theta |\nu _2\rangle ,`$ (1)
$`|\nu _\beta \rangle `$ $`=`$ $`-\mathrm{sin}\theta |\nu _1\rangle +\mathrm{cos}\theta |\nu _2\rangle ,`$ (2)
where $`\nu _1`$ and $`\nu _2`$ are the mass eigenstates, and the subscripts $`\alpha `$ and $`\beta `$ label two different flavour eigenstates respectively. By the $`CPT`$ theorem, the same oscillation parameters govern the flavour evolution of both neutrino and antineutrino systems in vacuum.
In the presence of matter, the neutrino gains an effective mass from interacting with the ambience . The nature of the gain — its magnitude and sign — is subject to the density of the medium and the interaction channels that are available therein. Thus two oscillating neutrino flavours that interact differently with the environment will develop between them a phase difference that is dissimilar to its vacuum counterpart, thereby modifying the oscillation length and, though somewhat less obviously from our qualitative discussion, the oscillation amplitude. For a two-neutrino system<sup>†</sup><sup>†</sup>†Hereafter, we shall consider only two-neutrino systems. in a uniform medium, the probability that $`\nu _\alpha `$ will oscillate to $`\nu _\beta `$, where $`\alpha \ne \beta `$, at time $`t`$ is given by
$$P(\alpha \to \beta ,t)=\mathrm{sin}^22\theta _{\text{eff}}\mathrm{sin}^2\frac{\pi t}{\lambda _{\text{eff}}},$$
(3)
where the quantity $`\lambda _{\text{eff}}=2\pi \frac{2E}{\mathrm{\Delta }m_{\text{eff}}^2}`$ is the effective oscillation length, and
$`\mathrm{\Delta }m_{\text{eff}}^2=\mathrm{\Delta }m_{\alpha \beta }^2\sqrt{\left({\displaystyle \frac{2EV_{\alpha \beta }}{\mathrm{\Delta }m_{\alpha \beta }^2}}-\mathrm{cos}2\theta \right)^2+\mathrm{sin}^22\theta },`$ (4)
$`\mathrm{sin}^22\theta _{\text{eff}}={\displaystyle \frac{\mathrm{sin}^22\theta }{\left(\frac{2EV_{\alpha \beta }}{\mathrm{\Delta }m_{\alpha \beta }^2}-\mathrm{cos}2\theta \right)^2+\mathrm{sin}^22\theta }},`$ (5)
with $`V_{\alpha \beta }=\mathrm{\Phi }_\alpha -\mathrm{\Phi }_\beta `$, where $`\mathrm{\Phi }_\alpha `$ ($`\mathrm{\Phi }_\beta `$) is the matter potential for $`\nu _\alpha `$ ($`\nu _\beta `$), and we have used $`E\simeq p`$. Note that for clarity, all squared mass differences will now carry the subscripts $`\alpha \beta `$ (denoting flavours) such that $`\mathrm{\Delta }m_{\mu \tau }^2`$, for example, corresponds to the squared mass difference between the two mass eigenstates relevant for the $`\nu _\mu `$–$`\nu _\tau `$ system.
A typical celestial medium is an electrically neutral concoction of electrons/positrons and nucleons (both bound and free). Thus a $`\nu _e`$ propagating therein has both charged and neutral current interactions, while $`\nu _\mu `$ and $`\nu _\tau `$ have only the latter, and $`\nu _s`$ has none. To the lowest order in $`G_F`$, their respective matter potentials are
$`\mathrm{\Phi }_e`$ $`=`$ $`\sqrt{2}G_F\left(N_e-{\displaystyle \frac{1}{2}}N_n\right)={\displaystyle \frac{G_F}{\sqrt{2}}}{\displaystyle \frac{\rho }{m_N}}\left(3Y_e-1\right),`$ (6)
$`\mathrm{\Phi }_\mu `$ $`=`$ $`\mathrm{\Phi }_\tau =-{\displaystyle \frac{G_F}{\sqrt{2}}}N_n={\displaystyle \frac{G_F}{\sqrt{2}}}{\displaystyle \frac{\rho }{m_N}}\left(Y_e-1\right),`$ (7)
$`\mathrm{\Phi }_s`$ $`=`$ $`0,`$ (8)
where $`G_F`$ is the Fermi constant, $`N_e`$ denotes the electron minus positron number density, $`N_n`$ the neutron number density, $`\rho `$ the nucleon density, $`m_N`$ the nucleon mass, and $`Y_e`$ the number of electrons per nucleon. Note that for antineutrinos, $`\mathrm{\Phi }_{\overline{\alpha }}=-\mathrm{\Phi }_\alpha `$, such that a $`\overline{\nu }_\alpha `$–$`\overline{\nu }_\beta `$ system receives modifications to its effective oscillating parameters generally unlike those for a $`\nu _\alpha `$–$`\nu _\beta `$ system in an identical medium.
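The helper below is a rough numerical sketch of Eqs. (4)–(7), not part of the original analysis: the g/cm<sup>3</sup>-to-natural-units conversion factor is rounded, and the inputs (density, $`Y_e`$, $`\mathrm{\Delta }m^2`$, $`E`$) are placeholders to be supplied by the reader.

```python
import numpy as np

G_F = 1.166e-23   # Fermi constant in eV^-2 (natural units)
m_N = 0.939e9     # nucleon mass in eV

def matter_potentials(rho, Y_e):
    """Phi_e and Phi_mu (= Phi_tau) in eV, Eqs. (6)-(7);
    rho in g/cm^3 (1 g/cm^3 ~ 4.3e18 eV^4 in natural units)."""
    n = rho * 4.3e18 / m_N                       # nucleon number density in eV^3
    pref = G_F / np.sqrt(2.0) * n
    return pref * (3.0 * Y_e - 1.0), pref * (Y_e - 1.0)

def effective_parameters(dm2, sin2_2theta, E, V, antineutrino=False):
    """Effective splitting and mixing of Eqs. (4)-(5); dm2 in eV^2, E and V in eV."""
    if antineutrino:
        V = -V                                   # Phi flips sign for antineutrinos
    cos2theta = np.sqrt(max(0.0, 1.0 - sin2_2theta))
    x = 2.0 * E * V / dm2
    denom = (x - cos2theta) ** 2 + sin2_2theta
    return dm2 * np.sqrt(denom), sin2_2theta / denom
```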
## III Oscillations in GRB progenitors
Accompanying a merger/collapse event is the copious production of $`\nu _e,\nu _\mu `$ and $`\nu _\tau `$ and their antiparticles, with mean energies ranging from $`10`$ to $`30`$ MeV. In this section, we shall examine the oscillations of these “active” neutrinos (i) with “sterile” neutrinos, and (ii) amongst themselves in GRB progenitors per se, adhering strictly only to laboratory bounds on the oscillation parameters. Constraints arising from cosmological circumstances such as big bang nucleosynthesis (BBN) and closure will be noted at the appropriate points.
### A Active–sterile oscillations
We shall suppose here that each active flavour mixes with a sterile neutrino. The consideration of sterile neutrinos is well motivated for a number of reasons:
1. Right-handed neutrinos, which are sterile with respect to ordinary weak interactions, are necessary for a complete correspondence between lepton and quark degrees of freedom in the standard model of particle physics.
2. A particular class of light, effectively sterile fermions called “mirror neutrinos” arises if Improper Lorentz Transformations are retained as exact symmetries of Nature (see later).
3. Phenomenologically, they are strongly advocated through the need to resolve the apparent conflict between the three neutrino anomalies — solar , atmospheric and LSND — and the measured width of the $`Z^0`$ boson. The former in its entirety calls for an oscillation solution requiring at least four neutrinos, while the latter constrains the number of light active flavours to three.
Experiments performed thus far do not preclude the existence of yet more sterile species. Indeed, it is theoretically quite natural for the number of light sterile flavours to equal the number of quark and lepton generations, viz. three; this is our assumption for the rest of the paper. Furthermore, if the “sterile” flavours are identified with mirror neutrinos as in point 2 above, then they must come in triplicate. Thus our analysis here will also set the stage for the study of mirror stars in the next section.
#### 1 Large mixing angle
Suppose that each active flavour $`\nu _\alpha `$ exhibits large vacuum mixing with a sterile “partner” $`\nu _\alpha ^{\prime }`$, that is, $`\mathrm{cos}2\theta \simeq 0`$. (From here onwards, the symbol $`\nu _s`$ shall denote a generic sterile neutrino, while $`\nu _\alpha ^{\prime }`$ is taken to mean the assigned sterile partner of $`\nu _\alpha `$, one for each of $`\nu _e`$, $`\nu _\mu `$ and $`\nu _\tau `$.) This is a most natural consequence from the perspective of model building, arising from the general Dirac–Majorana mass matrix for each generation of neutrinos,
$$\left[\begin{array}{cc}\overline{\nu }_L& \overline{(\nu _R)^c}\end{array}\right]\left(\begin{array}{cc}0& m\\ m& M\end{array}\right)\left[\begin{array}{c}(\nu _L)^c\\ \nu _R\end{array}\right],$$
(9)
where the Dirac mass $`m`$ is taken to be much larger than the right-handed neutrino Majorana mass $`M`$.<sup>§</sup><sup>§</sup>§Note that the zero in the top-left corner of the mass matrix is enforced by electroweak gauge invariance in the absence of weak-isospin triplet Higgs bosons. Upon diagonalisation, the two resulting pseudo-Dirac neutrinos, one of which we identify as the right-handed sterile neutrino, are essentially maximally mixed . Pairwise maximal mixing also arises in the mirror matter model. Indeed, large amplitude pairwise oscillations of the active flavours into distinct sterile states are a priori necessary if all active $`\nu \overline{\nu }`$ annihilation in the baryon-contaminated mantle is to be prevented.
For this case of (almost) maximal mixing, matter effects are virtually identical for both neutrinos and antineutrinos by Eq. (4). From an inspection of the same equation, we identify two regions of interest:
$$\frac{\mathrm{\Delta }m_{\alpha \alpha ^{\prime }}^2}{2E}\ll \text{ or }\gtrsim \left|V_{\alpha \alpha ^{\prime }}\right|,$$
(10)
where we have used $`\mathrm{sin}2\theta \simeq 1`$. These shall be labelled as the first and second conditions respectively. For convenience, we rewrite Eq. (10) in more accessible units,
$`{\displaystyle \frac{\mathrm{\Delta }m_{ee^{\prime }}^2}{E}}`$ $`\ll \text{ or }\gtrsim `$ $`\left|760\left({\displaystyle \frac{\rho }{10^{10}\text{g}\text{cm}^{-3}}}\right)\left(3Y_e-1\right)\right|{\displaystyle \frac{\text{eV}^2}{\text{MeV}}}\text{ for }\nu _e\leftrightarrow \nu _e^{\prime },`$ (13)
$`{\displaystyle \frac{\mathrm{\Delta }m_{\mu \mu ^{\prime },\tau \tau ^{\prime }}^2}{E}}`$ $`\ll \text{ or }\gtrsim `$ $`\left|760\left({\displaystyle \frac{\rho }{10^{10}\text{g}\text{cm}^{-3}}}\right)\left(Y_e-1\right)\right|{\displaystyle \frac{\text{eV}^2}{\text{MeV}}}\text{ for }\nu _{\mu ,\tau }\leftrightarrow \nu _{\mu ,\tau }^{\prime },`$ (16)
where the symbols are defined as for Eq. (6). (The presence of neutrinos itself contributes to the matter potentials in Eq. (6); however, we do not expect such a contribution to have too serious a consequence, since the effective neutrino number density is generally small except near the neutrino emitting surface. In any case, the exclusion of the neutrino background should not alter the qualitative aspects of the present work.)
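The coefficient of 760 in Eqs. (13) and (16) is fixed purely by unit conversion and can be checked against the hypothetical `matter_potentials` helper sketched earlier; the two regimes are separated by $`\mathrm{\Delta }m^2/E=2|V|`$:

```python
# The two regimes of Eq. (10) are separated by Delta m^2 / E = 2|V|.
# Choosing Y_e = 2/3, so that (3 Y_e - 1) = 1, isolates the coefficient:
phi_e, _, _ = matter_potentials(1.0e10, 2.0 / 3.0)
print(2.0 * abs(phi_e) * 1.0e6)   # ~7.6e2 eV^2/MeV: the "760" of Eq. (13)
```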
In accordance with Eq. (4), the first condition corresponds to $`\mathrm{sin}2\theta _{\text{eff}}\simeq 0`$, implying that oscillations are strongly suppressed. Typically, the density of the resultant disk in a binary neutron star merger is at least $`\rho \sim 10^9\text{g}\text{cm}^{-3}`$ with $`Y_e\sim 0.02`$–$`0.1`$ , while that of the surrounding mantle in a collapse event is expected to be no less than $`\rho \sim 10^6\text{g}\text{cm}^{-3}`$ with $`Y_e\sim 0.2`$–$`0.5`$. (These numbers correspond to the density at $`r\sim 300`$ km and the number of electrons per nucleon, respectively, in a Type II supernova at $`\sim 0.6`$ s post bounce .) The SuperKamiokande results put the squared mass difference for $`\nu _\mu \leftrightarrow \nu _x`$, where $`\nu _x`$ is some as yet unidentified neutrino, at $`10^{-3}\lesssim \mathrm{\Delta }m^2/\text{eV}^2\lesssim 10^{-2}`$. The corresponding upper bound for maximal $`\nu _e\leftrightarrow \nu _x`$ mixing is currently $`\sim 10^{-3}\text{eV}^2`$ . It follows from Eq. (13) that the average $`10`$–$`30`$ MeV $`\nu _e`$ and $`\nu _\mu `$ have virtually no chance of oscillating into a sterile species inside the baryonic region. Oscillations become more suppressed with increasing neutrino energy. The formation of a dirty fireball therefore cannot be avoided with the introduction of $`\nu _\mu \leftrightarrow \nu _\mu ^{\prime }`$ or $`\nu _e\leftrightarrow \nu _e^{\prime }`$ oscillations, which argues against the scenario of Ref. .
The second condition in Eq. (13) entails vacuum-like maximal oscillations. Taking the density of the neutrino emitting surface to be $`\rho \sim 10^{11}\text{g}\text{cm}^{-3}`$, it is clear from Eq. (13) that matter effects remain unimportant in all or part of the baryonic region only for sub-keV $`\nu _e`$ and $`\nu _\mu `$. But these neutrinos are of little consequence; apart from their scarcity, their energies are well below the threshold for $`e^{-}e^{+}`$ pair production.
At this stage, the acute reader would have noticed that in the case of core collapse, complete cancellation of matter effects for a $`\nu _e`$–$`\nu _e^{\prime }`$ system arises when $`Y_e\simeq 0.33`$ by Eq. (13), where the effective mixing is temporarily vacuum-like and thus maximal. Substantial $`\nu _e^{\prime }`$’s may be generated if such cancellation persists (approximately) over a distance comparable to the effective oscillation length of the system (i.e., the adiabatic condition — see later). Supposing that $`\mathrm{\Delta }m_{ee^{\prime }}^2\sim 10^{-3}\text{eV}^2`$ and $`E\sim 10`$ MeV, Eq. (13) demands that the change in $`Y_e`$ in this region be less than $`\sim 10^{-4}`$ even at a fixed density as low as $`\rho \sim 10^6\text{g}\text{cm}^{-3}`$. Holding $`Y_e`$ constant at, say, $`0.33+10^{-10}`$, the same equation requires any deviation in density to be $`<10^5\text{g}\text{cm}^{-3}`$. However, given the dramatic rise and fall of $`Y_e`$ and the density respectively in a mere few hundred kilometres, and that the oscillation length for the system concerned is $`\sim 25`$ km, the said conditions are unlikely to be satisfied across a region comparable to the latter. The production of $`\nu _e^{\prime }`$’s is again suppressed, albeit by a different mechanism.
On the other hand, no laboratory upper bound on $`\mathrm{\Delta }m^2`$ exists for maximal $`\nu _\tau `$–$`\nu _\tau ^{\prime }`$ mixing. There are cosmological constraints from closure and big bang nucleosynthesis. The former yields an upper bound of about $`40`$–$`100`$ eV for long lived neutrinos.<sup>\**</sup><sup>\**</sup>\**Purported BBN constraints must be interpreted with care because important loopholes frequently exist. However, one can safely say that a maximally mixed active–sterile pair with a $`\mathrm{\Delta }m^2`$ value in the range to be considered is disfavoured by BBN. Supposing $`\nu _\tau `$ to be much lighter than $`\nu _\tau ^{\prime }`$ or vice versa, this condition effectively sets an upper limit of $`\sim 10^4\text{eV}^2`$ on the squared mass difference. It transpires that if one pushes $`\mathrm{\Delta }m_{\tau \tau ^{\prime }}^2`$ to the extreme, it is in fact possible to attain vacuum-like maximal $`\nu _\tau `$ oscillations with its sterile partner even at a density of $`\rho \sim 10^{10}\text{g}\text{cm}^{-3}`$, according to Eq. (13). The cost of an increased $`\mathrm{\Delta }m^2`$, however, is the simultaneous shortening of the oscillation length. If the latter is to be comparable to the size of the baryonic region $`R`$ and approximately maximal mixing is to be maintained throughout, then by Eqs. (4) and (10), the following condition must hold:
$$2\pi \gtrsim \left|V_{\tau \tau ^{\prime }}^{\text{max}}R\right|,$$
(17)
or equivalently,
$$\left|\left(\frac{R}{\text{km}}\right)\left(\frac{\rho _{\text{max}}}{10^{10}\text{g}\text{cm}^{-3}}\right)\left(Y_e-1\right)\right|\lesssim 3.3\times 10^{-6},$$
(18)
where “max” denotes the maximum value attained beyond the neutrinosphere. Given that the average neutrino traverses a few kilometres of baryonic matter in a merger, not to mention the extent of the mantle in a collapse event, the reader can verify that Eq. (18) cannot be satisfied in any realistic GRB progenitor. Instead, the system undergoes rapid oscillations, as implied by its comparatively short oscillation length, quickly becoming, on average, an equal mixture of $`\nu _\tau `$ and $`\nu _\tau ^{\prime }`$ (and their antiparticles) by Eq. (3). This scenario, however, is deemed unlikely for the cosmological reasons mentioned earlier. But if some oscillations were to occur (perhaps with a smaller $`\mathrm{\Delta }m^2`$), the $`\nu _\tau `$ and $`\overline{\nu }_\tau `$ intensities at $`r`$ would each be effectively halved, such that $`\nu _\tau \overline{\nu }_\tau `$ annihilation would still take place inside the baryonic region, but at a quarter of the standard rate per unit volume. Assuming that all active flavours contribute equally to annihilation in the absence of oscillations, the total energy deposition rate in our case is expected to suffer at worst a $`25`$ % decrease.
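The right-hand side of Eq. (18) is again pure unit conversion; a quick numerical check, reusing the hypothetical helper and constants defined earlier:

```python
import math

KM_IN_INV_EV = 1.0e5 / HBARC                     # 1 km ~ 5.1e9 eV^-1
_, phi_mu, _ = matter_potentials(1.0e10, 0.0)    # |Y_e - 1| = 1 at Y_e = 0
print(2.0 * math.pi / (abs(phi_mu) * KM_IN_INV_EV))   # ~3.3e-6, cf. Eq. (18)
```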
#### 2 Small mixing angle
An interesting effect arises for propagation in a medium of monotonically varying density. A level-crossing occurs when the neutrinos traverse a region in which the effective masses are virtually degenerate, i.e., where the resonance condition
$$2EV_{\alpha \beta }=\mathrm{\Delta }m_{\alpha \beta }^2\mathrm{cos}2\theta ,$$
(19)
is satisfied. Provided that the matter density is changing sufficiently slowly, a $`\nu _\alpha `$ entering the resonance will emerge as a $`\nu _\beta `$, where $`\alpha \ne \beta `$, and vice versa. This is the Mikheyev–Smirnov–Wolfenstein (MSW) effect and is particularly prominent for $`\mathrm{sin}2\theta \simeq 0`$. The conversion efficiency depends on the ratio of the physical width of the resonance region to the corresponding effective oscillation length of the system, or equivalently,
$$\gamma \equiv \frac{\left(\frac{\mathrm{\Delta }m_{\alpha \beta }^2}{2E}\mathrm{sin}2\theta \right)^2}{\left|\frac{dV_{\alpha \beta }}{dr}\right|}|_{\text{res}},$$
(20)
where $`\frac{dV_{\alpha \beta }}{dr}`$ is the rate of change of the matter potential along the neutrino’s path. The larger the ratio (otherwise known as the adiabaticity parameter), the more effective the transformation.
Our interest in the case of small vacuum mixing lies in the possible existence of such a resonance within the baryonic region. If Eq. (19) is satisfied therein and the adiabaticity parameter $`\gamma `$ is sufficiently large, the ensuing conversion of all active neutrinos to sterile species means that, beyond the resonance, no neutrinos are available for annihilation. This loss of energy is practically irretrievable, unless a second resonance exists through which steriles reconvert to actives.<sup>††</sup><sup>††</sup>††This situation is in fact not as contrived as it first seems. A double $`\nu _e`$–$`\nu _s`$ resonance has been shown to exist in the post bounce hot bubble in a Type II supernova . However, this possibility will not be dealt with here, owing to its extreme dependence on the density profile of the progenitor; the spatial distribution of baryons in a Type II supernova is perhaps not representative of those in core collapse scenarios in general. The reduction in the total energy deposition rate hinges on the location of the resonance, since the $`\nu \overline{\nu }`$ annihilation rate per unit volume $`q`$ generally has an $`r`$-dependence, such as $`q\propto r^{-8}`$ for spherical geometry . We leave this calculation for the interested numerical modeller.
We now explore the parameter space in which resonant conversion to sterile neutrinos would significantly decrease the energy deposition by $`\nu \overline{\nu }`$ annihilation, by calculating the constraints on the oscillation parameters required for the prevention of energy loss via this mechanism. Consider a binary neutron star merger, and let us suppose that each active species exhibits small mixing only with its sterile partner. Assuming that each $`\nu _\alpha ^{\prime }`$ is lighter than its active counterpart, the extremely low value of $`Y_e`$ in this environment means that the resonance condition can only be satisfied by antineutrino systems, as indicated by Eq. (6). Given the relevant densities, $`\rho \sim 10^9`$–$`10^{11}\text{g}\text{cm}^{-3}`$, the average $`20`$–$`30`$ MeV (anti)neutrino will undergo resonant conversion if the squared mass difference of the oscillating system happens to lie in the approximate range
$$10^3\lesssim \mathrm{\Delta }m^2/\text{eV}^2\lesssim 10^5.$$
(21)
Furthermore, by holding the quantity $`Y_e`$ constant, we rewrite the adiabaticity parameter in more civilised units,
$$\gamma =\frac{3}{\eta }\left[\left(\frac{\mathrm{\Delta }m^2}{\text{eV}^2}\right)\left(\frac{\text{MeV}}{E}\right)\mathrm{sin}2\theta \right]^2\left(\frac{10^{10}\text{g}\text{cm}^{-3}\text{km}^{-1}}{\left|\frac{d\rho }{dr}\right|}\right)|_{\text{res}},$$
(22)
where $`\eta =1-3Y_e`$ and $`1-Y_e`$ for $`\overline{\nu }_e\leftrightarrow \overline{\nu }_e^{\prime }`$ and $`\overline{\nu }_{\mu ,\tau }\leftrightarrow \overline{\nu }_{\mu ,\tau }^{\prime }`$ respectively. One may reasonably expect the density gradient $`\frac{d\rho }{dr}`$ to be some undoubtedly highly model-dependent function of $`r`$. For our crude analysis, we make the approximation
$$\frac{d\rho }{dr}\simeq \frac{\rho _{\text{max}}-\rho _{\text{min}}}{R}\simeq \frac{10^{11}\text{g}\text{cm}^{-3}}{10\text{km}}=10^{10}\text{g}\text{cm}^{-3}\text{km}^{-1}.$$
(23)
If we demand $`\gamma \lesssim 1`$ such that resonant conversion to $`\overline{\nu }_s`$ is “non-adiabatic” and thus inefficient, then by Eqs. (22) and (23) together with $`Y_e=0.05`$, the following approximate constraints on the vacuum mixing angle are obtained,
$$\mathrm{sin}^22\theta \lesssim 10^{-4}\text{–}10^{-8},$$
(24)
for the range of squared mass differences in Eq. (21), where we have taken the neutrino energy to be the average $`20`$–$`30`$ MeV intrinsic to binary neutron star mergers.
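These constraints follow directly from setting $`\gamma =1`$ in Eq. (22) with the gradient of Eq. (23); the sketch below (our own cross-check, with $`E=25`$ MeV and $`Y_e=0.05`$ as representative values from the text) reproduces the range quoted in Eq. (24):

```python
import math

def gamma_adiabatic(dm2_eV2, E_MeV, sin_2theta, eta, drho_dr):
    """Adiabaticity parameter of Eq. (22); drho_dr is measured in units
    of 1e10 g cm^-3 per km, as in Eq. (23)."""
    return 3.0 / eta * (dm2_eV2 / E_MeV * sin_2theta) ** 2 / drho_dr

# sin^2(2 theta) giving gamma = 1 at E = 25 MeV with eta = 1 - Y_e = 0.95:
for dm2 in (1.0e3, 1.0e5):
    s22 = 0.95 / 3.0 * (25.0 / dm2) ** 2
    assert abs(gamma_adiabatic(dm2, 25.0, math.sqrt(s22), 0.95, 1.0) - 1.0) < 1e-9
    print(dm2, s22)   # s22 ~ 2e-4 and ~2e-8, spanning the range of Eq. (24)
```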
### B Active–active oscillations
In the following, we briefly examine the consequences of mixing amongst the active flavours.
Oscillations between $`\nu _\mu `$ and $`\nu _\tau `$ are not affected by the presence of matter, since the two flavours interact identically with ordinary matter. For the same reason, these thermal neutrinos are produced with identical energy spectra and are therefore of little interest from the perspective of $`\nu _\mu \leftrightarrow \nu _\tau `$ oscillations.
Contrastingly, the $`\nu _e`$–$`\nu _{\mu ,\tau }`$ system may experience resonant conversion, given the correct oscillation parameters. Since $`\nu _e`$’s are more abundant, this implies a possible decrease in the $`\nu _e`$ flux beyond the resonance. However, as the more energetic $`\nu _{\mu ,\tau }`$’s are simultaneously converted to $`\nu _e`$’s, one may reasonably expect the reduction in flux to be compensated for by a harder spectrum. Similarly, the increase in $`\nu _{\mu ,\tau }`$ flux is accompanied by a softening of the spectrum. Thus, summing over all flavours, the energy deposition rate due to $`\nu \overline{\nu }`$ annihilation should, to a first approximation, exhibit minimal difference from the no-oscillation case.
## IV Mirror stars
The concept of a mirror world was introduced as a means to retain parity and time-reversal transformations (Improper Lorentz Transformations) as exact symmetries of Nature. In essence, the content of the Standard Model of particle physics is enlarged to include a mirror sector such that every ordinary particle is partnered with a mirror image differing only in its handedness. The resulting theory has been called the Exact Parity Model (see also Ref. for a different model). These particles participate in mirror interactions identical in nature to ordinary processes, but are inert with respect to the ordinary strong, electromagnetic and weak forces. Thus the mirror world evolves as we do, complete with stellar mergers and collapses, its only link to the ordinary world being through gravitational coupling, and the mixing of colourless and electrically neutral ordinary–mirror partners. If neutrinos have non-degenerate masses, maximal ordinary–mirror neutrino oscillations are a necessary consequence of the underlying exact parity symmetry. Interestingly, the maximal mixing of $`\nu _e`$ with its mirror partner can solve the solar neutrino problem, while the maximal mixing of $`\nu _\mu `$ with its mirror partner can solve the atmospheric neutrino problem .
Recently, Blinnikov has proposed that the central engines of GRBs may be cataclysmic astrophysical events involving mirror stars . We now examine the implications of matter-affected neutrino oscillations for this proposal.
Mirror neutrinos emitted in a mirror merger/collapse must traverse a region of excess mirror baryons and suffer the same matter effects as do their ordinary counterparts. Thus interactions between mirror neutrinos and the mirror ambience are equally well described by the matter potentials written down earlier in Eq. (6), save for a change of labels — $`\alpha `$ becomes $`\alpha ^{\prime }`$, where the primed symbol now denotes a mirror particle. In this environment, our ordinary $`\nu _e`$, $`\nu _\mu `$ and $`\nu _\tau `$ are effectively what were previously labelled as sterile neutrinos.
Given that the $`\nu \overline{\nu }`$ annihilation rate per unit volume generally decreases with $`r`$, in order to channel as much energy as possible towards the generation of an ordinary GRB, rapid maximal ordinary–mirror oscillations for both neutrinos and antineutrinos throughout the progenitor are desired. However, as suggested by the results of the previous section, this situation cannot be realised by the $`\nu _e`$–$`\nu _e^{\prime }`$ and $`\nu _\mu `$–$`\nu _\mu ^{\prime }`$ systems, for which oscillations are highly suppressed at the nominal densities. Pushing the squared mass differences to their respective upper limits, substantial mixing is possible at densities lower than $`\rho \sim 10^3`$–$`10^4\text{g}\text{cm}^{-3}`$ by Eq. (13). But the annihilation of ordinary neutrinos will now take place at large distances where the rate is rendered insignificant by geometric factors. As an illustration, suppose that the progenitor is spherical and that ordinary $`\nu _e`$ and $`\overline{\nu }_e`$ are available in large quantities only at $`r>r_0`$ where mixing is not suppressed. We estimate the efficiency of the energy deposition due to $`\nu _e\overline{\nu }_e`$ annihilation to be
$$\frac{\dot{Q}_{e\overline{e}}}{\dot{Q}_{e\overline{e}}^{\text{ord}}}\simeq \frac{L_eL_{\overline{e}}{\displaystyle \int _{r_0}^{\infty }}r^{-8}r^2dr}{L_e^{\text{ord}}L_{\overline{e}}^{\text{ord}}{\displaystyle \int _{r_\nu }^{\infty }}r^{-8}r^2dr}=\frac{L_eL_{\overline{e}}}{L_e^{\text{ord}}L_{\overline{e}}^{\text{ord}}}\left(\frac{r_0}{r_\nu }\right)^{-5},$$
(25)
where $`\dot{Q}`$ is the integrated energy deposition rate, $`L_e`$ and $`L_{\overline{e}}`$ are the effective luminosities of $`\nu _e`$ and $`\overline{\nu }_e`$ respectively, $`r_\nu `$ the radius of the emitting surface, and the subscript “ord” denotes ordinary. Equation (25) clearly demonstrates that a distance as small as $`r_0\simeq 4r_\nu `$ is enough to produce at least a thousand-fold decrease in the efficiency (since $`L_\alpha \lesssim L_\alpha ^{\text{ord}}`$). Thus ordinary $`\nu _e\overline{\nu }_e`$ and $`\nu _\mu \overline{\nu }_\mu `$ annihilation in a mirror event may be safely ignored.
Conversely, the $`\nu _\tau `$–$`\nu _\tau ^{\prime }`$ system may at least partially fulfil the aforementioned requirements, if $`\mathrm{\Delta }m_{\tau \tau ^{\prime }}^2`$ is sufficiently large for maximal mixing to be attained not too far from the neutrinosphere (with the usual caveats regarding possible cosmological constraints understood). Be this the case, rapid maximal oscillations will lead to the effective generation of a $`\nu _\tau `$ and a $`\overline{\nu }_\tau `$ flux, each with a luminosity equal to half of that expected from an ordinary merger/collapse. This implies that the energy deposition rate per unit volume at $`r`$ is a factor of four smaller than that due to $`\nu _\tau \overline{\nu }_\tau `$ annihilation alone in an ordinary event. In the standard picture, all three active flavours contribute roughly equal amounts of energy towards the burst. It follows that the total annihilation rate per unit volume at $`r`$ must be some ten times less than the ordinary rate. Furthermore, that ordinary annihilation only takes place at $`r>r_0`$ introduces a geometric reduction factor. Thus, assuming spherical geometry, we estimate the overall efficiency of energy deposition to be
$$\frac{\dot{Q}_{\text{total}}}{\dot{Q}_{\text{total}}^{\text{ord}}}\simeq \frac{\dot{Q}_{\tau \overline{\tau }}}{\sum _{\alpha =e,\mu ,\tau }\dot{Q}_{\alpha \overline{\alpha }}^{\text{ord}}}\simeq \frac{L_\tau L_{\overline{\tau }}}{\sum _{\alpha =e,\mu ,\tau }L_\alpha ^{\text{ord}}L_{\overline{\alpha }}^{\text{ord}}}\left(\frac{r_0}{r_\nu }\right)^{-5}\simeq \frac{1}{10}\left(\frac{r_0}{r_\nu }\right)^{-5},$$
(26)
where the assumption $`r_{\nu _e}\simeq r_{\overline{\nu }_e}\simeq \cdots \simeq r_{\nu _\tau }\equiv r_\nu `$ is implicit. As an illustration, the matter density in a core collapse is such that a $`\nu _\tau `$–$`\nu _\tau ^{\prime }`$ system with $`\mathrm{\Delta }m^2\sim 10\text{eV}^2`$ may enjoy maximal mixing beyond $`r_0\sim 2r_\nu `$.<sup>‡‡</sup><sup>‡‡</sup>‡‡These numbers are inferred from Figure 1b in Ref. for a Type II supernova at $`\sim 6`$ s post bounce. Thus a GRB generated by such a mirror event must be approximately $`300`$ times less energetic than one produced by an equivalent event in the ordinary world. The reward, however, is that the baryon-loading problem is virtually eliminated.
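Both geometric estimates above can be reproduced with a one-line function (an illustrative sketch of ours, assuming the $`q\propto r^{-8}`$ scaling quoted earlier; luminosity ratios and the flavour fraction enter as simple multiplicative factors):

```python
def relative_efficiency(r0_over_rnu, lum_factor=1.0, flavour_factor=1.0):
    """Eqs. (25)-(26): energy-deposition efficiency relative to an ordinary
    event when annihilation is confined to r > r0, for q proportional to r^-8."""
    return flavour_factor * lum_factor * r0_over_rnu ** (-5)

print(1.0 / relative_efficiency(4.0))             # ~1e3: Eq. (25) with r0 ~ 4 r_nu
print(1.0 / relative_efficiency(2.0, 1.0, 0.1))   # ~320: the factor ~300 after Eq. (26)
```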
We shall not consider small angle resonant conversion of mirror to ordinary neutrinos since this process is generally not simultaneously available for both neutrinos and antineutrinos. Ordinary annihilation necessarily requires the presence of both $`\nu `$ and $`\overline{\nu }`$. Thus small mixing between mirror partners alone will not lead to the production of ordinary GRBs in mirror events.
## V Summary and conclusion
Matter-affected neutrino oscillations in GRB progenitors are studied in this paper. For simplicity, all oscillation schemes examined are essentially independent two-neutrino systems. It is found that oscillations amongst the ordinary, active flavours — $`\nu _e`$, $`\nu _\mu `$ and $`\nu _\tau `$ — have minimal effects on the energetics of the burst. Maximal $`\nu _e`$ and $`\nu _\mu `$ oscillations with their respective sterile partners are also expected to be of little consequence. Contrastingly, if $`\nu _\tau `$ is allowed to oscillate maximally to its sterile partner with a squared mass difference $`\mathrm{\Delta }m^2\gtrsim 10^4\text{eV}^2`$, the energy available for the ultimate GRB may suffer a $`25`$ % decrease. However, reconciliation with constraints imposed by cosmological closure and big bang nucleosynthesis renders this option unlikely.
In the small mixing angle regime, the possible existence of an MSW resonance in the baryonic region implies a generally irretrievable loss of energy beyond the resonance in the form of sterile neutrinos. By demanding minimal loss, we are able to determine some crude constraints on the oscillation parameters. These can be found in the appropriate section in the paper.
Contrary to earlier claims, matter effects alter the oscillation pattern in such a way that the “temporary” conversion to $`\nu _s`$ as a means to bypass the baryonic region cannot be achieved in any realistic GRB progenitor. The fireball will remain as dirty as dictated by the merger/collapse.
The suppression of mirror to ordinary neutrino oscillations by matter effects also argues against the viability of mirror mergers/collapses as ordinary GRB progenitors. Even the most efficient $`\nu _\tau `$–$`\nu _\tau ^{\prime }`$ maximal oscillations, with $`\mathrm{\Delta }m^2\gtrsim 10^4\text{eV}^2`$, would lead to some factor of ten decrease in the energy of the resultant burst relative to that generated by an equivalent event in the ordinary world. Further deterioration inevitably accompanies, at least in the case of spherical geometry, any decrease in the squared mass difference. Ultimately, the central mirror engine will perhaps need to be a few hundred (or more) times more energetic than its ordinary counterpart if it is to produce an ordinary GRB that is compatible in energy with observations. However, with the guaranteed elimination of the baryon-loading problem, this remains an option.
We stress at this point that the study of matter-affected oscillations is highly model-dependent — the word “model” referring to both the GRB and the neutrino model. Analyses of two-neutrino systems merely serve to illustrate some of the possible effects. But most importantly, we wish to emphasise the necessity to consider matter effects on the oscillation pattern, if neutrinos are to be the means of energy transportation in any GRB progenitor. At this stage, there is no clear evidence for the correct GRB or neutrino model. Hopefully, with new neutrino experiments underway, the latter will be at least partially resolved in the not too distant future.
###### Acknowledgements.
This work was supported in part by the Australian Research Council and in part by the Commonwealth of Australia’s postgraduate award scheme.
# Ionized Gas in Damped Lyman-𝛼 Systems and Its Effects on Elemental Abundance Studies
## 1 EVIDENCE FOR PHOTOIONIZED GAS IN DAMPED Ly$`\alpha `$ SYSTEMS
Observations of damped Ly$`\alpha `$ systems (DLAs), which may represent the progenitors to modern disk galaxies, along the sightlines to high-redshift QSOs allow astronomers to trace the evolution of elemental abundances over 90% of the age of the Universe. This is typically achieved by comparing the measured column density of a single ion of a given element, $`X^i`$, with that of neutral hydrogen, $`\mathrm{H}^\mathrm{o}`$. The assumption is then made that the unobserved ionization stages make negligible contributions and $`N(X^i)/N(\mathrm{H}^\mathrm{o})\approx N(X)/N(\mathrm{H})`$.
The total H I column densities of DLAs are, by definition, $`N(\text{H I})\ge 2\times 10^{20}`$ cm<sup>-2</sup>; if collected in a single cloud, such a large column density would imply small ionization corrections. Although some of these systems include such monolithic absorbers, high-resolution data show DLAs are often made up of a collection of several (in many cases $`5`$–$`10`$) lower column density clouds (Lu et al. 1996; Pettini et al. 1999; Prochaska & Wolfe 1999). Furthermore, both the Lu et al. (1996) and Prochaska & Wolfe (1999) surveys of DLAs find a very conspicuous correlation between the velocity structure seen in the absorption lines of low-ionization species (e.g., Si II, Fe II, and Zn II) and the structure observed in Al III, a tracer of moderately (photo)ionized gas \[IP$`(\mathrm{Al}^+,\mathrm{Al}^{+2})=(18.8,28.4)`$ eV\]. Such an obvious correlation is not observed between the low-ionization species and the more highly-ionized ions such as Si IV or C IV.
Similar arrangements of low- and intermediate-ions can be found along selected sightlines extending into the halo of the Milky Way. Towards HD 93521 (Spitzer & Fitzpatrick 1993) and $`\rho `$ Leo (Howk & Savage 1999) the tracers of neutral and photoionized gas have relative velocity component distributions resembling those of Al III and low ions in DLAs. The total hydrogen column densities towards these stars are $`\mathrm{log}N(\text{H I})=20.10`$ and 20.44, respectively (Diplas & Savage 1994). In the Milky Way, the scale height of Al III is consistent with that of the free electrons, $`h_z\sim 1`$ kpc, which is more extended than the H I distribution (Savage, Edgar, & Diplas 1990).
Most singly-ionized metal species that are dominant ionization stages in H I-bearing regions may also be produced in photoionized clouds where H<sup>o</sup> is a small fraction of the total hydrogen content. The formation of metal absorption lines in both ionized and neutral regions can have a significant effect on elemental abundance determinations. Ionization can be an important issue for high-precision studies of elemental abundances in the Milky Way (Sofia & Jenkins 1998; Howk, Savage, & Fabian 1999; Sembach et al. 1999). The Al III in DLAs, with velocity structure that is often indistinguishable from that of the low ions (Lauroesch et al. 1996), suggests the long-held assumption that ionization effects are negligible in these systems may be unwarranted. In this work we examine the contribution of photoionized gas to the observed metal-line absorption in damped Ly$`\alpha `$ systems.
## 2 THE IONIZING SPECTRUM
The major uncertainty in determining the ionization balance in the DLAs is the unknown shape of the ionizing spectrum. The two most likely origins for ionizing photons in DLAs are: internal stellar and external background sources. Ionization of the DLAs by external sources, e.g., by the integrated light from QSOs, AGNs, starbursts, and normal galaxies (Haardt & Madau 1996; Madau & Shull 1996), requires that the ionizing photons “leak” into the DLAs. This might seem unlikely given the large observed neutral hydrogen column densities, but the multi-component nature of these systems implies that each individual cloud may have a much lower column density than the total. Furthermore, the ionization of the warm ionized medium (WIM) in the Milky Way requires $`\sim 15\%`$ of the ionizing photon output of Galactic OB stars (Reynolds 1993). This implies that the gaseous structure of a present day disk galaxy is such that ionizing photons can travel very large distances from their origin, and of order $`5\%`$ may escape the Galaxy completely (Bland-Hawthorn & Maloney 1999). We assume a similar arrangement in the DLAs. For the external ionization case, we adopt an updated version of the Haardt & Madau (1996; hereafter HM) QSO ultraviolet background in our photoionization models. This modified background spectrum (Haardt 1999, priv. comm.) assumes $`q_o=0.5`$ (instead of 0.1), a power law index for the QSO emission spectrum of $`\alpha =1.8`$ (rather than 1.5), and a redshift evolution of the QSO number density that follows the trend described by Madau, Haardt, & Rees (1999).
Internal ionization, in this work, refers to photoionization by stellar sources internal to the DLAs. If DLAs represent the early phases of massive disk galaxies (e.g., Wolfe & Prochaska 1998), it is reasonable to expect some star formation in these systems. Searches for Ly$`\alpha `$ and H$`\alpha `$ emission from DLAs imply low star formation rates of $`\lesssim 5`$–$`20`$ M<sub>⊙</sub> yr<sup>-1</sup> (Bunker et al. 1999; Lowenthal et al. 1995), with one detection of Ly$`\alpha `$ emission suggesting $`\sim 1`$ M<sub>⊙</sub> yr<sup>-1</sup> (Warren & Møller 1996). In the Milky Way, where ionizing photons from early-type stars must leak through the neutral ISM to ionize the WIM, the star formation rate is of order $`2`$–$`5`$ M<sub>⊙</sub> yr<sup>-1</sup> (Mezger 1987; McKee 1989; McKee & Williams 1997). The perpendicular column density of ionized hydrogen in the WIM is about 1/4 that of neutral hydrogen at the solar circle, thus demonstrating that a relatively large fraction of interstellar hydrogen can be ionized with a modest level of star formation. For the internal ionization case, we adopt the spectrum of a typical late O star as the ionizing spectrum. We use an ATLAS line-blanketed model atmosphere (Kurucz 1991) with an effective temperature $`T_{eff}=33,000`$ K and $`\mathrm{log}(g)=4.0`$. Our work on the ionization of the Galactic WIM (Sembach et al. 1999) suggests that such a spectrum is able to match the constraints imposed by emission line observations of the ionized gas (Reynolds & Tufte 1995; Reynolds et al. 1998; Haffner, Reynolds, & Tufte 1999).
We consider only a single temperature stellar source for the internal case, and a QSO-dominated spectrum for the external ionization case. The reader should be aware that the true ionizing spectrum may be a combination of soft (internal) and hard (external) ionizing spectra. The lack of associated Si IV absorption with the low ions favors either the softer stellar spectrum or a very low ionization parameter.
## 3 PHOTOIONIZATION MODELS
We use the CLOUDY ionization equilibrium code (Ferland et al. 1998; Ferland 1996) to model the ionization of DLAs. We assume a plane-parallel geometry with the ionizing spectrum incident on one side. Rather than match the total H I column density in our models, we stop the integration at the point where the local ionization fraction of neutral hydrogen climbs above 10%, i.e., $`x(\mathrm{H}^\mathrm{o})\equiv N(\mathrm{H}^\mathrm{o})/N(\mathrm{H}_{\mathrm{tot}})>0.1`$. Our models therefore treat the (almost) fully-ionized regions assumed to envelop the neutral, H I-bearing clouds. The relative mix of neutral and ionized material can be inferred from observations of adjacent ions, e.g., Al II/Al III. Our models assume a base metal abundance of 0.1 solar, with relative heavy element abundances equivalent to those observed in the Galactic warm neutral medium (Sembach et al. 1999; Howk et al. 1999). We include interstellar grains for heating and cooling processes (see Ferland 1996 and Baldwin et al. 1991), with a dust to gas ratio 0.1 of the Galactic value. Our models are only as accurate as the input atomic data for the CLOUDY code, and we refer the reader to Ferland (1996) and Ferland et al. (1998) for discussions of the uncertainties (see also our earlier work with CLOUDY: Sembach et al. 1999; Howk & Savage 1999). In particular, the dielectronic recombination coefficients for elements in the third and fourth row of the periodic table are typically not well known, and the radiative recombination coefficients for many of the heavier elements (e.g., Zn and Cr) are often based on somewhat uncertain theoretical considerations.
We have computed CLOUDY models for the $`z\simeq 2.0`$ HM spectrum and for the Kurucz model atmosphere over a range of ionization parameters, $`\mathrm{\Gamma }`$. In this case $`\mathrm{\Gamma }`$ is the dimensionless ratio of total hydrogen-ionizing photon density to hydrogen particle density at the face of the cloud. In Figure 1 we present the ionization fractions of several ions, $`x(X^i)`$, for the HM spectrum as a function of the assumed ionization parameter. The top panel shows $`x(X^i)`$ for elements with at least two potentially measurable ionization stages: Si, Fe, and Al. The bottom panel shows the effects of ionization on relative metal abundances, tracing values of $`x(X^i)/x(\mathrm{Fe}^+)`$ for several commonly measured ions. These plots can be used to correct for ionization effects if one is able to estimate $`\mathrm{\Gamma }`$.
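For orientation, since $`\mathrm{\Gamma }`$ is simply the photon-to-hydrogen density ratio at the illuminated face, it follows from an ionizing photon flux as in the sketch below (our own illustration, not part of the CLOUDY runs; the example flux and density in the comment are arbitrary):

```python
C_LIGHT = 2.99792458e10   # speed of light in cm s^-1

def ionization_parameter(phi_ion, n_H):
    """Gamma = n_gamma / n_H at the illuminated face, for an ionizing photon
    flux phi_ion (photons cm^-2 s^-1) and a hydrogen density n_H (cm^-3)."""
    return phi_ion / (C_LIGHT * n_H)

# e.g. phi_ion ~ 1e5 photons cm^-2 s^-1 onto gas with n_H ~ 0.1 cm^-3
# gives log Gamma ~ -4.5, in the low-Gamma regime favoured in the text.
```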
For large values of $`\mathrm{log}\mathrm{\Gamma }`$ ($`\gtrsim -3.0`$) the predicted strength of the Si IV becomes large, with $`x(\mathrm{Si}^+)/x(\mathrm{Si}^{+3})\sim 2`$, contrary to observations. At $`\mathrm{log}\mathrm{\Gamma }=-4.0`$ this ratio is $`\sim 100`$. We note that the behavior of the ratio $`x(\mathrm{Ni}^+)/x(\mathrm{Cr}^+)`$ in Figure 1 also suggests a low ionization fraction, given that the observed ratio $`N(\mathrm{Ni}^+)/N(\mathrm{Cr}^+)`$ is typically very near the solar Ni/Cr ratio.<sup>1</sup><sup>1</sup>1This result relies on new $`f`$-value determinations by Fedchak & Lawler (1999). Using these new oscillator strengths we find a (weighted) average abundance $`[\mathrm{Cr}/\mathrm{Ni}]=+0.013\pm 0.023`$ in the 11 DLAs containing both elements in the Prochaska & Wolfe (1999) sample. The utility of this ratio as an indicator of the ionization parameter would be improved with better atomic data. Figure 1 shows that while Al III is a tracer of ionized gas, it accounts for less than 10% of the total aluminum column density, even in regions of fully-ionized hydrogen (where Al II or Al IV dominate). Unfortunately, this implies that past arguments for a lack of ionized gas in DLAs based upon a relatively large Al II/Al III ratio are possibly erroneous.
Figure 2 shows the CLOUDY photoionization calculations performed assuming internal sources of ionizing photons, i.e., star formation. Again the fraction of aluminum in Al III is relatively small. If we assume that the properties of the ionized gas in the DLAs are similar to those of the WIM in the Milky Way, a relatively low ionization parameter is preferred (e.g., $`\mathrm{log}\mathrm{\Gamma }\simeq -3.7`$ is adopted by Sembach et al. 1999). The $`x(\mathrm{Ni}^+)/x(\mathrm{Cr}^+)`$ ratio suggests a low value of $`\mathrm{\Gamma }`$, as in the external ionization case. For the adopted stellar spectrum, the fraction of silicon in the form of Si<sup>+3</sup> never rises above 0.1% for the range of ionization parameters considered. Note that this is a considerably smaller fraction than found for high-$`z`$ Lyman limit systems (Steidel & Sargent 1989; Prochaska 1999) and Ly$`\alpha `$ forest clouds (Songaila & Cowie 1996).
## 4 DISCUSSION
Figures 1 and 2 show that even in the case where the Al II/Al III ratio is large, the amount of ionized gas in DLAs can be significant. Comparing certain metal ions to hydrogen may very well systematically overestimate the abundances of DLAs. Figure 3 shows the implied fraction of ionized hydrogen in DLAs, $`f(\mathrm{H}^+)`$, for the stellar and QSO ionizing spectra in the top panel, where we have plotted the results for several different values of $`\mathrm{log}\mathrm{\Gamma }`$. In the middle panel we show the logarithmic error introduced into measurements of \[Zn/H\], defined as
$$ϵ(\mathrm{Zn}/\mathrm{H})\equiv \mathrm{log}\frac{N(\text{Zn II})}{N(\text{H I})}|_{measured}-\mathrm{log}\frac{N(\mathrm{Zn})}{N(\mathrm{H})}|_{intrinsic},$$
(1)
for changing mixtures of neutral and ionized gas, as traced by the Al II/Al III ratio, while the bottom panel shows the equivalent $`ϵ(\mathrm{Si}/\mathrm{H})`$. The predicted values of $`ϵ(\mathrm{Zn}/\mathrm{H})`$ and $`ϵ(\mathrm{Si}/\mathrm{H})`$ vary significantly with the adopted ionizing spectrum and ionization parameter. Errors in the derived values of \[Zn/H\] or \[Si/H\] of a few tenths of a dex are easily achievable even when $`N(\text{Al II})\gg N(\text{Al III})`$.
It should be pointed out that the atomic data for zinc are quite uncertain, with the recombination coefficients being derived from extrapolations of the results for other elements. The atomic data for silicon are more reliable, though the abundance of this element is complicated by its possible inclusion into dust grains. The behavior of $`ϵ(\mathrm{Zn}/\mathrm{H})`$ and $`ϵ(\mathrm{Si}/\mathrm{H})`$ observed in Figure 3 is a common feature for those elements predominantly found in their singly-ionized stage in neutral gas. Figure 3 shows that DLAs with $`f(\mathrm{H}^+)\approx (0.5,0.4,\text{ and }0.2)`$ can have errors of $`ϵ(\mathrm{Si}/\mathrm{H})\approx (0.1,0.07,\text{ and }0.04)`$ dex and $`ϵ(\mathrm{Zn}/\mathrm{H})\approx (0.3,0.2,\text{ and }0.1)`$ dex in the case of internal ionization for $`\mathrm{log}\mathrm{\Gamma }=-3.0`$. This error is larger for smaller ionization parameters. For the external ionizing spectrum these values are $`ϵ(\mathrm{Si}/\mathrm{H})\approx (0.3,0.2,\text{ and }0.05)`$ dex and $`ϵ(\mathrm{Zn}/\mathrm{H})\approx (0.1,0.07,\text{ and }0.03)`$ dex.
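The origin of such errors can be made explicit in a toy two-zone model (our own sketch, not the CLOUDY calculation itself; it assumes the element is entirely singly ionized in the neutral zone and ignores residual H I in the ionized zone):

```python
import math

def epsilon_two_zone(f_Hplus, x_singly_ionized):
    """Toy two-zone version of Eq. (1): the logarithmic abundance error when
    a fraction f_Hplus of the hydrogen is ionized and a fraction
    x_singly_ionized of the element is still singly ionized in the H II zone.
    Assumes the element is fully singly ionized in the neutral zone and
    neglects any residual H I in the ionized zone."""
    f_HI = 1.0 - f_Hplus
    return math.log10((f_HI + x_singly_ionized * f_Hplus) / f_HI)

print(epsilon_two_zone(0.5, 1.0))   # ~0.3 dex, comparable to the values above
```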
The large spread in total metal abundances, \[Zn/H\] (Pettini et al. 1997a, 1999), in DLAs at a given redshift could in part be due to differing ionization conditions. The total spread in abundance at a given redshift can be as high as almost 2.0 dex (Pettini et al. 1997a, 1999), which is not easily explained by ionization effects. However, the standard deviations of measurements in a given redshift interval are of order $`0.3`$–$`0.4`$ dex (Pettini et al. 1997a). This degree of variation is consistent with a range of $`f(\mathrm{H}^+)`$ values between $`0.0`$ and $`0.6`$ in these systems.
If ionization is playing a significant role in determining the apparent distribution of metallicity in DLAs, we might expect lower column density systems to show higher inferred abundances, on average. This is consistent with the claim by Pettini et al. (1999) that the “census” of metals in known DLAs is dominated by high column density, low metallicity systems, while those higher apparent metallicity systems tend to be of lower neutral hydrogen column densities (see also Wolfe & Prochaska 1998). However, one should also be wary of the possible selection effects in identifying high metallicity, high column density absorbers (Pei & Fall 1995; Wolfe & Prochaska 1998; see also Pettini et al. 1999).
Systematic errors in the relative metal abundances can also be significant, depending on the ions compared. Unfortunately, systematic errors in excess of 20% can begin to mimic other effects such as nucleosynthetic enrichment or dust depletion. For example, if the internal stellar ionizing spectrum is appropriate, the errors in the \[Si/Fe\] abundances inferred from $`N(\mathrm{Si}^+)/N(\mathrm{Fe}^+)`$ can mimic the preferential inclusion of iron into dust grains, or the enhancement of $`\alpha `$-elements over iron. For $`f(\mathrm{H}^+)\approx (0.5,0.4,\text{ and }0.2)`$, the systematic errors in \[Si/Fe\] are $`ϵ(\mathrm{Si}/\mathrm{Fe})\approx (+0.4,+0.3,\text{ and }+0.2)`$. Similarly, systematic errors in the \[Cr/Zn\] abundances can mimic the inclusion of chromium into dust: $`ϵ(\mathrm{Cr}/\mathrm{Zn})\approx (-0.3,-0.2,\text{ and }-0.1)`$ for the same $`f(\mathrm{H}^+)`$ values. The values of $`f(\mathrm{H}^+)`$ required to explain the dispersion in inferred \[Zn/H\] metallicities are also sufficient to provide the dispersion in inferred \[Cr/Zn\] values (Pettini et al. 1997b).
There are some ionic ratios that are accurate tracers of relative metal abundances even if ionized gas makes a substantial contribution. For $`f(\mathrm{H}^+)<0.5`$, the ratios of Mn II and Mg II to Fe II should trace Mn/Fe and Mg/Fe to within $`\sim 10\%`$ in the case of the external (hard) ionizing spectrum. The ratio of Si II to Al II should be a reasonable proxy for Si/Al. For the softer stellar spectrum, the ratios of Ni II and Mg II to Si II are reliable tracers of Ni/Si and Mg/Si.
Fe III is a much better tracer of ionized gas than Al III in the sense that it is the dominant ionization stage of iron in the photoionized gas. The $`\lambda `$1122 transition of Fe III may be lost in the Ly$`\alpha `$ forest toward high-redshift quasars, but in select cases this important transition may be useful for providing further information on the ionized gas in the DLAs.
Our calculations suggest that ionized regions may make a significant contribution to the total column density of metal ions in DLAs, and that this contribution can lead to systematic errors in the determination of abundances in these systems. Observational studies of abundances in DLAs should take ionization into account whenever possible, or at the very least assess its possible impact on the derived results.
We thank G. Ferland and collaborators for their years of work on the CLOUDY ionization code, and F. Haardt and P. Madau for providing us an electronic version of their updated UV background spectrum. Our thanks also to M. Pettini, J. Lauroesch, and J. Prochaska for helpful comments that have improved the presentation of our work. We acknowledge support from the NASA LTSA grant NAG5-3485.
# SOME PROPERTIES OF TYPE I′ STRING THEORY
## 1 Introduction
I am pleased to contribute to this volume in memory of Yuri Golfand. His name will be remembered by future generations of physicists for his 1971 paper with Likhtman, which introduced the four-dimensional super-Poincaré algebra for the first time. Recognizing that such a symmetry algebra is a consistent mathematical possibility was certainly a remarkable achievement. It is a curious coincidence that this paper appeared within a few days of Pierre Ramond’s paper on fermionic strings. Communications were not so good in those days, and the Golfand–Likhtman work was not generally known (at least in the West) for several years. As a result, its influence in driving the development of supersymmetry was not as great as it should have been. In fact, supersymmetric theories in two dimensions were developed to describe the world-sheet theory of RNS strings, and this motivated Wess and Zumino to seek four-dimensional analogs. Only years later did we understand that RNS strings, properly interpreted, have local 10-dimensional spacetime supersymmetry.
The version of the theory that received the most attention prior to 1985 was the one containing both open and closed strings, which Mike Green and I called the type I theory, since it has one ten-dimensional supersymmetry. In 1984 we showed that this theory is inconsistent (due to gauge anomalies) unless the gauge group is chosen to be SO(32). Then the anomalies cancel, and consistency is achieved. In this manuscript, I propose to review some of the interesting features that appear when one of the spatial dimensions is chosen to be a circle. In this case an alternative $`T`$ dual description, known as type I′, is available. This description gives a different viewpoint for understanding various phenomena, such as gauge symmetry enhancement. The material presented here is not new, though it may be organized somewhat differently than has been done before.
## 2 T Duality
Let $`X^\mu (\sigma ,\tau )`$ denote the embedding functions of a closed string world-sheet in ten-dimensional spacetime. In the case of a trivial flat geometry, the world sheet field equations are simple two-dimensional wave equations. Suppose that one of the nine spatial dimensions, $`X^9`$ say, is circular with radius $`R`$. Denoting $`X^9`$ by $`X`$ for simplicity, the general solution of the wave equation is
$$X=mR\sigma +\frac{n}{R}\tau +\text{periodic terms}.$$
(1)
The parameter $`\sigma `$ labels points along the string and is chosen to have periodicity $`2\pi `$. Thus $`m`$ is an integer, called the winding number, which is the number of times the string wraps the spatial circle. The parameter $`\tau `$ is world-sheet time, and correspondingly $`p=n/R`$ is the momentum along the circle. Single-valuedness of $`e^{ipX}`$ requires that $`n`$ is an integer, called the Kaluza–Klein excitation number.
The general solution of the $`2d`$ wave equation consists of arbitrary left-moving and right-moving pieces
$$X(\sigma ,\tau )=X_L(\sigma +\tau )+X_R(\sigma \tau ).$$
(2)
In the particular case described above we have
$`X_L`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(mR+{\displaystyle \frac{n}{R}}\right)(\sigma +\tau )+\cdots `$
$`X_R`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(mR-{\displaystyle \frac{n}{R}}\right)(\sigma -\tau )+\cdots .`$ (3)
T duality is the world-sheet field transformation $`X_R\to -X_R,X_L\to X_L`$ (or vice versa) together with corresponding transformations of world-sheet fermi fields. There are two issues to consider: the transformation of the world-sheet action and the transformation of the space-time geometry. The world-sheet action may or may not be invariant under T duality, depending on the theory, but the classical description of the spacetime geometry is always radically changed. Let us examine that first:
$$X=X_L+X_R\to X_L-X_R=\frac{n}{R}\sigma +mR\tau +\cdots .$$
(4)
Comparing with eq. (1), we see that this describes a closed string on a circle of radius $`1/R`$ with winding number $`n`$ and Kaluza–Klein excitation number $`m`$. Thus we learn the rule that under T duality $`R\to 1/R`$ and $`m\leftrightarrow n`$. In the case of type I or type II superstrings, world-sheet supersymmetry requires that $`\psi _R^9\to -\psi _R^9`$ at the same time. This has the consequence for type II theories of interchanging the IIA theory (for which space-time spinors associated with left-movers and right-movers have opposite chirality) and the IIB theory (for which they have the same chirality). Thus T duality is not a symmetry in this case — rather it amounts to the equivalence of the IIA theory compactified on a circle with radius $`R`$ and the IIB theory on a circle with radius $`1/R`$. If we compactified on a torus instead, and performed T duality transformations along two of the cycles, then this would take IIA to IIA or IIB to IIB and would therefore be a symmetry.
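The invariance of the closed-string spectrum is easy to verify numerically: in $`\alpha ^{\prime }=1`$ units the zero-mode contribution to the mass-squared is $`(n/R)^2+(mR)^2`$, which is unchanged under $`R\to 1/R`$ with $`m`$ and $`n`$ interchanged. A minimal sketch of our own (oscillator contributions omitted):

```python
def zero_mode_mass_sq(n, m, R):
    """Zero-mode closed-string mass^2 in alpha' = 1 units: (n/R)^2 + (m R)^2
    for Kaluza-Klein number n and winding number m (oscillators omitted)."""
    return (n / R) ** 2 + (m * R) ** 2

R = 3.7
for n, m in [(2, 5), (1, 0), (0, 4)]:
    assert abs(zero_mode_mass_sq(n, m, R)
               - zero_mode_mass_sq(m, n, 1.0 / R)) < 1e-9
# The spectrum is unchanged under R -> 1/R with m and n interchanged.
```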
In recent years, D-branes have played a central role in our developing understanding of string theory. These are dynamical objects, which can be regarded as nonperturbative excitations of the theory. They have the property that open strings can end on them. When they have $`p`$ spatial dimensions they are called D$`p`$-branes. If a D$`p`$-brane is a flat hypersurface, the coordinates can be chosen so that it fills the directions $`X^m,m=0,1,\mathrm{},p`$ and has a specified position in the remaining “transverse” dimensions $`X^i=d^i`$ where $`i=p+1,\mathrm{},9`$. An open string ending on such a D-brane is required to satisfy Neumann boundary conditions in tangential directions
$$\partial _\sigma X^m|_{\sigma =0}=0\qquad m=0,1,\dots ,p,$$
(5)
and Dirichlet boundary conditions in the transverse directions
$$X^i=d^i\qquad i=p+1,\dots ,9.$$
(6)
A remarkable fact, which is easy to verify, is that the T duality transformation $`X_R\to -X_R`$ interchanges Dirichlet and Neumann boundary conditions. This implies that an “unwrapped” D$`p`$-brane, which is localized on the circle, is mapped by T duality into a D$`(p+1)`$-brane that is wrapped on the dual circle. This rule meshes nicely with the fact that the IIA theory has stable (BPS) D$`p`$-branes for even values of $`p`$ and the IIB theory has stable D$`p`$-branes for odd values of $`p`$. An obvious question that arises is how the wrapped D-brane encodes the position along the circle of the original unwrapped D-brane. The answer is that a type II D-brane has a U(1) gauge field $`A`$ in its world volume, and as a result a wrapped D-brane has an associated Wilson line $`e^{i\oint A}`$. This gives the dual description of position on the circle.
## 3 Type I Superstrings
Type IIB superstrings have a world-sheet parity symmetry, denoted $`\mathrm{\Omega }`$. This $`Z_2`$ symmetry amounts to interchanging the left- and right-moving modes on the world sheet: $`X_L^\mu \leftrightarrow X_R^\mu ,\psi _L^\mu \leftrightarrow \psi _R^\mu `$. This is a symmetry of IIB and not of IIA, because only in the IIB case do the left and right-moving fermions carry the same space-time chirality. When one gauges this $`Z_2`$ symmetry, the type I theory results. The projection operator $`\frac{1}{2}(1+\mathrm{\Omega })`$ retains the left-right symmetric parts of physical states, which implies that the resulting type I closed strings are unoriented. In addition, it is necessary to add a twisted sector — the type I open strings. These are strings whose ends are associated to the fixed points of $`\sigma \to 2\pi -\sigma `$, which are at $`\sigma =0`$ and $`\sigma =\pi `$. These strings must also respect the $`\mathrm{\Omega }`$ symmetry, so they are also unoriented. The type I theory has half as much supersymmetry as type IIB (16 conserved supercharges instead of 32 — corresponding to a single Majorana–Weyl spinor). This supersymmetry corresponds to the diagonal sum of the $`L`$ and $`R`$ supersymmetries of the IIB theory.
This “orientifold” construction of the type I theory has the entire 10d spacetime as a fixed point set, since $`\mathrm{\Omega }`$ does not act on $`x^\mu `$. Correspondingly a spacetime-filling orientifold plane (an O9-plane) results. This orientifold plane turns out to carry $`-32`$ units of $`RR`$ charge, which must be cancelled by adding 32 D9-branes. Rather than proving this, we can make it plausible by recalling that $`n`$ type I D9-branes carry an SO($`n`$) gauge group. Moreover, we know that the total charge must be cancelled and that SO(32) is the only orthogonal group allowed by anomaly cancellation requirements. Correspondingly, these are the unique choices allowed by tadpole cancellation. As a remark on notation, let me point out that instead of speaking of 32 D$`9`$-branes, we could equivalently speak of 16 D$`9`$-branes and their mirror images. This distinction is simply one of conventions. The important point is that when $`n`$ type I D9-branes and their $`n`$ mirror images coincide with an O9-plane, the resulting system has an unbroken SO(2n) gauge symmetry.
## 4 The Type I′ Theory
We now wish to examine the T-dual description of the type I theory on a spacetime of the form $`R^9\times S^1`$, where the circle has radius $`R`$. We have seen that IIB is T dual to IIA and that type I is an orientifold projection of IIB. Therefore, one should not be surprised to learn that the result is a certain orientifold projection of type IIA compactified on the dual circle $`\stackrel{~}{S}^1`$ of radius $`R^{\prime }=1/R`$. The resulting T dual version of type I has been named type IA and type I′ by various authors. We shall adopt the latter usage here.
We saw that T duality for a type II theory compactified on a circle corresponds to the world-sheet symmetry $`X_R\to -X_R,\psi _R\to -\psi _R`$, for the component of $`X`$ and $`\psi `$ along the circle. This implies that $`X=X_L+X_R\to X^{\prime }=X_L-X_R`$. In the case of type II theories, we saw that $`X^{\prime }`$ describes a dual circle $`\stackrel{~}{S}^1`$ of radius $`R^{\prime }=1/R`$. In the type I theory we gauge world-sheet parity $`\mathrm{\Omega }`$, which corresponds to $`X_L\leftrightarrow X_R`$. Evidently, in the T dual formulation this corresponds to $`X^{\prime }\to -X^{\prime }`$. Therefore this gauging gives an orbifold projection of the dual circle: $`\stackrel{~}{S}^1/Z_2`$. More precisely the $`Z_2`$ action is an orientifold projection that combines $`X^{\prime }\to -X^{\prime }`$ with $`\mathrm{\Omega }`$. This makes sense because $`\mathrm{\Omega }`$ above is not a symmetry of the IIA theory, since left-moving and right-moving fermions have opposite chirality. However, the simultaneous spatial reflection $`X^{\prime }\to -X^{\prime }`$ compensates for this mismatch.
The orbifold $`\stackrel{~}{S}^1/Z_2`$ describes half of a circle. In other words, it is the interval $`0\le X^{\prime }\le \pi R^{\prime }`$. The other half of the circle should be regarded as also present, however, as a mirror image that is also $`\mathrm{\Omega }`$ reflected. Altogether the statement of T duality is the equivalence of the compactified IIB orientifold $`(R^9\times S^1)/\mathrm{\Omega }`$ with the type IIA orientifold $`(R^9\times \stackrel{~}{S}^1)/\mathrm{\Omega }\mathcal{I}_1`$. The symbol $`\mathcal{I}_1`$ represents the reflection $`X^{\prime }\to -X^{\prime }`$.
The fixed-point set in the type I′ construction consists of a pair of orientifold 8-planes located at $`X^{\prime }=0`$ and $`X^{\prime }=\pi R^{\prime }`$. Each of these carries $`-16`$ units of $`RR`$ charge. Consistency of the type I′ theory requires adding 32 D$`8`$-branes. Of these, 16 reside in the interval $`0\le X^{\prime }\le \pi R^{\prime }`$ and 16 are their mirror images located in the interval $`\pi R^{\prime }\le X^{\prime }\le 2\pi R^{\prime }`$. Clearly, these D$`8`$-branes are the T duals of the D$`9`$-branes of the type I description.
The positions of the D$`8`$-branes along the interval are determined in the type I description by Wilson lines in the Cartan subalgebra of SO(32). Since this group has rank 16, its Cartan subalgebra has 16 generators. Let $`A^I`$ denote the component of the corresponding 16 gauge fields along the circular direction. These correspond to compact U(1)’s, so their values are characterized by angles $`\theta _I`$. These determine the dual positions of the D$`8`$-branes to be
$$X_I^{\prime }=\theta _IR^{\prime },\qquad I=1,2,\dots ,16.$$
(7)
The SO(32) symmetry group is broken by the Wilson lines to the subgroup that commutes with the Wilson line matrix. In terms of the type I′ description this gives the following rules (implemented in the short sketch following the list):
* When $`n`$ D$`8`$-branes coincide in the interior of the interval, this corresponds to an unbroken U($`n`$) gauge group.
* When $`n`$ D$`8`$-branes coincide with an O8-plane they give an unbroken SO($`2n`$) gauge group.
In both cases the gauge bosons arise as zero modes of 8-8 open strings. In the second case the mirror-image D$`8`$-branes also contribute. As we will explain later, this is not the whole story. Further symmetry enhancement can arise in other ways.
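These two rules are simple enough to automate. The following sketch (hypothetical brane positions; only the open-string gauge factors are counted, not the two closed-string U(1)’s discussed in the next paragraph) groups coincident D$`8`$-branes and assigns the corresponding factor:

```python
import numpy as np

def gauge_group(positions, R_prime, tol=1e-9):
    """Unbroken open-string gauge factors for D8-branes on [0, pi*R'].

    `positions` lists the 16 brane coordinates X'; the mirror images are
    implied.  Coincident stacks in the interior give U(n), stacks on an
    O8-plane (X' = 0 or X' = pi*R') give SO(2n).
    """
    factors = []
    pos = np.sort(np.asarray(positions, dtype=float))
    i = 0
    while i < len(pos):
        j = i
        while j + 1 < len(pos) and abs(pos[j + 1] - pos[i]) < tol:
            j += 1
        n = j - i + 1
        on_plane = abs(pos[i]) < tol or abs(pos[i] - np.pi * R_prime) < tol
        factors.append(f"SO({2 * n})" if on_plane else f"U({n})")
        i = j + 1
    return " x ".join(factors)

# Trivial Wilson line: all 16 branes on one O8-plane -> SO(32).
print(gauge_group([0.0] * 16, R_prime=1.0))
# Ten branes at X' = 0 and six at X' = pi*R' (the N = 2 case considered
# below) -> SO(20) x SO(12).
print(gauge_group([0.0] * 10 + [np.pi] * 6, R_prime=1.0))
```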
The case of trivial Wilson line (all $`A^I=0`$) corresponds to having all 16 D$`8`$-branes (and their mirror images) coincide with one of the O8-planes. This gives SO(32) gauge symmetry, of course. In addition there are two U(1) factors. The corresponding gauge fields arise as components of the 10d metric and B field: $`g_{\mu 9}`$ and $`B_{\mu 9}`$. One combination of these belongs to the 9d supergravity multiplet, whereas the other combination belongs to a 9d vector supermultiplet.
Somewhat more generally, consider the Wilson line
$$\left(\begin{array}{cc}I_{16+2N}& 0\\ 0& -I_{16-2N}\end{array}\right).$$
(8)
This corresponds to having $`8+N`$ D$`8`$-branes coincide with the O8-plane at $`X^{\prime }=0`$ and $`8-N`$ D$`8`$-branes with the O8-plane at $`X^{\prime }=\pi R^{\prime }`$. Generically this gives rise to the gauge symmetry
$$SO(16+2N)\times SO(16-2N)\times U(1)^2.$$
(9)
However, from the S-dual heterotic description of the type I theory, one knows that for a particular value of the radius further symmetry enhancement is possible. Specifically, for heterotic radius $`R_H^2=N/8`$ one finds the gauge symmetry enhancement
$$SO(16-2N)\times U(1)\to E_{9-N}.$$
(10)
This radius, converted to type I metric, corresponds to $`R^2=gN/8`$. This symmetry enhancement will be explained from a type I viewpoint later. There are other interesting extended symmetries such as SU(18) and SO(34), which might also be understood from a type I viewpoint, but will not be considered here.
## 5 D0-Branes
The type I′ theory is constructed as a type IIA orientifold. As such, its bulk physics — away from the orientifold planes — is essentially that of the type IIA theory. More precisely, there are a number of distinct type IIA vacua distinguished by the difference between the numbers of D$`8`$-branes to the left and to the right. When these numbers match, one has the ordinary IIA vacuum. When they don’t, one has a “massive” IIA vacuum of the kind first considered by Romans. In any case, the ordinary IIA vacuum admits various even-dimensional D-branes. Here I wish to focus on D$`0`$-branes. Later we will discuss what happens to them when they cross a D$`8`$-brane and enter a region with a different IIA vacuum.
D$`0`$-branes of the type I′ theory correspond to type I D-strings that wrap the compactification circle. The Wilson line on the D-strings controls the positions of the dual D$`0`$-branes. A collection of $`n`$ coincident type I D-strings has an O($`n`$) world-volume gauge symmetry. Unlike the case of D$`9`$-branes, the reflection element is included, so that the group really is O($`n`$) and not SO($`n`$). This means that in the case of a single D-string it is $`O(1)=Z_2`$. Thus in this case there are two possible values for the Wilson line $`(\pm 1)`$. The dual type I′ description is a single D$`0`$-brane stuck to one of the orientifold planes, with the value of the Wilson line controlling which one it is.
A single D$`0`$-brane of type I′ stuck to an orientifold plane cannot move off the plane into the bulk. However, a pair of them can do so. To understand this, let us consider a pair of wrapped D-strings of type I, coincident in the other dimensions, which carries an O(2) gauge symmetry. Again, this is T dual to a pair of type I′ D0-branes with positions controlled by the choice of O(2) Wilson line. The inequivalent choices of Wilson line are classified by conjugacy classes of the O(2) gauge group. So we should recall what they are. It is important that O(2), unlike its SO(2) subgroup, is non-Abelian. Correspondingly, there are conjugacy classes of two types:
* The SO(2) subgroup has classes labeled by an angle $`\theta `$. Including the effect of the reflection, inequivalent classes correspond to the range $`0\le \theta \le \pi `$. Such a conjugacy class describes a D$`0`$-brane at $`X^{\prime }=\theta R^{\prime }`$ in the bulk, together with the mirror image at $`X^{\prime }=(2\pi -\theta )R^{\prime }`$. We see that to move into the bulk a second (mirror image) D0-brane had to be provided.
* The reflection elements of O(2) all belong to the same conjugacy class. A representative is the matrix $`\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right)`$. This class corresponds to one stuck D$`0`$-brane on each O8-plane. (A quick numerical check of both class statements follows.)
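Both statements are easy to confirm with a few lines of numerical linear algebra (the angles below are arbitrary test values):

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def refl(t):
    return rot(t) @ np.diag([1.0, -1.0])   # a general reflection in O(2)

# Conjugation by a reflection reverses a rotation angle, so theta and
# -theta label the same class and 0 <= theta <= pi suffices:
g = refl(0.3)
print(np.allclose(g @ rot(0.7) @ np.linalg.inv(g), rot(-0.7)))    # True

# All reflections are conjugate: rot(a) refl(t) rot(a)^-1 = refl(t + 2a):
a, t = 0.4, 0.9
print(np.allclose(rot(a) @ refl(t) @ np.linalg.inv(rot(a)), refl(t + 2 * a)))  # True
```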
## 6 Brane Creation
The solutions of massive type IIA supergravity were investigated by Polchinski and Witten, who showed that they involve a metric and dilaton that vary in one direction. In the context of the type I′ theory this means they vary in all regions for which the numbers of D$`8`$-branes to the left and to the right are unequal. Thus the only case for which this effect does not occur is the SO(16) $`\times `$ SO(16) configuration with eight D$`8`$-branes attached to each of the O8-planes. (This case is closely related to the M theory description of the $`E_8\times E_8`$ theory.)
We can avoid describing the $`X^{\prime }`$ dependence of the metric explicitly by using proper distance $`s`$ as a coordinate along the interval. (This requires holding the other coordinates fixed.) Then one has $`0\le s\le \pi R^{\prime }`$, $`R^{\prime }=1/R`$. We didn’t address the issue earlier, but when we said the interval has length $`\pi R^{\prime }`$ we really did mean its proper distance. In terms of this coordinate there is a varying dilaton field, and hence a varying string coupling constant $`g_A(s)`$. Only in regions with half of the D$`8`$-branes to the left and half to the right is it constant.
The function $`g_A(s)`$ was obtained by Polchinski and Witten by solving the field equations. A more instructive way of obtaining and understanding the result uses the brane creation process. Consider an isolated D$`0`$-brane in a region where $`g_A(s)`$ is constant. Now suppose the D$`0`$-brane crosses a D$`8`$-brane to enter a region where $`g_A(s)`$ is varying. What happens is that the D$`0`$-brane emerges on the other side with a fundamental string stretched between it and the D$`8`$-brane. This phenomenon, called the Hanany–Witten effect, has been derived by a variety of means. It occurs in many different settings that are related by various duality transformations. (For example, two suitably oriented M5-branes can cross to give rise to a stretched M2-brane.) The intuitive reason that string creation is required can be understood as follows. The original D$`0`$-brane configuration preserved half the supersymmetry and was BPS. Therefore a delicate balance of forces ensured that it was stable at rest. When it crosses the D$`8`$-brane (adiabatically) the amount of supersymmetry remains unchanged and so it should still be stable at rest. To be specific, let us consider the D$`8`$-brane configuration discussed earlier with $`8+N`$ D$`8`$-branes on the $`X^{\prime }=0`$ O8-plane and $`8-N`$ D$`8`$-branes on the $`X^{\prime }=\pi R^{\prime }`$ O8-plane. In this case $`N`$ fundamental strings should connect the D$`0`$-brane to the $`X^{\prime }=0`$ O8-plane. The BPS condition implies that the mass of the D$`0`$-brane should be independent of its position in the interval. Recalling that the mass of a type IIA D$`0`$-brane is $`1/g_A`$, we therefore conclude that for this configuration
$$M_{D0}=\frac{1}{g_A(0)}=\frac{1}{g_A(s)}+NT_{F1}s.$$
(11)
Here $`T_{F1}=\frac{1}{2\pi }`$ is the tension of a fundamental type IIA string (in string units). We therefore see that $`g_A(s)`$ is the reciprocal of a linear function whenever $`N\ne 0`$. Thus, for $`N\ne 0`$ it necessarily develops a pole if $`R^{\prime }`$ is too large.
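Solving eq. (11) for the profile gives $`g_A(s)=\left[g_A(0)^{-1}-NT_{F1}s\right]^{-1}`$, the reciprocal of a linear function; a small sketch (hypothetical values of $`g_A(0)`$ and $`N`$) tabulates it and locates the pole:

```python
import numpy as np

def g_A(s, g0, N):
    """String coupling profile from eq. (11): 1/g_A(s) = 1/g0 - N*T_F1*s."""
    T_F1 = 1.0 / (2.0 * np.pi)
    return 1.0 / (1.0 / g0 - N * T_F1 * s)

g0, N = 0.5, 2                    # hypothetical values
s_pole = 2.0 * np.pi / (g0 * N)   # proper distance at which 1/g_A vanishes
print(f"coupling diverges at s = {s_pole:.3f}")
for s in np.linspace(0.0, 0.9 * s_pole, 4):
    print(f"s = {s:.3f}:  g_A = {g_A(s, g0, N):.3f}")
```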
The mass $`M_{D0}`$ can also be computed in the type I picture in terms of a pair of wrapped D-strings with Wilson line. The mass is independent of the O(2) Wilson line, since it is independent of the $`X^{\prime }`$ coordinate. However, it does depend on the SO(32) Wilson line. Altogether the mass is a sum of two contributions:
$$M_{D0}=M_{\mathrm{winding}}+M_{\mathrm{Wilson}}.$$
(12)
The winding term contribution is given by simple classical considerations:
$$M_{\mathrm{winding}}=2\cdot 2\pi R\,T_{D1}=\frac{2R}{g}.$$
(13)
A more careful analysis is required to obtain the Wilson line contribution
$$M_{\mathrm{Wilson}}=\frac{N}{4R}.$$
(14)
Note that this contribution vanishes for large $`R`$.
We now come to the main point. There is a special value of $`R^{\prime }`$, the one for which the coupling diverges at the $`X^{\prime }=\pi R^{\prime }`$ orientifold plane. In this case
$$\frac{1}{g_A(\pi R^{\prime })}=0,$$
(15)
which implies, using eq. (11), that
$$M_{D0}=\frac{N}{2R}=\frac{2R}{g}+\frac{N}{4R},$$
(16)
and hence that
$$R^2=gN/8.$$
(17)
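A quick numerical consistency check (hypothetical $`g`$ and $`N`$) confirms that at this radius the string-creation mass of eq. (11), evaluated with $`1/g_A(\pi R^{\prime })=0`$, coincides with the wrapped D-string mass of eqs. (12)-(14):

```python
from math import pi, sqrt

g, N = 0.8, 4                  # hypothetical coupling and D8-brane asymmetry
R = sqrt(g * N / 8.0)          # the special radius of eq. (17)
R_prime = 1.0 / R
T_F1 = 1.0 / (2.0 * pi)

m_strings = N * T_F1 * pi * R_prime        # eq. (11) with 1/g_A(pi R') = 0
m_wrapped = 2.0 * R / g + N / (4.0 * R)    # winding (13) plus Wilson line (14)
print(m_strings, m_wrapped)                # both give 3.1622...: they agree
```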
This is precisely the value that we previously asserted gives the symmetry enhancement SO($`16-2N`$) $`\times `$ U(1) $`\to E_{9-N}`$. The reason that there is symmetry enhancement is that there are additional massless vectors with appropriate quantum numbers. They arise as the ground states of open strings connecting the D$`8`$-branes to a stuck D$`0`$-brane. This works because the stuck D$`0`$-brane is massless in this case, as a consequence of eq. (15). This accounts for all the extra gauge bosons when $`N>2`$. In the $`E_7`$ and $`E_8`$ cases, there are additional states attributable to a single bulk D0-brane near $`X^{\prime }=\pi R^{\prime }`$.
## 7 Conclusion
The study of supersymmetric theories has come a long way since Golfand’s pioneering work. I presume that he would be pleased.
## Acknowledgments
I am grateful to O. Bergman for very helpful discussions. This work was supported in part by the U.S. Dept. of Energy under Grant No. DE-FG03-92-ER40701.
# Rotating Nuclei at Extreme Conditions: Cranked Relativistic Mean Field Description
## 1 INTRODUCTION
Cranked relativistic mean field (CRMF) theory represents the extension of relativistic mean field (RMF) theory to the rotating frame and thus provides a natural framework for the description of rotating nuclei at high spin. Available experimental data on rotating nuclei at the extreme conditions of large deformation (superdeformation) and fast rotation in different mass regions allow one to test theoretical models (in our case CRMF theory) in physical situations where pairing correlations are expected to play no or only a minor role. This is an especially important point considering that, within CRMF theory, a consistent theoretical description of pairing correlations including fluctuations by number projection is still under development.
Thus a systematic study of SD bands within CRMF theory has been undertaken. Detailed investigations have been performed in the $`A\approx 140-150`$ and in the $`A\approx 60`$ mass regions. Experimental observables such as dynamic moments of inertia $`J^{(2)}`$, kinematic moments of inertia $`J^{(1)}`$ in the $`A\approx 60`$ mass region, absolute ($`Q_0`$) and relative ($`\mathrm{\Delta }Q_0`$) charge quadrupole moments, effective alignments $`i_{eff}`$ and the single-particle ordering in the SD minimum (derived from the analysis of effective alignments) have been confronted with the results of CRMF calculations without pairing. It was shown that this theory provides, in general, good agreement with available experimental data.
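For orientation, these observables follow from the measured $`\gamma `$-ray energies of a $`\Delta I=2`$ band through the standard cranking relations; a minimal sketch (with $`\hbar =1`$; conventions differ slightly between groups, and any input numbers would be hypothetical) could read:

```python
import numpy as np

def band_observables(E_gamma, I_initial):
    """Rotational observables of a Delta-I = 2 band (hbar = 1).

    E_gamma: transition energies (MeV), ordered by increasing spin;
    I_initial: spin of the state emitting the first transition.
    """
    E = np.asarray(E_gamma, dtype=float)
    I = I_initial + 2.0 * np.arange(len(E))   # spins of the decaying states
    omega = E / 2.0                           # rotational frequency
    J1 = (2.0 * I - 1.0) / E                  # kinematic moment of inertia
    J2 = 4.0 / np.diff(E)                     # dynamic moment of inertia
    return omega, J1, J2

def effective_alignment(omega_A, I_A, omega_B, I_B):
    """i_eff of band B relative to band A: spin difference at equal omega."""
    return I_B - np.interp(omega_B, omega_A, I_A)
```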
All these results give us strong confidence that CRMF theory can be a powerful tool both for the interpretation of experimental data and for the microscopic understanding of the behaviour of rotating nuclei at extreme conditions. Considerable disagreement with experiment has so far only been found in the case of the ’SD’ band in <sup>154</sup>Er. In the present article, we report on investigations of the structure of SD bands observed recently in <sup>153</sup>Ho with the aim of better understanding the origin of the discrepancies found in the <sup>154</sup>Er case.
## 2 The nuclei <sup>153</sup>Ho and <sup>154</sup>Er.
The nucleus <sup>153</sup>Ho. Three SD bands have been observed in <sup>153</sup>Ho . Their structure, as it follows from CRMF calculations with the NL1 force , is discussed below. Considering the large size of the SD shell gaps at $`Z=66`$ and $`N=86`$, which follows from the doubly magic nature of the <sup>152</sup>Dy SD core (conf. $`\pi 6^4\nu 7^2`$) , the occupation of neutron orbitals in the considered configurations is kept as in the <sup>152</sup>Dy SD core. Then the configurations based on different occupations of the proton orbitals by the 67th proton have been calculated. As a result, they are labelled by the proton orbital occupied above the $`Z=66`$ SD shell gap.
Band 1. This band undergoes a band crossing at a frequency $`\mathrm{\Omega }_x\approx 0.6`$ MeV, where a large increase in $`J^{(2)}`$ is observed (Fig. 1a). Such a crossing appears also in the lowest SD configuration obtained in CRMF calculations (solid line in Fig. 1a). It arises from the crossing between the $`\pi [530]1/2(r=+i)`$ and $`\pi [770]1/2(r=+i)`$ orbitals, which are the lowest proton orbitals above the $`Z=66`$ SD shell gap at different frequencies (see Fig. 4 in Ref. for single-routhian diagrams). For this configuration, the results of our calculations are in very good agreement with experiment at low rotational frequencies with respect to $`J^{(2)}`$ and $`i_{eff}`$ (see Fig. 1). However, compared with experiment the crossing is calculated at somewhat higher frequency and it is sharper. The latter feature is possibly due both to the deficiencies of the cranking model and to the fact that the calculations have been carried out as a function of rotational frequency rather than as a function of spin. The calculated gain in alignment at the crossing is very close to the measured one, and it would be in perfect agreement with experiment had the crossing been calculated at the experimental crossing frequency. The same interpretation of this band has also been obtained in cranked Woods-Saxon calculations at fixed deformation.
Band 2. According to the CRMF calculations, we can assign to this band the configuration $`\pi [530]1/2(r=-i)`$. Assuming this assignment, the experimental values of $`J^{(2)}`$ and $`i_{eff}`$ are reasonably well reproduced (see Fig. 1). This assignment corresponds to the one discussed in Ref. . Additional confirmation of the interpretation of bands 1 and 2 could be obtained by a precise measurement of the charge quadrupole moments $`Q_0`$ relative to the ones of the <sup>152</sup>Dy(1) band. According to the calculations, the occupation of the $`\pi [530]1/2(r=+i)`$, $`\pi [530]1/2(r=-i)`$ and $`\pi [770]1/2(r=+i)`$ orbitals leads to an increase of $`Q_0`$ by 0.60 $`e`$b, by 0.65 $`e`$b (both values are calculated at $`\mathrm{\Omega }_x=0.5`$ MeV) and by 1.15 $`e`$b (calculated at $`\mathrm{\Omega }_x=0.8`$ MeV), respectively.
Band 3. The features of this band are difficult to explain assuming that the changes of the physical observables with respect to the ones of the <sup>152</sup>Dy(1) band should be governed by an additional proton. At high rotational frequencies, the $`J^{(2)}`$ moment of inertia drops considerably below that of the <sup>152</sup>Dy(1) band. This drop is accompanied by a loss in effective alignment $`i_{eff}`$ of $`0.8\hbar `$ in the $`\mathrm{\Omega }_x=0.51-0.68`$ MeV range (Fig. 1). It was suggested in Ref. that the occupation of the $`\pi [523]7/2(r=-i)`$ orbital by the 67th proton could lead to such features. It seems that this interpretation can be ruled out, since the calculated loss of alignment of $`0.2\hbar `$ arises from the interaction between the $`(r=-i)`$ signatures of the $`\pi [523]7/2`$ and the $`\pi [530]1/2`$ orbitals. However, the configuration with the $`\pi [530]1/2(r=-i)`$ orbital occupied is assigned to band 2, which shows neither the increase in $`J^{(2)}`$ nor the increase in $`i_{eff}`$ expected from such an interaction. A consistent interpretation of this band within a pure single-particle picture is not found in the CRMF calculations either. For example, the effective alignment of the $`\pi [532]5/2(r=-i)`$ orbital located above the $`Z=66`$ SD shell gap (see Fig. 4 in Ref. ) is shown in Fig. 1b. The calculated $`J^{(2)}`$ moment of inertia of this configuration is very close to the one of the configuration assigned to band 2 (Fig. 1a). Although the results of the calculations are reasonably close to experiment at low frequencies, the loss of $`i_{eff}`$ and the drop in $`J^{(2)}`$ at higher frequencies are not reproduced.
The occupation of positive parity $`\pi [413]5/2`$, $`\pi [404]9/2`$ and $`\pi [411]3/2`$ orbitals located above the $`Z=66`$ SD shell gap (Fig. 4 in Ref. ) has also been considered. The strongest argument against the interpretation of the observed bands as based on these orbitals comes from the fact that these orbitals have a small signature splitting. Thus signature partner bands with small signature splitting should be observed if these orbitals are occupied.
The nucleus <sup>154</sup>Er. One band has been observed in <sup>154</sup>Er and it has been discussed as SD. Two specific features of this band are (i) the $`J^{(2)}`$ moment of inertia at high frequencies is much lower than the one of the <sup>152</sup>Dy(1) band (Fig. 1a), (ii) the effective alignment in the <sup>152</sup>Dy(1)/<sup>154</sup>Er(1) pair drops by $`2.1\hbar `$ in the frequency range $`\mathrm{\Omega }_x=0.37-0.65`$ MeV. These features strongly suggest that this band has a smaller number of high-$`N`$ orbitals occupied (and thus is less deformed) than the <sup>152</sup>Dy(1) band. Considering the available single-particle orbitals above the $`Z=66`$ SD shell gap and their impact on physical observables (as deduced from the analysis of <sup>153</sup>Ho), it is clear that this band cannot be described as a ’doubly magic <sup>152</sup>Dy core + 2 additional protons’ system. Indeed, the results of calculations for $`J^{(2)}`$ and $`i_{eff}`$ of the lowest SD configurations in this nucleus disagree considerably with experiment. The possibility that the observed band belongs to a highly-deformed triaxial minimum predicted in Ref. has also been checked. Such a minimum with $`Q_0\approx 10`$ $`e`$b and $`\gamma \approx 9^{\circ }`$ exists in CRMF calculations too, and it is lower in energy than the SD minimum at $`I<60\hbar `$. Fig. 1a shows the $`J^{(2)}`$ moment of inertia of one of the configurations ($`\pi 6^1\nu 6^4(+,1)`$) calculated in this minimum. Although there is still disagreement with experiment, the discrepancy is somewhat smaller than in the case of the SD configurations. However, it is difficult to present a specific configuration assignment for the observed band. Measurements of the transition quadrupole moment of this band would help to resolve the existing problem.
## 3 Conclusions
CRMF theory has been applied to the study of SD bands observed in <sup>153</sup>Ho. Bands 1 and 2 are reasonably well described, while it was difficult to obtain a consistent interpretation for band 3 in a pure single-particle picture. Based on these results it was concluded that the band observed in <sup>154</sup>Er and previously discussed as SD is very likely less deformed than the <sup>152</sup>Dy(1) band.
A.V.A. acknowledges support from the Alexander von Humboldt Foundation. This work is also supported in part by the Bundesministerium für Bildung und Forschung under the project 06 TM 875.
# Doping Induced Magnetization Plateaus
## Abstract
The low temperature magnetization process of antiferromagnetic spin-$`S`$ chains doped with mobile spin-$`(S-1/2)`$ carriers is studied in an exactly solvable model. For sufficiently high magnetic fields the system is in a metallic phase with a finite gap for magnetic excitations. In this phase, which exists for a large range of carrier concentrations $`x`$, the zero temperature magnetization is determined by $`x`$ alone. This leads to plateaus in the magnetization curve at a tunable fraction of the saturation magnetization. The critical behaviour at the edges of these plateaus is studied in detail.
preprint: ITP-UH-13/99
Synthesis of new magnetic materials and the availability of very high magnetic fields provide new possibilities to study the magnetization process of low-dimensional quantum spin systems. In particular, so-called spin liquids realized in quasi-one dimensional antiferromagnetic systems such as spin chains, spin ladders and exchange-alternating spin chains attract much interest at present due to the possible occurrence of magnetization plateaus associated with gapped excitations. In addition to the saturated magnetization $`M_s`$ such plateaus, i.e. regions where the magnetization does not depend on the magnetic field for sufficiently low temperatures, are admissible from topological considerations at certain fractions of $`M_s`$ depending on the value of the spin of the substance and the translational symmetry of the ground state. Necessary conditions for the occurrence of plateaus have been formulated by Oshikawa et al. employing a generalization of the Lieb-Schultz-Mattis theorem: for a spin-$`S`$ chain with a magnetic unit cell containing $`q`$ magnetic moments this feature can appear at rational values $`M`$ with integer $`q(S-M)`$. The existence of these phenomena in a variety of models has been established by numerical and analytical studies of various low-dimensional magnetic insulators including spin chains, spin ladders and systems with multi spin exchange or exchange anisotropies. Very recently, several experimental observations of such magnetization plateaus at non-zero $`M`$ have been reported.
A common feature in these systems is that the plateaus in the magnetization curves appear at certain simple fractions of the maximal value $`M_s`$ as a consequence of their topological origin. In this letter we report on a mechanism leading to gaps for magnetic excitations at magnetizations which can be controlled by suitable preparation of the sample, namely doping. We study this phenomenon in the framework of a recently introduced class of integrable models for doped Heisenberg chains which may be used as a basis for studies of certain features of doped transition metal oxides. Starting from the double-exchange model, a strong ferromagnetic Hund’s rule coupling between the spins of the itinerant $`e_g`$ electrons and localized quantum spins $`(S-1/2)`$ arising from the $`t_{2g}`$ electrons allows one to introduce an effective Hamiltonian on a restricted Hilbert space with maximally allowed spin $`S^{\prime }`$ on a given lattice site, i.e. $`S^{\prime }=S`$ if the electronic state on this site is occupied, or $`S^{\prime }=S-1/2`$ if there is no $`e_g`$ electron (denoted as a hole in the following). This derivation of a low energy Hamiltonian generalizes that of the $`t`$-$`J`$ model from the Hubbard model, which corresponds to the case of $`S=1/2`$, i.e. no localized spins. Numerical studies of the $`S\ge 1`$ variants of these models have been performed to gain a better understanding of experimental findings for the doped Haldane system Y<sub>2-x</sub>Ca<sub>x</sub>BaNiO<sub>5</sub> $`(S=1)`$ and manganese oxides such as La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> ($`S=2`$).
Below we consider integrable models of this type in one spatial dimension. Similar to the models obtained from the general procedure outlined above their Hamiltonians are of the form
$$\mathcal{H}^{(S)}=\sum _{n=1}^{L}\left\{\mathcal{X}_{n,n+1}^{(S)}+\mathcal{T}_{n,n+1}^{(S)}\right\}.$$
(1)
Here $`\mathcal{X}_{ij}^{(S)}`$ and $`\mathcal{T}_{ij}^{(S)}`$ describe the (antiferromagnetic) exchange and the hopping of the holes between sites $`i`$ and $`j`$ of the lattice, respectively. $`SU(2)`$ invariance of the model implies that they can be written as polynomials in the operator $`\mathbf{S}_i\cdot \mathbf{S}_j`$ (see e.g. ). The form of these polynomials is fixed in the integrable models. For example, in the $`S=1`$ case with possible relevance to the doped Nickel oxides the antiferromagnetic exchange terms are given in terms of bilinear and biquadratic Heisenberg couplings depending on the values $`S_{i,j}\in \{1/2,1\}`$ of the spins on sites $`i`$ and $`j`$:
$`\mathcal{X}_{ij}^{(1)}=\frac{1}{2}\left(\frac{1}{S_iS_j}\,\mathbf{S}_i\cdot \mathbf{S}_j-1+\delta _{S_iS_j,1}\left(1-(\mathbf{S}_i\cdot \mathbf{S}_j)^2\right)\right).`$
(Note that the undoped chain, i.e. $`S_i=1`$ for all $`i`$, is the integrable spin-$`1`$ Takhtajan-Babujian model while the completely doped chain is the spin-$`1/2`$ Heisenberg chain with bilinear exchange.) Similarly, the kinetic term of the integrable spin-$`1`$ model reads
$`\mathcal{T}_{ij}^{(1)}=\left(1-\delta _{S_i,S_j}\right)\mathcal{P}_{ij}\left(\mathbf{S}_i\cdot \mathbf{S}_j\right),`$
where $`\mathcal{P}_{ij}`$ is an operator permuting the states on sites $`i`$ and $`j`$, thereby allowing the spin-$`1/2`$ “holes” to propagate. The additional exchange term in this expression leads to different hopping amplitudes $`t(S_{ij})`$ depending on the total spin $`S_{ij}`$ of the participating sites, i.e. $`t(1/2)=-1`$ and $`t(3/2)=+1/2`$ for the possible values $`S_{ij}=1/2`$ and $`3/2`$, respectively. These amplitudes differ from the values proposed in Ref. for the doped Nickel oxide, namely $`t(1/2)=1/2`$, $`t(3/2)=1`$. This is one reason for the absence of a ferromagnetic phase in the integrable model (1) (see for a discussion of the other differences).
For general $`S`$ the integrable models are constructed from solutions of a Yang-Baxter equation and can be solved by means of the algebraic Bethe Ansatz . Their thermodynamical properties at finite temperature $`T`$ can be obtained from the solution of the thermodynamic Bethe Ansatz (TBA) equations, i.e. the following set of coupled nonlinear integral equations
$$\begin{array}{c}\epsilon _n(\xi )=Ts\ast \mathrm{ln}\left[1+\mathrm{e}^{\epsilon _{n-1}(\xi )/T}\right]\left[1+\mathrm{e}^{\epsilon _{n+1}(\xi )/T}\right]-2\pi \,\delta _{n,2S}\,s(\xi )-\delta _{n,1}\,Ts\ast \mathrm{ln}\left[1+\mathrm{e}^{-\kappa (\xi )/T}\right],\\ -\left[2\pi \,a_{2S}\ast s(\xi )+\mu \right]-Ts\ast \mathrm{ln}\left[1+\mathrm{e}^{-\epsilon _1(\xi )/T}\right]=\kappa (\xi )+TR\ast \mathrm{ln}\left[1+\mathrm{e}^{\kappa (\xi )/T}\right].\end{array}$$
(4)
Here $`(f\ast g)(\xi )`$ denotes a convolution in the space of rapidities $`\xi `$, $`a_n(\xi )=(2n/\pi )\left(4\xi ^2+n^2\right)^{-1}`$, $`s(\xi )=\left(2\mathrm{cosh}\pi \xi \right)^{-1}`$ and $`R=a_2(1+a_2)^{-1}`$. Eqs. (4) are to be solved subject to the condition $`\mathrm{lim}_{n\to \mathrm{\infty }}(\epsilon _n/n)=H`$ with the external magnetic field $`H`$, and $`\mu `$ is the chemical potential for the holes controlling their concentration. In terms of the functions $`\epsilon _n(\xi )`$ and $`\kappa (\xi )`$ the free energy of this system reads ($`E_0^{(S)}`$ is the ground state energy of the spin-$`S`$ Takhtajan-Babujian chain for $`H=0`$)
$$\frac{1}{L}F(T,H,\mu )=\frac{E_0^{(S)}}{L}-T\int d\xi \,s(\xi )\,\mathrm{ln}\left[1+\mathrm{e}^{-\epsilon _{2S}(\xi )/T}\right]-T\int d\xi \,(a_{2S}\ast s)(\xi )\,\mathrm{ln}\left[1+\mathrm{e}^{-\kappa (\xi )/T}\right].$$
(8)
The low temperature $`(H,T)`$ phase diagram for the spin-1 system has been obtained in Ref. ; qualitatively the same behaviour is found for general $`S\ge 1`$.
Here we study the properties of these systems in a magnetic field at fixed doping. For hole concentrations $`0<x<x_c(S)`$ (see Fig. 1) the low energy excitation spectrum of the system allows to identify four intermediate field phases (labelled A, B<sub>1</sub>, C and B<sub>2</sub> in Fig. 1 of Ref. ) for $`0<H<H_s`$ before the system is completely polarized for $`H>H_s`$. Of particular interest in the present context is the phase C: since the system is not ferromagnetically polarized one expects nontrivial excitations for both the charge and magnetic degrees of freedom. In the neighbouring phases B<sub>1,2</sub> these excitations are massless leading to an effective description of these phases in terms of a Tomonaga-Luttinger model. The analysis of the $`T=0`$ limit of the TBA eqs. (4) shows that in phase C only one of these modes is gapless . The resulting low-energy theory is that of a single mode with dispersion
$$\epsilon _{2S}(\xi )+\int _{-Q}^{Q}d\xi ^{\prime }\,K(\xi -\xi ^{\prime })\,\epsilon _{2S}(\xi ^{\prime })=\epsilon _{2S}^{(0)}(\xi )$$
(9)
where $`K(\xi )=2\sum _{k=1}^{2S-1}a_{2k}(\xi )`$, $`\epsilon _{2S}^{(0)}(\xi )=\left(2S-\frac{1}{2}\right)H-\mu -2\pi \sum _{k=1}^{2S}a_{2k-1}(\xi )`$ and $`Q`$ is a function of magnetic field and chemical potential through the condition $`\epsilon _{2S}(\pm Q)=0`$. The corresponding hole concentration $`x=\int _{-Q}^{Q}d\xi \,\sigma _{2S}(\xi )`$ is obtained from an equation for $`\sigma _{2S}`$ similar to (9) with the driving term replaced by $`\sigma _{2S}^{(0)}(\xi )=\sum _{k=1}^{2S}a_{2k-1}(\xi )`$. Further analysis of the zero temperature limit of the TBA eqs. (4) shows that the massless mode in this phase carries the charge degrees of freedom while all magnetic excitations are gapped, i.e. $`\kappa (\xi )<0`$ and $`\epsilon _{n\ne 2S}(\xi )>0`$ for all $`\xi `$. Hence, $`x`$ *and* the magnetization $`M_p=S-3x/2`$ are constant throughout this phase for fixed $`Q`$, i.e. $`\left(2S-\frac{1}{2}\right)H-\mu =\mathrm{const}`$. This implies plateaus in the magnetization curve $`M(H)`$ *below* the saturated value $`M_s=S-x/2`$ (see Fig. 2 for $`S=1`$). The end points of these plateaus are $`H_{c1}=2\mu `$ and
$$H_{c2}=2\mu +\frac{4}{S}+2\int _{-Q}^{Q}d\xi \,a_{2S-1}(\xi )\,\epsilon _{2S}(\xi ).$$
(10)
As $`H\to H_{c1,2}`$ from inside the plateau region the spin gap closes as $`\mathrm{\Delta }\propto \left|H-H_{c1,2}\right|`$.
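Eq. (9) is a linear integral equation on $`[-Q,Q]`$ and is straightforward to solve by direct discretization. The sketch below (for $`S=1`$; the value of $`Q`$ is an arbitrary test value, since in practice $`Q`$ is fixed by $`H`$ and $`\mu `$) computes the hole concentration $`x`$, the plateau magnetization $`M_p=S-3x/2`$ and, using the linearity of eq. (9) in the constant part of the driving term, the plateau value of $`(2S-\frac{1}{2})H-\mu `$:

```python
import numpy as np

def a(n, xi):
    return (2.0 * n / np.pi) / (4.0 * xi**2 + n**2)

def solve_eq9(Q, rhs, S=1, M=801):
    """Discretize f(xi) + int_{-Q}^{Q} K(xi - xi') f(xi') dxi' = rhs(xi)."""
    xi = np.linspace(-Q, Q, M)
    w = xi[1] - xi[0]          # simple uniform-weight quadrature
    K = sum(2.0 * a(2 * k, xi[:, None] - xi[None, :]) for k in range(1, 2 * S))
    return np.linalg.solve(np.eye(M) + w * K, rhs(xi)), w

S, Q = 1, 1.0                  # Q is a test value; it is fixed by H and mu

drive = lambda t: sum(a(2 * k - 1, t) for k in range(1, 2 * S + 1))
sigma, w = solve_eq9(Q, drive, S)
x = w * sigma.sum()            # hole concentration

# plateau value of (2S - 1/2)H - mu from the condition eps_{2S}(+-Q) = 0:
u, _ = solve_eq9(Q, lambda t: np.ones_like(t), S)
v, _ = solve_eq9(Q, lambda t: 2.0 * np.pi * drive(t), S)
print(f"x = {x:.4f}  M_p = {S - 1.5 * x:.4f}  (2S-1/2)H - mu = {v[-1] / u[-1]:.4f}")
```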
For finite temperatures the full set of TBA eqs. has to be solved to determine the magnetization curves. In a sufficiently strong magnetic field $`H\gg T`$, however, the energies $`\epsilon _{n>2S}`$ are gapped and can be eliminated from the TBA eqs. (4). For the doped $`S=1`$ chain this procedure leads to a coupled set of three nonlinear integral equations which are straightforward to solve by iteration. Choosing the chemical potential such that the hole concentration $`x=-\partial F/\partial \mu `$ is fixed, the magnetization and magnetic susceptibility can be obtained from the thermodynamical potential $`\mathrm{\Omega }(T,H,x)=F(T,H,\mu )+\mu x`$. In Fig. 3 we present the resulting data for various temperatures. They clearly show the formation of plateaus with decreasing temperature and the singular behaviour arising in the vicinity of the transitions into the spin gap phase, as expected from the analysis of the zero temperature phase diagram.
Remarkably, the nature of these singularities at the two critical end points $`H_{c1,2}`$ of the plateau is quite different: for the magnetic insulators discussed in the introduction the singular part of the magnetization near the plateaus has been predicted to show a square root behaviour due to the similarity of the transition to a commensurate-incommensurate transition. A reliable numerical verification of this prediction — even for an integrable model — is extremely difficult for transitions other than the one into the ferromagnetically polarized state. In the model considered here such difficulties arise for $`H\to H_{c1}`$ only: for $`H\to H_{c1}`$ from below the zero temperature magnetization shows a critical behaviour $`\propto \left(H_{c1}-H\right)^\alpha `$ consistent with the square root behaviour $`\alpha =1/2`$ within the numerical accuracy of our data. On the other hand, near $`H_{c2}`$ the magnetization depends linearly on the external field, i.e. $`M-M_p\propto H-H_{c2}`$ for $`H\to H_{c2}`$ from above. This difference in the critical behaviour is also evident in the temperature dependence of the magnetic susceptibility near $`H_{c1,2}`$, in particular $`\chi \to \mathrm{const}.`$ at the high-field end of the plateau (see Fig. 3(b)). Note that a similar $`T`$-dependence has been observed in experiments on certain spin-$`1/2`$ Heisenberg ladders.
This different singular behaviour is a consequence of the coupling between the two massless excitations present in the Tomonaga-Luttinger phases outside the interval $`H_{c1}<H<H_{c2}`$: without the magnetic field breaking the spin-$`SU(2)`$ these modes can usually be assigned to spin and charge excitations separately, based on their different symmetries. In an external field, however, this assignment leads to a coupling of the two sectors. In certain cases this interaction can be removed by allowing for mixing of the corresponding quantum numbers. Analysing the Bethe Ansatz wave functions we can determine the relation of the total charges $`Q_{1,2}`$ in the two gapless sectors to the physical quantum numbers, i.e. the number of holes and the $`z`$-component of the total spin. In the phase for $`H<H_{c1}`$ a change in magnetization at fixed doping affects the total charge in one of the sectors only. Neglecting the coupling to the other sector, bosonization then gives the familiar square root singularity of the magnetization at $`H\to H_{c1}`$. For $`H>H_{c2}`$, however, a change of magnetization requires $`\mathrm{\Delta }Q_1=-\mathrm{\Delta }Q_2`$ for fixed doping $`x`$. As the critical field $`H_{c2}`$ is approached from above, this fact together with the coupling between the two sectors leads to a dispersion $`\epsilon _1(k)\approx (v_F^2k^2/4\mathrm{\Delta })-\mathrm{\Delta }`$ of the “incommensurate” soft mode with $`\mathrm{\Delta }\propto \left(H-H_{c2}\right)^2`$ rather than the usual behaviour $`\mathrm{\Delta }\propto \left|H-H_{c2}\right|`$ for fermions. This field dependence immediately gives the linear $`H`$-dependence of the magnetization for $`H\to H_{c2}`$.
In summary, we have studied the properties of doped Heisenberg chains in a magnetic field in the framework of a Bethe Ansatz solvable model. We find plateaus in the magnetization curves at certain values $`M_p`$ of the magnetization. To our knowledge this is a novel feature in an integrable model, thereby providing the basis for more detailed studies aiming at a better understanding of the mechanism for the occurrence of these plateaus and the critical behaviour at their end points. $`M_p`$ can be continuously tuned by changing the concentration of carriers. This is fundamentally different from the fixed values of $`M_p`$ obtained from topological arguments in the magnetic insulators studied previously. An extension of such arguments to the case of doped chains might be possible within a *classical* treatment of the localized $`t_{2g}`$ spins in the double-exchange model: in this limit the ground state has an incommensurate magnetic structure with $`k_F`$ periodicity. However, unlike the models considered here this approach also leads to gapped charge excitations. An alternative attempt to describe the plateaus at tunable fractions of the maximal magnetization within a bosonized model relies precisely on the existence of a gapless charge mode. Similarly, the second feature in which the plateaus discussed here differ from the ones in quantum spin chains and ladders can be understood only as a consequence of the existence of a second massless mode: the relation of the physical quantum numbers to the conserved charges of the effective low energy theory together with the coupling of the gapless channels conspire to give the linear field dependence of the singular part of the magnetization observed near the critical point $`H_{c2}`$. Note that this mechanism does not restrict the critical exponents to the values $`1/2`$ and $`1`$ discussed in this letter.
Further studies of the low energy properties — in particular the analysis of the asymptotics of correlation functions as $`H_{c1,2}`$ are approached — in this solvable microscopic model will lead to new insights into the critical behaviour at the plateau transitions and possibly the related ones into Mott insulating phases of interacting particles. Furthermore, the phenomena reported in this letter may be verified in experimental studies of the magnetization process in the doped, effectively one-dimensional transition metal oxides mentioned above.
We thank D. C. Cabra, F. H. L. Eßler and A. M. Tsvelik for discussions. This work is supported in parts by the Deutsche Forschungsgemeinschaft under Grant No. Fr 737/2.
# The heat of atomization of sulfur trioxide, SO3 — a benchmark for computational thermochemistry
## I Introduction
Neither the sulfuric anhydride (SO<sub>3</sub>) molecule, nor its importance in atmospheric and industrial chemistry, requires any introduction to the chemist.
SO<sub>3</sub> displays somewhat unusual bonding. While it is often cited as a ‘hypervalent molecule’ in undergraduate inorganic chemistry textbooks, quantitative theories of chemical bonding such as atoms-in-molecules unequivocally show (see Ref. for a lucid review and discussion) that there are no grounds for invoking violation of the octet rule in SO<sub>3</sub> (or, for that matter, most second-row molecules), and that bonding in SO<sub>3</sub> is best seen as a combination of moderately polar $`\sigma `$ bonds with highly polar $`p_{\pi ,S},p_{\pi ,O}`$ bonds.
Previous experience with BF<sub>3</sub> and SiF<sub>4</sub> suggests that in molecules with several strong and very polar bonds, basis set convergence will be particularly slow. In addition, in a recent calibration study on the anharmonic force field of SO<sub>3</sub> it was found that the molecule represented a fairly extreme example of a phenomenon noted previously for second-row molecules — namely the great sensitivity of the SCF part of computed properties to the presence of so-called ‘inner polarization functions’, i.e. high-exponent $`d`$ and $`f`$ functions.
Very recently, Martin and de Oliveira published a standard protocol known as W2 (Weizmann-2) theory that was able to predict total atomization energies of a fairly wide variety of molecules (including SO<sub>2</sub>, which is relevant for this work) to better than 0.23 kcal/mol on average (0.18 kcal/mol for molecules dominated by a single reference configuration). Application of this method to SO<sub>3</sub> requires a CCSD (coupled cluster with all single and double excitations) calculation with 529 basis functions in the $`C_{2v}`$ nondegenerate subgroup, which was well beyond our available computational resources, particularly in terms of disk space.
Very recently, however, Schütz et al. developed a general implementation of integral-direct correlated methods that made possible, inter alia, CCSD calculations on basis sets this size on workstation computers. Consequently, we carried out a benchmark calculation on the heat of atomization of SO<sub>3</sub>, which is reported in the present work.
Having obtained the benchmark ab initio value, we will assess the performance of some less computationally demanding schemes. This includes W1 theory, which is much more cost-effective than W2 theory but performs much less well for second-row than for first-row compounds. From an analysis of the SO<sub>3</sub> results, we will derive a minor modification (denoted W1′ theory) which in effect largely removes this disadvantage.
## II Methods
Most electronic structure calculations were carried out using MOLPRO98.1 (with integral-direct code installed) running on a DEC Alpha 500/500 workstation at the Weizmann Institute of Science. Some additional calculations were carried out using GAUSSIAN 98 running on the same platform.
As in our previous work on SO<sub>2</sub>, the CCSD(T) electron correlation method, as implemented by Hampel et al., has been used throughout. The acronym stands for coupled cluster with all single and double substitutions augmented by a quasiperturbative account for triple excitations. From extensive studies (see for a review) this method is known to yield correlation energies very close to the exact $`n`$-particle solution within the given basis set as long as the Hartree-Fock determinant is a reasonably good zero-order reference wave function. None of the usual indicators ($`𝒯_1`$ diagnostic, largest excitation amplitudes, or natural orbital occupancies of first few HOMOs and LUMOs) suggest a significant departure from the single-reference regime. (For the record, $`𝒯_1`$=0.018 for SO<sub>3</sub>.)
Valence correlation basis sets are built upon the augmented correlation-consistent polarized $`n`$-tuple zeta (aug-cc-pV$`n`$Z, or AV$`n`$Z for short) basis sets of Dunning and coworkers. In this work, we have considered AVDZ, AVTZ, AVQZ, and AV5Z basis sets, with maximum angular momenta $`l`$=2 ($`d`$), 3 ($`f`$), 4 ($`g`$), and 5 ($`h`$), respectively. The effect of inner polarization was accounted for by adding ‘tight’ (high-exponent) $`d`$ and $`f`$ functions with exponents that follow even-tempered series $`\alpha \beta ^n`$, with $`\alpha `$ the tightest exponent of that angular momentum in the underlying basis set and $`\beta `$=2.5. Such basis sets are denoted AV$`n`$Z+d, AV$`n`$Z+2d, and AV$`n`$Z+2d1f. The largest basis set considered in the present work, AV5Z+2d1f, corresponds to $`[8s7p7d5f3g2h]`$ on sulfur and $`[7s6p5d4f3g2h]`$ on oxygen (148 and 127 contracted basis functions, respectively), adding up to 529 basis functions for the entire molecule. The CCSD calculation in this basis set was carried out using the newly implemented direct algorithm; all other CCSD and CCSD(T) calculations were done conventionally.
The effect of inner-shell correlation was considered at the CCSD(T) level using two specialized core correlation basis sets, namely the Martin-Taylor (MT) basis set used in previous work on SO<sub>2</sub>, and the somewhat more compact MTsmall basis set that is used in the W2 protocol for this purpose. Correlation from the sulfur ($`1s`$) orbital was not considered, since this lies too deep to meaningfully interact with the valence orbitals. Scalar relativistic effects were computed as expectation values of the first-order Darwin and mass-velocity corrections for the ACPF (averaged coupled pair functional) wave function with the abovementioned core correlation basis sets. (All electrons were correlated in these calculations since relativistic effects are most important for the electrons closest to the nucleus.)
The CCSD(T)/VQZ+1 reference geometry used throughout this work, $`r_{SO}`$=1.42279 Å, was taken from the earlier spectroscopic work on SO<sub>3</sub>, as was the anharmonic zero-point energy of 7.794 kcal/mol.
## III Results and discussion
The most striking feature of the basis set convergence at the SCF level (Table 1) is certainly the great importance of inner polarization functions: augmenting the AVDZ basis set with two tight functions on S has an effect of no less than 40.5 kcal/mol! The same operation affects the AVTZ SCF binding energy by 15.7 kcal/mol, and even from AVQZ to AVQZ+2d the effect is still 8.6 kcal/mol, probably the largest such effect hitherto observed. In addition augmenting the basis set by a tight $`f`$ function has an effect of 1.1 kcal/mol from AVTZ+2d to AVTZ+2d1f, but only 0.16 kcal/mol from AVQZ+2d to AVQZ+2d1f. Presumably the effect from AV5Z+2d to AV5Z+2d1f will be next to negligible.
Not surprisingly, this translates into a substantial effect on the extrapolated SCF limit. A geometric extrapolation from the AV{D,T,Q}Z results would yield 153.64 kcal/mol as the SCF limit, 6.3 kcal/mol less than the AV{T,Q,5}Z+2d1f limit employed in W2 theory. The AV{D,T,Q}Z+2d limit, on the other hand, is fairly close to the latter at 159.7 kcal/mol. (Our best SCF limit is 159.90 kcal/mol, of which the extrapolation accounts for 0.15 kcal/mol.)
This type of variability is almost completely absent for the correlation energy, where AV$`n`$Z, AV$`n`$Z+2d and AV$`n`$Z+2d1f largely yield the same answers. Following the W2 protocol, the CCSD correlation energy is extrapolated using the $`A+B/l^3`$ extrapolation formula of Halkier et al. to CCSD/AV{Q,5}Z+2d1f energies (for which $`l`$={4,5}). (For a fairly comprehensive review of theoretical and empirical arguments in favor of this type of extrapolation, see Ref. and references therein.) We thus obtain 165.94 kcal/mol as our best estimate for the CCSD correlation contribution to TAE. It should be noted that the extrapolation accounts for 3.2 kcal/mol of this amount: basis set convergence is indeed quite slow. We note that the largest direct CCSD calculation took a solid two weeks of CPU time on the DEC Alpha — a conventional calculation would have required about 60 GB of temporary disk space, as well as a much higher I/O bandwidth if a reasonable wall time to CPU time ratio were to be attained.
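For reference, this two-point extrapolation amounts to a one-line formula; a sketch (the input energies are hypothetical placeholders, not the actual SO<sub>3</sub> values):

```python
def extrapolate_l3(E_lo, E_hi, l_lo, l_hi):
    """CBS limit of E(l) = A + B / l**3 from two points with l_lo < l_hi."""
    return (l_hi**3 * E_hi - l_lo**3 * E_lo) / (l_hi**3 - l_lo**3)

# e.g. correlation contributions (kcal/mol) with l = 4 (AVQZ+2d1f) and
# l = 5 (AV5Z+2d1f); for l = {4,5} the limit is E5 + (E5 - E4) * 64/61.
E4, E5 = 159.5, 162.7   # hypothetical numbers
print(extrapolate_l3(E4, E5, 4, 5))
```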
As a general rule, the (T) contribution converges much more rapidly with basis set (besides being smaller to begin with) and therefore, we were able to dispense entirely with the CCSD(T)/AV5Z+2d1f calculation. From CCSD(T)/AV{T,Q}Z+2d1f results and the $`A+B/l^3`$ formula, we obtain a basis set limit for the (T) contribution of 20.17 kcal/mol, in which the extrapolation accounts for 0.57 kcal/mol. Together with the CCSD results, this adds up to a valence correlation contribution to TAE\[SO<sub>3</sub>\] of 186.11 kcal/mol, of which 3.75 kcal/mol is covered by extrapolations.
The inner-shell correlation contribution (Table 2), evaluated at the CCSD(T) level, was found to be 0.89 kcal/mol with the Martin-Taylor core correlation basis set and 0.96 kcal/mol with the somewhat more compact MTsmall basis set used in W2 theory. Bauschlicher and Ricca found that basis set superposition error significantly affects the inner-shell correlation contribution in SO<sub>2</sub>. It was evaluated here using the site-site counterpoise method; we thus found counterpoise-corrected core correlation contributions of 0.73 kcal/mol with the Martin-Taylor and 0.68 kcal/mol with the MTsmall basis sets.
Scalar relativistic effects were obtained as expectation values of the mass-velocity and Darwin operators for the ACPF (averaged coupled pair functional) wavefunction. Their effect on the computed TAE (with either core correlation basis set) is -1.71 kcal/mol, comparable to the -1.88 kcal/mol previously found for SiF<sub>4</sub>. Atomic spin-orbit splitting adds another -1.23 kcal/mol to the result. (These latter two terms together imply a relativistic contribution of -2.94 kcal/mol, or nearly 1% of the atomization energy.)
Finally, we obtain a W2 total atomization energy at the bottom of the well, TAE<sub>e</sub>, of 344.03 kcal/mol; using the BSSE-corrected inner shell correlation contribution, this value drops to 343.76 kcal/mol. In combination with the very accurate ZPVE=7.795 kcal/mol, we finally obtain, at absolute zero, TAE<sub>0</sub>=336.17 kcal/mol without, and 335.96 kcal/mol with, BSSE correction on the core correlation contribution. This latter value is in perfect agreement with the experimental TAE<sub>0</sub>=335.92$`\pm `$0.19 listed in the Gurvich compilation. We thus see once more the importance of including BSSE corrections for the inner-shell correlation part of TAE: it should be noted that while the inner-shell contribution to TAE is small, the S($`2s,2p`$);O($`1s`$) absolute correlation energy is comparable with the valence correlation energy in SO<sub>3</sub>. BSSE on the valence contribution is much less of an issue since the basis sets used for valence correlation are much more saturated to begin with, and furthermore the valence correlation energy is being extrapolated to the infinite-basis limit where it should vanish by definition.
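The bookkeeping behind these totals can be made explicit; the following lines simply re-add the individual contributions quoted in the text (all in kcal/mol; differences in the last digit reflect rounding of the inputs):

```python
scf        = 159.90   # extrapolated SCF limit
ccsd       = 165.94   # extrapolated valence CCSD correlation
triples    =  20.17   # extrapolated (T) contribution
core_bsse  =   0.68   # inner-shell correlation, counterpoise corrected
scalar_rel =  -1.71   # Darwin plus mass-velocity terms
spin_orbit =  -1.23   # atomic spin-orbit splitting
zpve       =   7.795  # anharmonic zero-point vibrational energy

tae_e = scf + ccsd + triples + core_bsse + scalar_rel + spin_orbit
print(f"TAE_e = {tae_e:.2f}  TAE_0 = {tae_e - zpve:.2f}")   # 343.75 and 335.96
```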
The performance of more approximate computational thermochemistry schemes is of some interest here (Table 3). G1 theory is in error by no less than -11.4 kcal/mol, which goes down to -6.9 kcal/mol for G2 theory and -5.45 kcal/mol for G3 theory. (Only the latter includes spin-orbit splitting as part of the protocol: none of these methods consider scalar relativistic effects.) G2(MP2) performs relatively well as a result of error compensation (-2.4 kcal/mol). The CBS-Q scheme underestimates the true binding energy by only 1 kcal/mol, while CBS-QB3 is only 0.2 kcal/mol above experiment. It should be noted that neither CBS-Q nor CBS-QB3 include relativistic effects of any kind as part of the standard protocol; therefore the excellent performance put in by these methods is to a large extent thanks to error compensation. Finally, the W1 theory of Martin and de Oliveira — which yields a mean absolute error of about 0.3 kcal/mol for a wide variety of compounds — has an error in TAE<sub>0</sub>\[SO<sub>3</sub>\] of -1.13 kcal/mol. (W1 theory includes both scalar relativistic and spin-orbit contributions.)
The largest calculations involved in the W1 protocol are CCSD/AVQZ+2d1f and CCSD(T)/AVTZ+2d1f, which is still rather more demanding than the steps in any of the G$`n`$ or CBS methods. Hence this performance is rather disappointing — a failure of W1 theory was also noted for SO<sub>2</sub> in the original paper. Balance considerations may lead us to wonder whether an AVTZ+2d1f basis set is not rather top-heavy on inner polarization functions. Using the AV$`n`$Z+2d series favored by Bauschlicher and coworkers (e.g.) indeed reduces the discrepancy with experiment by 0.55 kcal/mol (of which 0.20 kcal/mol in the SCF part). The alternative sequence {AVDZ+2d,AVTZ+2d,AVQZ+2d1f} yields even better agreement with experiment (and the more rigorous calculations): in fact, the final value thus obtained falls within the experimental error bar. Particularly encouraging is the fact that the predicted SCF limit is now within 0.04 kcal/mol of our best estimate. Preliminary calculations on other second-row systems suggest that this procedure, which we will label W1′ theory, may be preferable over standard W1 theory for second-row systems with strong inner shell polarization. (The two variants are equivalent for first-row compounds.)
As a test, we have taken three molecules for which W1 yields fairly large errors (CS, SO, and SO<sub>2</sub>) and repeated the calculation using W1′ theory. Deviations from experiment drop from -0.92, -0.62, and -1.01 kcal/mol, respectively, to -0.56, -0.32, and -0.02 kcal/mol, respectively, which is not qualitatively different from the vastly more expensive W2 calculations which yielded deviations of -0.51, +0.02, and +0.23 kcal/mol for these molecules. We conclude that W1′ theory indeed represents an improvement, and recommend it for future work on second-row systems instead of W1 theory.
## IV Conclusions
Benchmark ab initio calculations using direct coupled cluster methods predict the total atomization energy at 0 K of SO<sub>3</sub> to be 335.96 (observed 335.92$`\pm `$0.19) kcal/mol. The computed result includes extrapolation to the basis set limit (3.75 kcal/mol), relativistic effects (-2.94 kcal/mol), inner-shell correlation (0.68 kcal/mol after BSSE correction), and anharmonic zero-point energy (7.795 kcal/mol). Inner polarization functions make very large (40 kcal/mol with $`spd`$, 10 kcal/mol with $`spdfg`$ basis sets) contributions to the SCF part of the binding energy. The molecule presents an unusual hurdle for less computationally intensive theoretical thermochemistry methods and is proposed as a benchmark for them. A slight modification of W1 theory (W1′ theory) is also suggested, which appears to result in improved performance for second-row systems with strong inner-shell polarization effects.
###### Acknowledgements.
JM is a Yigal Allon Fellow, an Honorary Research Associate (“Onderzoeksleider in eremandaat”) of the National Science Foundation of Belgium (NFWO/FNRS), and the incumbent of the Helen and Milton A. Kimmelman Career Development Chair. He thanks Prof. Peter J. Knowles (Birmingham University, UK) for assistance with the installation of the direct coupled cluster code, and Dr. Charles W. Bauschlicher Jr. (NASA Ames Research Center, Moffett Field, CA) for critical reading of the manuscript prior to submission. This research was supported by the Minerva Foundation, Munich, Germany.
It has recently been established that $`CP^2`$ can be realised as a non-linear supersymmetric model as the result of constraining a linear supersymmetric model. Massless Goldstone bosons arise from the spontaneously broken generators of global symmetries. There is no extra symmetry for Goldstone bosons in supersymmetry. Instead the supersymmetry forces complexification of the scalars. This leads to an increased number of massless excitations in general, with complete doubling of the original number in some cases. Despite previously believed theorems to the contrary by Lerche and Shore, the $`CP^2`$ case was established as a counter-example. The key contribution leading to this possibility was that of Hughes and Polchinski, which showed that the original anticommutator for supersymmetric charges had to be generalised to include a central term at the underlying current density level. This is a direct result of the more modern viewpoint that supermembranes are just as fundamental as elementary particles in string theory. The key point seems to be that this is a case where the symmetry of the hamiltonian is larger than the symmetry of the $`S`$-matrix. When the anticommutator algebra for supersymmetric charges is generalised to the local form
$$\partial _\mu T\left(j_{A\alpha }^\mu (x)\overline{j}_{B\dot{\beta }}^\nu (y)\right)=2(\sigma ^\rho )_{\alpha \dot{\beta }}T_\rho ^\nu \delta ^4(x-y)\delta _{AB}+2(\sigma ^\nu )_{\alpha \dot{\beta }}C_{AB}\delta ^4(x-y)$$
(1)
the appearance of the central terms $`C_{AB}`$ is crucial. The authors take advantage of the fact that $`T^{\mu \nu }`$ is not the unique conserved symmetric tensor, since $`T^{\mu \nu }+C\eta ^{\mu \nu }`$ is also conserved. Thus equation (1) is clearly finite and Lorentz invariant, and from it follow the usual consequences of degenerate multiplets for unbroken supersymmetries and Goldstone fermions for those that are broken. In momentum space, with $`C_{AB}`$ diagonal and $`\langle T^{\mu \nu }\rangle =\mathrm{\Lambda }\eta ^{\mu \nu }`$, this gives
$$q_\mu \langle j_{A\alpha }^\mu (q)\overline{j}_{A\dot{\beta }}^\nu \rangle =2(\sigma ^\nu )_{\alpha \dot{\beta }}(\mathrm{\Lambda }+C_{AA})+O(q)$$
(2)
where there is no sum over $`A`$. For those $`A`$ such that $`\mathrm{\Lambda }+C_{AA}\ne 0`$, equation (2) implies a $`1/\not{q}`$ singularity in the two-current correlation; $`j_{A\alpha }^\mu `$ couples the vacuum to a massless fermion with coupling strength $`[2(\mathrm{\Lambda }+C_{AA})]^{1/2}`$, where $`\mathrm{\Lambda }+C_{AA}\ne 0`$. It is now clear how to evade the extra unwanted Goldstone bosons where the underlying coset manifold is indeed Kahler. The crucial point of extending the underlying algebra of supercharge current densities by central terms has to be combined not merely with a Kahler $`G/H`$, but that manifold has to be reexpressed as a quotient of the complexified $`G`$ (denoted $`G^C`$) by a maximally extended complexification of $`H`$ (denoted $`\widehat{H}`$). By following the elegant treatment of Itoh, Kugo and Kunitomo, this method will display an explicit mapping manifesting the homeomorphism between $`G/H`$ and $`G^C/\widehat{H}`$. Since the bosonic coset space for $`CP^2`$ is
$$\frac{G}{H}=\frac{SU_3}{SU_2\times U_1},$$
(3)
a convenient starting point for the appropriate notation is given by the original Gell-Mann matrices. Note that $`\lambda _8`$ and $`\lambda _3`$ are in the Cartan subalgebra, and that the raising operators are $`E_1=\frac{1}{2}(\lambda _1+i\lambda _2)`$, $`E_2=\frac{1}{2}(\lambda _4+i\lambda _5)`$ and $`E_3=\frac{1}{2}(\lambda _6+i\lambda _7)`$, with $`E_{-1}=E_1^{\dagger }`$, $`E_{-2}=E_2^{\dagger }`$ and $`E_{-3}=E_3^{\dagger }`$ as the lowering operators. It is clear that all the raising and lowering operators are nilpotent in this representation. This feature obviously extends to larger $`N`$ and ensures that constructing the Kahler potential is essentially immediate in all cases. Following reference , a projection operator $`\eta `$ with its only entry a one in the bottom right hand corner is defined by
$$\eta =\frac{1}{3}\,\mathbf{1}-\sqrt{\frac{1}{3}}\,\lambda _8,$$
(4)
and the complex subgroup $`\widehat{H}`$ specified by the relationship
$$\widehat{h}\eta =\eta \widehat{h}\eta .$$
(5)
This implies that the generators of $`\widehat{H}`$ are $`\lambda _8,\lambda _3,E_1,E_{-1},E_{-2}`$ and $`E_{-3}`$, and that $`E_2`$ and $`E_3`$ are the elements of the algebra spanning the (four real dimensional) coset $`G^C/\widehat{H}`$.
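All of these statements (the nilpotency of the raising and lowering operators, the projector property of $`\eta `$, and the selection of the $`\widehat{H}`$ generators) are mechanical to verify; a short numerical check, using the linearized form $`(1-\eta )T\eta =0`$ of eq. (5) for a generator $`T`$:

```python
import numpy as np

# Gell-Mann matrices needed here (basis vectors ordered 1, 2, 3)
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.diag([1.0, -1.0, 0.0]).astype(complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
l8 = np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3.0)

E1, E2, E3 = (l1 + 1j * l2) / 2, (l4 + 1j * l5) / 2, (l6 + 1j * l7) / 2

# nilpotency of the raising (and hence lowering) operators
print(all(np.allclose(E @ E, 0) for E in (E1, E2, E3)))        # True

# eta = (1/3) 1 - (1/3)^(1/2) lambda_8 projects on the bottom corner
eta = np.eye(3) / 3.0 - l8 / np.sqrt(3.0)
print(np.allclose(eta, np.diag([0, 0, 1.0])))                   # True

def in_H_hat(T):
    # linearized form of eq. (5) for a generator T: (1 - eta) T eta = 0
    return np.allclose((np.eye(3) - eta) @ T @ eta, 0)

gens = {"l3": l3, "l8": l8, "E1": E1, "E-1": E1.conj().T,
        "E2": E2, "E-2": E2.conj().T, "E3": E3, "E-3": E3.conj().T}
print({name: in_H_hat(T) for name, T in gens.items()})
# only E2 and E3 fail the test: they span the coset G^C/H-hat
```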
Extending the notation of reference , the original (unconstrained) supersymmetric action is constructed from nine (complex) chiral superfields. In components, with
$$y^m=x^m+i\theta \sigma ^m\overline{\theta },$$
(6)
these have the form
$`\mathrm{\Phi }(x,\theta ,\overline{\theta })`$ $`=`$ $`\varphi (y)+\sqrt{2}\theta \lambda _\varphi (y)+\theta ^2F_\varphi (y),`$ (7)
$`\mathrm{\Sigma }_8(x,\theta ,\overline{\theta })`$ $`=`$ $`\sigma _8(y)+\sqrt{2}\theta \lambda _8(y)+\theta ^2F_8(y),`$ (8)
$`\mathrm{\Sigma }_3(x,\theta ,\overline{\theta })`$ $`=`$ $`\sigma _3(y)+\sqrt{2}\theta \lambda _3(y)+\theta ^2F_3(y),`$ (9)
$`\mathrm{\Delta }_A(x,\theta ,\overline{\theta })`$ $`=`$ $`\delta _A(y)+\sqrt{2}\theta \mathrm{\Lambda }_A(y)+\theta ^2F_A^{\mathrm{\Delta }}(y),`$ (10)
where $`A=(1,-1)`$,
$`\mathrm{\Delta }_2(x,\theta ,\overline{\theta })`$ $`=`$ $`\delta _2(y)+\sqrt{2}\theta \mathrm{\Lambda }_2(y)+\theta ^2F_2^{\mathrm{\Delta }}(y),`$ (11)
$`\mathrm{\Delta }_3(x,\theta ,\overline{\theta })`$ $`=`$ $`\delta _3(y)+\sqrt{2}\theta \mathrm{\Lambda }_3(y)+\theta ^2F_3(y),`$ (12)
$`\mathrm{\Gamma }_\mu (x,\theta ,\overline{\theta })`$ $`=`$ $`\gamma _\mu (y)+\sqrt{2}\theta \mathrm{\Omega }_\mu (y)+\theta ^2F_\mu ^\mathrm{\Gamma }(y),`$ (13)
where $`\mu =(2,3)`$, $`\sigma ^m\equiv (1,\tau ^a)`$, and the $`\tau ^a`$ are the Pauli matrices $`(a=1,2,3)`$. The chiral superfields transform under $`SU_3`$ as indicated by the index structure, including $`\mathrm{\Phi }`$ which is a singlet. The most general supersymmetric action is then written as
$$\begin{array}{c}I=\int d^8z\left[\overline{\mathrm{\Phi }}\mathrm{\Phi }+\overline{\mathrm{\Sigma }}_8\mathrm{\Sigma }_8+\overline{\mathrm{\Sigma }}_3\mathrm{\Sigma }_3+\overline{\mathrm{\Delta }}_A\mathrm{\Delta }_A+\overline{\mathrm{\Delta }}_2\mathrm{\Delta }_2+\overline{\mathrm{\Delta }}_3\mathrm{\Delta }_3+\overline{\mathrm{\Gamma }}_\mu \mathrm{\Gamma }_\mu \right]\\ +\int d^6sW+\int d^6\overline{s}\overline{W}\end{array}$$
(14)
where the superpotential $`W`$ is a functional of chiral superfields only. Combining the eight non-singlet $`SU_3`$ superfields with their respective matrices into the matrix
$$M=\mathrm{\Sigma }_8\lambda _8+\cdots +\mathrm{\Gamma }_\mu E_\mu ,$$
(15)
reveals that, under chiral $`SU_3\times SU_3`$, $`M`$ transforms as
$$MLMR^{},$$
(16)
where the $`\gamma _5`$ structure is suppressed, and taking
$$W=k\mathrm{\Phi }\mathrm{det}M,$$
(17)
where $`k`$ is a constant, ensures that the model reduces to the usual bosonic (Kahler) model below the symmetry breaking scale. This starting action now yields the potential
$$\begin{array}{c}V=F_\varphi \overline{F}_\varphi +F_8\overline{F}_8+F_3\overline{F}_3+F_A\overline{F}_A+F_2\overline{F}_2+F_3\overline{F}_3+F_\mu ^\mathrm{\Gamma }\overline{F}_\mu ^\mathrm{\Gamma }\\ =4k^2\varphi \overline{\varphi }\left[\sigma _8\overline{\sigma }_8+\sigma _3\overline{\sigma _3}+\delta _A\overline{\delta }_A+\delta _2\overline{\delta }_2+\delta _3\overline{\delta }_3+\gamma _\mu \overline{\gamma }_\mu \right]\\ +k^2\left[\sigma _8^2+\sigma _3^2+\delta _A\delta _A+\gamma _3\delta _3+\gamma _\mu \gamma _{A+2}\right]\left[\overline{\sigma }_8^2+\overline{\sigma }_3^2+\overline{\delta }_A\overline{\delta }_A+\overline{\gamma }_3\overline{\delta }_3+\overline{\gamma }_\mu \overline{\gamma }_{A+2}\right]\end{array}.$$
(18)
In the formal limit as $`k\rightarrow \mathrm{\infty }`$, the action becomes
$$I=\int d^8z\frac{\mathrm{\Gamma }_\mu \overline{\mathrm{\Gamma }}_\mu }{4},$$
(19)
as the constraints are satisfied by the superfield conditions
$$\mathrm{\Sigma }_8=\mathrm{\Sigma }_3=\mathrm{\Delta }_A=\mathrm{\Delta }_2=\mathrm{\Delta }_3=0.$$
(20)
The superfield $`\mathrm{\Phi }`$ can again be ignored as a non-interacting spectator. Notice that the pair of complex superfields $`\mathrm{\Gamma }_\mu `$ are all that remain in the action, and they are not constrained.
In this notation the complex coset space is written in the form
$$L=\mathrm{exp}\left(\frac{i}{2}\gamma _2E_2\right)\mathrm{exp}\left(\frac{i}{2}\gamma _3E_3\right),$$
(21)
and this gives an explicit mapping of the homeomorphism between $`G/H`$ and $`G^c/\widehat{H}`$. Following references the Kahler potential is given by
$$K=\mathrm{ln}\underset{\eta }{\mathrm{det}}\left[\mathrm{exp}\left(\frac{-i\overline{\gamma }_3\overline{E}_3}{2}\right)\mathrm{exp}\left(\frac{-i\overline{\gamma }_2\overline{E}_2}{2}\right)\mathrm{exp}\left(\frac{i\gamma _2E_2}{2}\right)\mathrm{exp}\left(\frac{i\gamma _3E_3}{2}\right)\right],$$
(22)
where the notation indicates that the determinant is to be taken in the bottom right hand corner of the matrix in this representation. This reveals at once that
$$K=\mathrm{ln}\left[1+\frac{\gamma _\mu \overline{\gamma }_\mu }{4}\right],$$
(23)
which is the desired result. Notice how this presentation deals with the main objections which arose when it was claimed in reference that the generalisation directly to CPN was possible. It is not necessary to find special co-ordinates for the manifold in order to demonstrate that it is Kahler. The $`CPN`$ manifolds are already known to be Kahler. It is true that having a general co-ordinate system in the $`CP2`$ case was very useful from a descriptive viewpoint, but it is now clear that it was not really needed. Of course it was very convenient to use the nilpotency of $`\tau ^+`$ and $`\tau ^{}`$ in the $`CP2`$ case, but far from being restricted to that case it is now obvious that the number of nilpotent matrices rises with $`N`$. Finally, there was the well established feature that there is an increasing number of Kahler potentials with rising rank of $`G`$, and each introduces an extra arbitrary constant. Of course this current presentation just gives one particular combination, but as is always the case with counterexamples one is sufficient.
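As a closing numerical aside (my own check, not the author's), equation (23) can be confirmed directly: because $`E_2`$ and $`E_3`$ are nilpotent, the exponentials in (22) truncate after the linear term, and $`\mathrm{det}_\eta (L^{\dagger }L)`$ reduces to the $`(3,3)`$ entry.

```python
import numpy as np

# Nilpotent coset generators in the fundamental of SU(3).
E2 = np.zeros((3, 3), dtype=complex); E2[0, 2] = 1
E3 = np.zeros((3, 3), dtype=complex); E3[1, 2] = 1

rng = np.random.default_rng(0)
g2, g3 = rng.normal(size=2) + 1j * rng.normal(size=2)   # arbitrary gamma_2, gamma_3

# Nilpotency truncates exp(i gamma E / 2) after the linear term.
L = (np.eye(3) + 0.5j * g2 * E2) @ (np.eye(3) + 0.5j * g3 * E3)
M = L.conj().T @ L
K = np.log(M[2, 2].real)                # det over the 1x1 bottom-right block
K_expected = np.log(1 + (abs(g2)**2 + abs(g3)**2) / 4)
assert np.isclose(K, K_expected)        # reproduces equation (23)
```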
The author is grateful to Professor D.A.Ross for raising his interest in this type of work. This work is partly supported by PPARC grant number GR/L56329.
# Dynamics Forced by Surface Trellises
## 1. Introduction
Let $`f:\mathbb{R}^2\rightarrow \mathbb{R}^2`$ be a diffeomorphism. A fixed point $`p`$ of $`f`$ is a *hyperbolic fixed point* if the eigenvalues of $`Df(p)`$ have modulus $`\ne 1`$. By the Stable Manifold Theorem, the stable and unstable sets of $`p`$ are injectively immersed manifolds, and if $`p`$ is a saddle point, these manifolds are curves. If these curves intersect at a point $`q`$ distinct from $`p`$, there must be infinitely many intersections, and the stable and unstable curves then form a complicated set called a *homoclinic tangle*.
Homoclinic tangles have been studied extensively, dating back to Poincaré and Birkhoff. The main result, due in its modern form to Smale, is that a diffeomorphism with a transverse homoclinic point has a horseshoe in some iterate. While this has been generalised to topologically transverse intersections and quadratic tangencies, little progress has been made in determining more about the actual dynamics forced by a homoclinic tangle.
Since all interesting homoclinic tangles have infinitely many intersection points, we cannot compute them in practice. The purpose of this paper is to show that we can obtain interesting information about the dynamics of a system by considering a portion of a homoclinic tangle with only finitely many intersection points. We call these objects *trellises*.
We will consider systems on compact surfaces with boundary. Given a trellis for a system, we find lower bounds for the number of periodic orbits of a given period, and the location of these orbits in terms of the complement of the trellis. In many cases, we can find a finite type shift which gives a good symbolic description of the system. The growth rate of the number of periodic points is the same as the entropy of the shift, which is a lower bound for the topological entropy of $`f`$. All but finitely many periodic points of the shift are realised by the original map.
Since all the tools we use are topological, we do not need any differentiability requirements, and we can even weaken the hypothesis that $`f`$ is invertible. Further, the methods work equally well for heteroclinic tangles. We will refer to both homoclinic and heteroclinic tangles as *tangles*. Note that our terminology differs from that of Easton \[Eas86\], who uses the word trellis for what we call a tangle.
Algorithms exist for computing approximations to stable and unstable manifolds for surface diffeomorphisms. Since transverse intersections of these curves are persistent under perturbations, and trellises contain finitely many intersections, we can often compute trellises precisely. This allows us to obtain rigorous results about real systems. As an example, we find symbolic dynamics for the Hénon map $`(x,y)\mapsto (r-x^2+cy,x)`$ with parameter values $`c=\frac{4}{5}`$ and $`r=\frac{3}{2}`$, and show that it has topological entropy at least $`0.527`$.
In Section 2 we state the definitions and theorems from relative periodic point theory we need to study trellises. Proofs and further discussion of the results in this section can be found in \[Col\].
In Section 3 we give a formal definition of trellises, and details of the operations we need to study them. For a trellis $`T`$, we first cut along the unstable curves of $`T`$ to obtain a topological pair $`𝒞T`$ consisting of a surface and a subset corresponding to the stable curves. We then homotopy-retract $`𝒞T`$ onto a graph $`𝒢T`$. If $`T`$ is a trellis for a map $`f`$, then we obtain maps $`𝒞f`$ on $`𝒞T`$ and $`𝒢f`$ on $`𝒢T`$. We can then use Nielsen theory to show that periodic orbits for the graph map correspond to periodic orbits for the original map $`f`$. If the trellis $`T`$ has transverse intersections and is a subset of a tangle for a homeomorphism $`f`$, then the growth rate of the number of periodic points of $`f`$ so found is a lower bound for the topological entropy of $`f`$.
In Section 4 we give a number of examples showing how we can use these methods to obtain interesting results about the dynamics of maps.
## 2. Relative Periodic Point Theory
In this section we give, without proofs, a brief summary of the definitions and theorems for the relative fixed point theory developed in \[Col\]. The results are based on standard fixed point theory, a good introduction to which can be found in Brown \[Bro71\].
There are two basic types of theory, Lefschetz theory and Nielsen theory. Both these are homotopy-invariant, and allow for comparison of maps on different spaces. The Lefschetz theory finds periodic points by looking at cohomology actions on $`H^{}(X,Y)`$, and is most useful when no a priori information about periodic points is available. The computations involved are similar to those for the cohomological Conley index of Szymczak \[Szy95\], and were motivated by this theory, though some of the topology is complicated since our regions may not have disjoint closures. The Nielsen theory determines when two periodic points can bifurcate with each other. It is most useful when we can explicitly find periodic points for one map in a homotopy class, since we can then decide whether these points exist for other maps in the homotopy class. When studying trellises, the strongest results are obtained by applying Nielsen theory to maps of divided graphs.
Throughout this section, all topological spaces will be assumed to be compact absolute neighbourhood retracts. All cohomology groups will be taken over $`\mathbb{Q}`$.
### 2.1. Topological Pairs, Regions and Itineraries
In this section we define a number of terms which provide a framework for describing dynamics.
###### Definition 2.1 (Topological pairs).
A *topological pair* is a pair $`(X,Y)`$ where $`X`$ is a topological space and $`Y`$ is a closed subset of $`X`$. If $`(X,Y)`$ is a topological pair, we will write $`Y^C`$ for $`X\setminus Y`$, the complement of $`Y`$ in $`X`$.
A *map of pairs* $`f:(X_1,Y_1)\rightarrow (X_2,Y_2)`$ is a continuous function $`f:X_1\rightarrow X_2`$ such that $`f(Y_1)\subseteq Y_2`$. A map of pairs $`f:(X_1,Y_1)\rightarrow (X_2,Y_2)`$ is *exact* if $`f^{-1}(Y_2)\subseteq Y_1`$, or, equivalently, if $`f(Y_1^C)\subseteq Y_2^C`$.
###### Definition 2.2 (Homotopy).
Let $`f_0,f_1:(A,B)\rightarrow (X,Y)`$. A *homotopy* from $`f_0`$ to $`f_1`$ in the category of topological pairs is a family of maps $`f_t:(A,B)\rightarrow (X,Y)`$ for $`0\le t\le 1`$ such that the function $`F:A\times I\rightarrow X`$ defined by $`F(a,t)=f_t(a)`$ is continuous. We write $`f_t:f_0\simeq f_1`$ if $`f_0`$ is homotopic to $`f_1`$ via the homotopy $`f_t`$. $`\simeq `$ induces an equivalence relation on maps of pairs, and we write $`[f]`$ for the equivalence class of $`f`$.
A homotopy $`f_t`$ is a *strong homotopy* if $`f_t(a)=f_0(a)`$ whenever $`f_1(a)=f_0(a)`$, and an *exact homotopy* if each map $`f_t`$ is exact.
###### Definition 2.3 (Regions).
A *region* $`R`$ of a topological pair $`(X,Y)`$ is an open subset of $`X\setminus Y`$ such that $`R\cup Y`$ is closed in $`X`$. A *regional space* is a triple $`(X,Y;𝐑)`$ where $`(X,Y)`$ is a topological pair, and $`𝐑`$ is a set of mutually disjoint regions. Note that we do not require $`\bigcup 𝐑`$, the union of the regions in $`𝐑`$, to cover $`Y^C`$.
If $`(X_1,Y_1;𝐑_1)`$ and $`(X_2,Y_2;𝐑_2)`$ are regional spaces, a map $`f:(X_1,Y_1;𝐑_1)\rightarrow (X_2,Y_2;𝐑_2)`$ is *region-preserving* if there is a function $`f_𝐑:𝐑_1\rightarrow 𝐑_2`$ such that for all regions $`R_1\in 𝐑_1`$, $`f(R_1)\subseteq f_𝐑(R_1)`$, and for all regions $`R_2\in 𝐑_2`$, $`f^{-1}(R_2)\subseteq \bigcup 𝐑_1`$.
###### Definition 2.4 (Dynamical Systems).
A *dynamical system* on a regional space $`(X,Y;𝐑)`$ is a self-map $`f`$ of $`(X,Y)`$.
If $`f`$ and $`g`$ are dynamical systems on $`(X_1,Y_1;𝐑_1)`$ and $`(X_2,Y_2;𝐑_2)`$ respectively, a region-preserving map $`r:(X_1,Y_1;𝐑_1)\rightarrow (X_2,Y_2;𝐑_2)`$ is a *morphism* from $`f`$ onto $`g`$ if there is a map of pairs $`s:(X_2,Y_2)\rightarrow (X_1,Y_1)`$ such that $`rs\simeq id`$ and $`f\simeq sgr`$.
We interpret $`X`$ as the base space of the system, $`Y`$ as an invariant set on which the dynamics of $`f`$ is known, and $`𝐑`$ as the regions in which we are interested in finding symbolic dynamics. We will see that if there is a morphism from $`f`$ onto $`g`$, then the symbolic dynamics we can compute for $`f`$ are more complicated than those for $`g`$.
###### Definition 2.5 (Itineraries and Codes).
Let $`f`$ be a dynamical system on $`(X,Y;𝐑)`$. A sequence $`R_0R_1R_2\cdots `$ of regions in $`𝐑`$ is an *itinerary* for $`x\in X`$ if $`f^i(x)\in R_i`$ for all $`i`$.
Let $`\mathrm{Per}_n(f)`$ be the set of fixed points of $`f^n`$ (that is, the set of points of not necessarily least period $`n`$). A word $`𝒲=R_0R_1\cdots R_{n-1}`$ on $`𝐑`$ is a *code* for $`x\in \mathrm{Per}_n(f)`$ if $`f^i(x)\in R_i`$ for $`0\le i<n`$. We write $`\mathrm{Per}_𝒲(f)`$ for the set of periodic points with code $`𝒲`$, and $`\mathrm{Per}_{𝐑,n}(f)`$ for the set of points with codes in $`𝐑`$ of length $`n`$.
Notice that the itinerary is not defined for points which leave $`𝐑`$, but since regions are disjoint, it is unique where defined.
### 2.2. Relative Lefschetz Theory
Since $`X`$ and $`Y`$ are ANRs, we can use the strong excision property to define a *cohomology projection*.
###### Definition 2.6 (Cohomology projection).
Let $`R`$ be a region of $`(X,Y)`$. Let $`j_1:(R\cup Y,Y)\rightarrow (X,Y)`$, $`j_2:(X,Y)\rightarrow (X,X\setminus R)`$ and $`j_3:(R\cup Y,Y)\rightarrow (X,X\setminus R)`$ be inclusions. $`j_3`$ is (weakly) excisive, so induces isomorphisms on cohomology. The *cohomology projection onto $`R`$* is $`\pi _R^{*}=j_2^{*}(j_3^{*})^{-1}j_1^{*}`$.
Using the cohomology projection, we can restrict the cohomology action of a dynamical system $`f`$ on $`(X,Y;𝐑)`$ to each region. Given a word $`𝒲`$ on $`𝐑`$, we can obtain a kind of restricted cohomology action of $`f^n`$.
###### Definition 2.7.
Let $`f`$ be a semidynamical system on $`(X,Y;𝐑)`$. For all $`R\in 𝐑`$, let $`f_R^{*}=\pi _R^{*}f^{*}`$. For all words $`𝒲=R_0R_1\cdots R_{n-1}`$ on $`𝐑`$ of length $`n`$, let $`f_𝒲^{*}=f_{R_0}^{*}f_{R_1}^{*}\cdots f_{R_{n-1}}^{*}`$.
The Lefschetz number of $`f_𝒲^{*}`$ is defined as follows.
###### Definition 2.8 (Lefschetz Number).
The *Lefschetz number* of $`f_𝒲^{*}`$ is $`L(f_𝒲^{*})=\sum _{i=0}^{\mathrm{\infty }}(-1)^i\mathrm{Tr}(f_𝒲^{*(i)})`$.
Using this, we can deduce the existence of periodic points with a given code.
###### Theorem 2.9 (Relative Lefschetz Theorem).
Let $`f`$ be a semidynamical system on $`(X,Y;𝐑)`$. Suppose $`𝒲`$ is a word of length $`n`$ on $`𝐑`$, and $`L(f_𝒲^{*})\ne 0`$. Then there is a period-$`n`$ point $`x`$ such that $`x`$ is the limit of a sequence $`(x_i)`$ such that $`f^j(x_i)\in R_{j\mathrm{mod}n}`$ for all $`j<i`$.
We write $`\widehat{\mathrm{Per}}_𝒲(f)`$ for the set of periodic points defined above. Note that if $`x\in \widehat{\mathrm{Per}}_𝒲(f)`$, then $`f^j(x)\in \mathrm{cl}(R_{j\mathrm{mod}n})`$ for all $`j`$. We give a result showing how we can compare systems on different spaces.
###### Theorem 2.10.
Let $`f`$ and $`g`$ be dynamical systems on $`(X_1,Y_1;𝐑_1)`$ and $`(X_2,Y_2;𝐑_2)`$ respectively, and $`r`$ a morphism from $`f`$ onto $`g`$. Then
$$\underset{𝒲_1\in r_𝐑^{-1}(𝒲_2)}{\sum }L(f_{𝒲_1}^{*})=L(g_{𝒲_2}^{*})$$
### 2.3. Relative Nielsen Theory
Throughout this section, by *curve* we mean a map $`\alpha :(I,J)\rightarrow (X,Y)`$, where $`I`$ is the unit interval. All homotopies of curves will be relative to endpoints, and we write $`\alpha _0\simeq \alpha _1`$ if $`\alpha _0`$ and $`\alpha _1`$ are homotopic $`\mathrm{rel}`$ endpoints.
Let $`f`$ be a dynamical system on $`(X,Y;𝐑)`$, and let $`n`$ be a positive integer.
###### Definition 2.11.
Suppose $`x_1,x_2\in \mathrm{Per}_n(f)`$. We say $`x_1`$ is *Nielsen equivalent* to $`x_2`$, denoted $`x_1\simeq _fx_2`$, if there is a subset $`J`$ of $`I`$ and exact curves $`\alpha _j:(I,J)\rightarrow (X,Y)`$ from $`f^j(x_1)`$ to $`f^j(x_2)`$ for $`j=0,\ldots ,n-1`$ such that $`\alpha _{j+1\mathrm{mod}n}\simeq f\alpha _j`$ for all $`j`$. The family $`(\alpha _j)`$ is a *relating family*.
If $`x\in \mathrm{Per}_n(f)`$, then $`x`$ is *Nielsen related to $`Y`$*, denoted $`x\simeq _fY`$, if there is a relating family $`(\alpha _j)`$ for $`x\simeq _fx`$ consisting of exact curves $`(I,J)\rightarrow (X,Y)`$ for which $`J\ne \mathrm{\emptyset }`$. If $`x\not\simeq _fY`$, then we say $`x`$ is *Nielsen separated from $`Y`$*.
Clearly $`\simeq _f`$ is an equivalence relation. Equivalence classes of $`\mathrm{Per}_n(f)`$ are called *$`n`$-Nielsen classes*. We will drop the subscript $`f`$ where this will cause no confusion.
We have the following important lemma.
###### Lemma 2.12.
If $`x_1\simeq x_2`$, then $`x_1`$ is Nielsen related to $`Y`$ if and only if $`x_2`$ is Nielsen related to $`Y`$. If $`x_1\simeq x_2`$, and $`x_1\in \mathrm{Per}_𝒲(f)`$, then $`x_2\in \mathrm{Per}_𝒲(f)`$ or $`x_1,x_2\in Y`$.
We can therefore speak of a Nielsen *class* $`Q`$ being *Nielsen related to $`Y`$* or *Nielsen separated from $`Y`$*. If $`Q`$ is Nielsen separated from $`Y`$, then all points of $`Q`$ have the same code, which we call the *code for $`Q`$*. We let $`N_𝒲(f)`$ be the number of essential Nielsen classes with code $`𝒲`$, and $`N_n(f)`$ the number of Nielsen classes with codes $`𝒲`$ of length $`n`$.
###### Theorem 2.13.
Suppose $`Q`$ is a Nielsen class of $`f`$. Then $`Q`$ is open in $`\mathrm{Per}_n(f)`$.
We can therefore define the index of a Nielsen class $`Q`$, denoted $`\mathrm{Ind}(X,Q;f)`$ or simply $`\mathrm{Ind}(Q)`$ to be the Lefschetz index $`\mathrm{Ind}(X,U;f)`$, where $`U`$ is an open neighbourhood of $`Q`$ containing no other fixed points in its closure.
###### Definition 2.14 (Essential Nielsen class).
A Nielsen class $`Q`$ is *essential* if $`\mathrm{Ind}(X,Q;f)\ne 0`$.
We let $`N_n(f)`$ be the number of essential Nielsen classes separated from $`Y`$. We let $`\overline{N}_n(f)`$ be the total number of essential Nielsen classes, and $`N_n^Y(f)`$ the number of Nielsen classes related to $`Y`$. $`N_n^Y(f)`$ may be greater or less than the number of Nielsen classes of $`f|_Y`$.
The following result is a localisation result for Nielsen theory.
###### Theorem 2.15.
Suppose $`f`$ and $`g`$ agree on $`𝐑`$. Then $`N_𝒲(f)=N_𝒲(g)`$ for all words $`𝒲`$ on $`𝐑`$.
If there is a morphism from $`f`$ to $`g`$, then $`f`$ has more Nielsen classes than $`g`$ in the following sense.
###### Theorem 2.16.
Let $`f`$ and $`g`$ be dynamical systems on $`(X_1,Y_1;𝐑_1)`$ and $`(X_2,Y_2;𝐑_2)`$ respectively, and $`r`$ a morphism from $`f`$ onto $`g`$. Then
$$\underset{𝒲_1\in r_𝐑^{-1}(𝒲_2)}{\sum }N_{𝒲_1}(f)\ge N_{𝒲_2}(g)$$
We have the following trivial corollary.
###### Corollary 2.17.
If $`g`$ is homotopic to $`f`$, then $`N_𝒲(g)=N_𝒲(f)`$ for all words $`𝒲`$, and $`g`$ has at least $`N_n(f)`$ points of period $`n`$.
### 2.4. Entropy
There are several ways of defining topological entropy. We will use the following definition based on $`(𝒰,n,f)`$-separated sets.
###### Definition 2.18 (Topological entropy).
Let $`𝒰`$ be an open cover of $`X`$. Points $`x_1,x_2\in X`$ are *$`(𝒰,n,f)`$-close* if for all $`i<n`$ there exist $`U_i\in 𝒰`$ such that $`f^i(x_1),f^i(x_2)\in U_i`$. Points $`x_1,x_2`$ *$`(𝒰,f)`$-shadow* each other if they are $`(𝒰,n,f)`$-close for all $`n`$.
A set $`S`$ is $`(𝒰,n,f)`$-separated if no two points of $`S`$ are $`(𝒰,n,f)`$-close. Let $`s(𝒰,n,f)`$ be the maximum cardinality of a $`(𝒰,n,f)`$-separated set. Then the *topological entropy* of $`f`$, written $`h_{\mathrm{𝑡𝑜𝑝}}(f)`$, is given by
$$h_{\mathrm{𝑡𝑜𝑝}}(f)=\underset{𝒰}{sup}\underset{n\rightarrow \mathrm{\infty }}{lim\; sup}\frac{\mathrm{log}s(𝒰,n,f)}{n}$$
We have a classical result that $`h_{\mathrm{𝑡𝑜𝑝}}(f)\ge lim\; sup_{n\rightarrow \mathrm{\infty }}\frac{\mathrm{log}N(f^n)}{n}=N_{\mathrm{\infty }}(f)`$. (See Katok and Hasselblatt \[KH95\]). In other words the growth rate of the number of essential fixed-point classes of $`f^n`$ is a lower bound for the topological entropy of $`f`$.
For the relative case, we define the *asymptotic Nielsen number* $`N_{\mathrm{\infty }}(f)=lim\; sup_{n\rightarrow \mathrm{\infty }}\frac{\mathrm{log}N_n(f)}{n}`$. We would like to show again that $`h_{\mathrm{𝑡𝑜𝑝}}(f)\ge N_{\mathrm{\infty }}(f)`$. Unfortunately, problems can occur near $`Y`$, so we introduce an additional hypothesis.
###### Definition 2.19 (Expansive periodicity near $`Y`$).
Let $`f`$ be a dynamical system on a regional space $`(X,Y;𝐑)`$. We say $`f`$ has expansive periodicity near $`Y`$ if there is a neighbourhood $`U_0`$ of $`Y`$ and an open cover $`𝒰`$ of $`X`$ such that whenever $`x_1,x_2\in \mathrm{Per}_{𝐑,n}(f)`$ are Nielsen separated from $`Y`$, then either $`f^i(x_1)`$ and $`f^i(x_2)`$ are $`𝒰`$-separated for some $`i`$, or every curve from $`x_1`$ to $`x_2`$ in $`U_0`$ is homotopic to a curve from $`x_1`$ to $`x_2`$ which does not intersect $`Y`$.
We can show that expansive periodicity near $`Y`$ is enough to show that the topological entropy is at least the asymptotic Nielsen number.
###### Theorem 2.20.
Let $`f`$ be a dynamical system on $`(X,Y;𝐑)`$ with expansive periodicity near $`Y`$. Then $`h_{\mathrm{𝑡𝑜𝑝}}(f)\ge N_{\mathrm{\infty }}(f)`$.
## 3. Trellises
We now give a formal definition of trellises and two important classes of topological pairs. We also describe some important operations on these objects.
### 3.1. Trellises
###### Definition 3.1 (Trellis).
A trellis $`T`$ in a surface with boundary $`M`$ is a collection $`(T^P,T^V,T^U,T^S)`$ of subsets of $`M\setminus \partial M`$ with the following properties.
1. $`T^P`$ is finite.
2. $`T^U`$ and $`T^S`$ are embedded copies of $`T^P\times I`$ such that each component of $`T^U`$ and of $`T^S`$ contains exactly one point of $`T^P`$.
3. $`T^V=T^U\cap T^S`$ is finite.
We write $`T=(T^P,T^V,T^U,T^S)`$.
We will write $`U/S`$ for a statement which holds for both the stable ($`S`$) and unstable ($`U`$) case. A trellis is *transverse* if intersections of $`T^S`$ and $`T^U`$ are topologically transverse.
###### Definition 3.2 (Segments).
A *segment* is an interval in $`T^U`$ or $`T^S`$. Segments may be open or closed subsets of $`T^{U/S}`$, or neither. If $`q_1`$ and $`q_2`$ lie in the same component of $`T^{U/S}`$, we have an *open segment* $`T^{U/S}(q_1,q_2)`$ and a *closed segment* $`T^{U/S}[q_1,q_2]`$ between $`q_1`$ and $`q_2`$.
An *initial segment* has endpoints $`p`$ and $`q`$ where $`p\in T^P`$. A *minimal segment* has endpoints $`q_1,q_2\in T^V`$, and $`T^{U/S}(q_1,q_2)`$ contains no vertices. A *maximal segment* has endpoints $`q_1,q_2\in T^V`$, such that $`T^{U/S}[q_1,q_2]`$ contains all vertices in that component of $`T^{U/S}`$. The *ends* of $`T^{U/S}`$ are the subsets of $`T^{U/S}`$ not contained in any maximal segment.
For our purposes, only the maximal segments of $`T^{U/S}`$ are important, and so we will sometimes remove the ends of $`T^{U/S}`$ without explicitly mentioning this.
We now define a natural class of maps between trellises:
###### Definition 3.3 (Trellis Maps).
If $`T_1`$ is a trellis in $`M_1`$, $`T_2`$ is a trellis in $`M_2`$ and $`h:M_1\rightarrow M_2`$, we say $`h`$ is a *trellis map* from $`T_1`$ to $`T_2`$ if
1. $`h`$ maps $`T_1^P`$ bijectively with $`T_2^P`$.
2. $`h(T_1^S)\subseteq T_2^S`$.
3. $`h^{-1}(T_2^U)\subseteq T_1^U`$.
Two trellis maps $`f_0,f_1`$ from $`T_1`$ to $`T_2`$ are *homotopic* if there is a homotopy $`f_t:f_0\simeq f_1`$ such that each $`f_t`$ is a trellis map.
The most important trellis maps are those from a trellis $`T`$ to itself. If $`f:M\rightarrow M`$ is such a trellis map, we say *$`T`$ is a trellis for $`f`$*. Clearly, if $`f`$ is a diffeomorphism with saddle periodic points $`T^P`$, and stable and unstable curves $`T^S`$ and $`T^U`$ with intersection $`T^V`$, then $`(T^P,T^V,T^U,T^S)`$ is a trellis for $`f`$.
We use the more general definition of trellis map to keep a formalism for comparing trellis maps for different trellises; in particular, we have a category of trellises and trellis maps.
### 3.2. Combinatorics of trellises
Often the best way of describing a trellis is simply to draw it. However, it is also useful to have a combinatorial way of describing it. We shall only consider the simplest case, namely that of a trellis for a homoclinic tangle on a sphere with transverse intersections. In this case, $`T=(T^P,T^V,T^U,T^S)`$, where $`T^P`$ is a one-point set $`\{p\}`$, and $`T^U`$ and $`T^S`$ are embedded intervals. We need to choose orientations for $`T^U`$ and $`T^S`$.
We now assign coordinates to each point of $`T^V`$. The *unstable coordinate* of $`q\in T^V`$, denoted $`n_U(q)`$, is $`n`$ if $`q`$ is the $`n^{\mathrm{th}}`$ point of $`T^V`$ in the positive direction from $`p`$ along $`T^U`$, or $`-n`$ if $`q`$ is the $`n^{\mathrm{th}}`$ point of $`T^V`$ in the negative direction from $`p`$. We define the *stable coordinate* $`n_S(q)`$ in a similar way.
Merely giving the unstable and stable coordinates of points of $`T^V`$ is not enough to give a good description of a trellis. We also need to specify the *orientation* of the crossing of $`T^U`$ with $`T^S`$.
The orientation at $`q`$, written $`𝒪(q)`$ is positive ($`+`$) if $`T^U`$ and $`T^S`$ intersect with the same orientation as they do at $`p`$, and negative ($``$) if they intersect with the opposite orientation.
We can define a trellis up to ambient isomorphism just by giving $`(n_U,n_S,𝒪)`$ for all points $`qT^V`$. This description will be called the $`(U,S,𝒪)`$-coordinate description of $`T`$.
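One possible encoding of this description in code (a sketch of mine, not from the paper): each vertex becomes a triple $`(n_U,n_S,𝒪)`$. The sample data below is the $`(U,S,𝒪)`$-coordinate description of the horseshoe trellis of Example 4.1.

```python
# (n_U, n_S, orientation) triples for the horseshoe trellis T_2 (Example 4.1).
horseshoe_trellis = [
    (0, 0, '+'), (1, 7, '-'), (2, 4, '+'), (3, 3, '-'),
    (4, 2, '+'), (5, 5, '-'), (6, 6, '+'), (7, 1, '-'),
]

def stable_order(trellis):
    """Return the vertices sorted along the stable curve T^S."""
    return sorted(trellis, key=lambda v: v[1])

print(stable_order(horseshoe_trellis))
```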
### 3.3. Cutting
Suppose $`f:M\rightarrow M`$ has trellis $`T`$. We would like to obtain a map of pairs from $`f`$ which captures the action of $`f`$ on $`T`$. The process by which we do this is *cutting* along the unstable curves $`T^U`$.
###### Definition 3.4 (Cutting).
Let $`M`$ be a surface. An embedded curve $`\alpha `$ is a *cutting curve* if $`\alpha \cap \partial M\subseteq \partial \alpha `$. A finite set of mutually disjoint cutting curves is a *cutting set*.
A surface $`𝒞_\alpha M`$ is obtained by *cutting $`M`$ along $`\alpha `$* if there are curves $`\alpha _1,\alpha _2:I\rightarrow 𝒞_\alpha M`$ in the boundary of $`𝒞_\alpha M`$ which are disjoint except that we allow $`\alpha _1(0)=\alpha _2(0)`$ or $`\alpha _1(1)=\alpha _2(1)`$ (or both), and a map $`q_\alpha :𝒞_\alpha M\rightarrow M`$ such that $`q_\alpha `$ is the quotient map for the relation $`\alpha _1(t)\sim \alpha _2(t)`$, and $`\alpha (t)=q_\alpha (\alpha _1(t))=q_\alpha (\alpha _2(t))`$. The quotient map $`q_\alpha `$ is called the *gluing map*.
If $`A`$ is a cutting set, we can cut along all curves simultaneously to obtain a surface $`𝒞_AM`$ and gluing map $`q_A`$.
It is a straightforward, though messy, exercise to show that cutting surfaces are unique up to homeomorphism. Cutting is shown pictorially in Figure 1.
The gluing map takes $`𝒞_AM\setminus q_A^{-1}(A)`$ homeomorphically onto $`M\setminus A`$. If $`x\in A`$ then typically $`x`$ has two preimages under $`q_A`$, and a neighbourhood $`U`$ such that $`q_A^{-1}(U)`$ is homeomorphic to two disjoint copies of the upper-half plane $`H`$ (and $`q_A^{-1}(x)`$ lies on the boundaries of these half-planes). However, if for some arc $`\alpha `$, $`x\in \partial \alpha \cap \partial M`$, then $`x`$ has a neighbourhood $`U`$ such that $`q_A^{-1}(U)`$ is homeomorphic to a single half-plane.
We extend cutting to topological pairs as follows.
###### Definition 3.5.
If $`(M,B)`$ is a topological pair, and $`A`$ is a collection of cutting curves, then $`𝒞_A(M,B)`$ is the pair $`(𝒞_AM,q_A^{-1}(B))`$.
Given a function $`f:M_1\rightarrow M_2`$, and cutting sets $`A_1`$ for $`M_1`$ and $`A_2`$ for $`M_2`$, we would like to know when we can find a map $`𝒞f:𝒞_{A_1}M_1\rightarrow 𝒞_{A_2}M_2`$ such that $`q_{A_2}𝒞f=fq_{A_1}`$. The following lemma gives such a condition.
###### Lemma 3.6.
Suppose $`M_1`$ and $`M_2`$ are surfaces, $`A_1`$ and $`A_2`$ are cutting sets in $`M_1`$ and $`M_2`$ respectively, and $`f:M_1\rightarrow M_2`$ is a map such that $`f^{-1}(A_2)\subseteq A_1`$. Then there is a map $`𝒞f:𝒞_{A_1}M_1\rightarrow 𝒞_{A_2}M_2`$ such that $`q_{A_2}𝒞f=fq_{A_1}`$. Further, if $`f(B_1)\subseteq B_2`$, then $`𝒞f(q_{A_1}^{-1}(B_1))\subseteq q_{A_2}^{-1}(B_2)`$.
###### Proof.
If $`q_{A_1}(x)\in f^{-1}(A_2^C)`$, then we can take $`𝒞f(x)=q_{A_2}^{-1}(f(q_{A_1}(x)))`$. If $`f(q_{A_1}(x))`$ lies at a point of $`A_2`$ with one preimage, take $`𝒞f(x)=q_{A_2}^{-1}(f(q_{A_1}(x)))`$. Otherwise, let $`V`$ be a neighbourhood of $`f(q_{A_1}(x))`$ such that $`q_{A_2}^{-1}(V)`$ consists of two disjoint copies of $`H`$. Let $`\widehat{U}`$ be a semicircular neighbourhood of $`x`$ such that $`q_{A_1}`$ maps $`\widehat{U}`$ homeomorphically onto $`U`$, a subset of $`f^{-1}(V)`$. Let $`W=U\setminus A_1`$. $`W`$ is connected, so $`f(W)`$ is connected, and since $`f(W)\subseteq A_2^C`$, $`q_{A_2}^{-1}(f(W))`$ is connected, so lies in one of the components of $`q_{A_2}^{-1}(V)`$. Take $`𝒞f(x)`$ to be the preimage of $`f(q_{A_1}(x))`$ under $`q_{A_2}`$ in this component.
Clearly the map so defined is continuous at $`x`$, and $`𝒞f(𝒞_{A_1}B_1)\subseteq 𝒞_{A_2}B_2`$. ∎
Now suppose $`T=(T^P,T^V,T^U,T^S)`$ is a trellis for a map $`f`$ on $`M`$. We can cut along $`T^U`$ to obtain a surface $`𝒞_{T^U}M`$. We can also take the preimage of $`T^S`$ under the gluing map, and obtain a pair $`𝒞T=(𝒞_{T^U}M,q_{T^U}^{-1}(T^S))`$. For convenience, we will often write $`𝒞T=(X_T,Y_T)`$. An example of the cutting procedure is shown in Figure 2.
Since $`f^{-1}(T^U)\subseteq T^U`$, we have a map $`𝒞f:𝒞_{T^U}M\rightarrow 𝒞_{T^U}M`$, and since $`f(T^S)\subseteq T^S`$, $`𝒞f`$ is a map of pairs $`𝒞f:𝒞T\rightarrow 𝒞T`$. More generally, if $`f:M_1\rightarrow M_2`$ is a trellis map from $`T_1`$ to $`T_2`$, then we can define $`𝒞f:𝒞T_1\rightarrow 𝒞T_2`$. Since $`𝒞(fg)=𝒞f𝒞g`$, cutting induces a functor from the trellis category to that of topological pairs.
We now give some trivial, but fundamentally important properties of the $`T^U`$-cutting projection $`q_{T^U}`$.
###### Proposition 3.7.
1. $`q_{T^U}`$ maps regions of $`(M,T^U\cup T^S)`$ bijectively with regions of $`𝒞T`$.
2. $`f`$ has the same periodic orbits as $`𝒞f`$, except perhaps for those lying on $`T^U`$.
3. $`q_{T^U}`$ is a finite-to-one semiconjugacy, and so $`h_{\mathrm{𝑡𝑜𝑝}}(f)=h_{\mathrm{𝑡𝑜𝑝}}(𝒞f)`$.
### 3.4. Cross-Cut Surfaces and Divided Graphs
The relationship between graph maps and surface homeomorphisms has been studied in detail, particularly with regard to Thurston’s train tracks and the classification of surface diffeomorphisms. More recently, Bestvina and Handel \[BH95\], Franks and Misiurewicz \[FM93\] and Los \[Los93\] produced algorithms for computing the dynamics of isotopy classes of homeomorphisms relative to a finite invariant set. When studying trellises, we will need to consider *divided graphs*, where we have an invariant subset of the vertex set. The regions of a divided graph obtained from a trellis are typically very simple (often trees with two or three vertices) making these graphs particularly easy to study.
###### Definition 3.8 (Cross-cut surfaces).
A *cross-cut surface* is a topological pair $`(M,A)`$, where $`M`$ is a surface with nonempty boundary, and $`A`$ is a finite union of disjoint embedded intervals $`\alpha `$ such that $`\alpha \cap \partial M=\partial \alpha `$. $`A`$ is a *cross-cutting set* and the curves $`\alpha `$ are *cross-cuts*.
When cutting along $`T^U`$, all minimal segments of $`T^S`$ lift to cross-cuts of $`𝒞_{T^U}M`$. If $`T`$ is a transverse trellis, the endpoints of these lifts are disjoint, so $`𝒞T`$ is a cross-cut surface.
The main property of cross-cut surfaces is that they fibre nicely over graphs.
###### Definition 3.9 (Divided graph).
A divided graph is a topological pair $`(G,W)`$, where $`G`$ is a graph (simplicial 1-complex) and $`W`$ is a subset of $`\mathrm{Ver}(G)`$, the vertex set of $`G`$.
We now show that for any pair $`(M,A)`$ where $`M`$ is a surface and $`A`$ consists of nicely embedded curves, there is an exact, homotopy invertible map $`r`$ to a divided graph.
###### Theorem 3.10.
Let $`M`$ be a surface such that $`H_2(M)=0`$, and $`A\subseteq M`$ a set of embedded compact intervals such that $`A\cap \partial M`$ has only a finite number of components. Then there is a divided graph $`(G,W)`$ and an exact map $`(M,A)\rightarrow (G,W)`$ with a homotopy inverse. If $`(M,A)`$ is a cross-cut surface, then the homotopy inverse can be made an embedding and all homotopies exact.
###### Proof.
Let $`(X,W)`$ be the quotient space obtained by collapsing each component of $`A`$ to a point, and $`q`$ the quotient map. Clearly $`q`$ is exact, and since neighbourhoods of $`A`$ are topological discs, $`q`$ has a homotopy inverse $`j`$. Further, if $`A`$ consists of cross-cuts, this homotopy inverse can be made an embedding, as shown in Figure 3.
Choose a simplicial subdivision of $`X`$, such that no simplex contains more than one point of $`W`$. Since $`X`$ is the quotient of a surface by the curves $`A`$, each 1-simplex of $`X`$ is contained in no more than two 2-simplexes of $`X`$. Then any two vertices lying in the same component of $`X\setminus W`$ can be joined by an edge-path which does not touch $`W`$. Let $`Y`$ be a minimal 1-complex with the property that any two vertices in the same component of $`X\setminus W`$ lie in the same component of $`Y`$. By the minimality of $`Y`$, each component of $`Y`$ is contractible, so $`H_2(X,Y\cup W)=0`$. Hence there exists an edge $`e`$ such that $`e\not\subseteq Y`$ and $`e`$ is an edge of exactly one 2-simplex $`s`$ of $`X`$. Let $`X_1`$ be the simplicial complex formed by removing $`e`$ and $`s`$ from $`X`$. There is a strong deformation retract $`r_1:X\rightarrow X_1`$ such that $`r_1(s\setminus e)\subseteq s\setminus e`$, and both $`r_1`$ and the corresponding inclusion $`i_1`$ are exact. By iterating this procedure to remove one simplex at a time, we obtain the graph $`(G,W)`$.
Since the homotopy inverse for $`q`$ can be made an exact embedding if $`A`$ consists of cross cuts, and each inclusion is an exact embedding, we obtain the required homotopy inverse in the case where $`A`$ consists of cross-cuts. ∎
Thus there are maps $`r:(M,A)\rightarrow (G,W)`$ and $`s:(G,W)\rightarrow (M,A)`$ such that $`rs=id`$ and $`sr\simeq id`$. If $`𝐑`$ is a set of disjoint regions of $`(M,A)`$, and $`𝐑_G=\{r(R):R\in 𝐑\}`$, then $`r`$ is a region-preserving map $`(M,A;𝐑)\rightarrow (G,W;𝐑_G)`$.
Suppose $`f`$ is a dynamical system on $`(M,A;𝐑)`$. Let $`g=rfs`$. Clearly $`r`$ is a morphism from $`f`$ to $`g`$, so we can study the dynamics of $`f`$ by studying the dynamics of $`g`$ using relative Nielsen theory. If $`A`$ consists of cross cuts, then since $`sgr=srfsr\simeq f`$, there is also a morphism from $`g`$ to $`f`$. In this case, the Nielsen classes of $`f`$ and $`g`$ are equivalent. In the ideal situation, we can find a divided graph $`𝒢T`$ and a map $`𝒢f`$ such that all periodic points of $`𝒢f`$ persist under homotopy.
### 3.5. Graph Maps
Under certain conditions, all, or at least all but finitely many, of the periodic points of a system on a graph are unremovable under homotopy. If there is a morphism from a dynamical system on some other space to such a map, we obtain a lot of information about the periodic points of this system. One particularly appealing feature of maps on graphs is that we can easily describe homotopy classes combinatorially using simplicial maps.
###### Definition 3.11.
Let $`G`$ be a graph, $`\stackrel{~}{G}`$ a subdivision of $`G`$, and $`g:\stackrel{~}{G}\rightarrow G`$ a simplicial map. We call such a map $`g`$ a *graph map*.
Let $`e`$ be an edge of $`G`$, such that $`e=\stackrel{~}{e}_1\stackrel{~}{e}_2\cdots \stackrel{~}{e}_m`$, where the $`\stackrel{~}{e}_i`$ are edges of $`\stackrel{~}{G}`$. Then we write $`g(e)=g(\stackrel{~}{e}_1)g(\stackrel{~}{e}_2)\cdots g(\stackrel{~}{e}_m)=e_1e_2\cdots e_n`$, the *edge-path action* of $`g`$. If $`e_{i+1}=\overline{e}_i`$ for some $`i`$, then we say that $`g`$ *folds* the edge $`e`$.
Thus, graph maps either map an edge $`e`$ to a vertex, or stretch it in a piecewise-linear way over an edge-path $`e_1e_2\mathrm{}e_n`$ so that the only points of local non-injectivity on $`e`$ are isolated preimages of vertices.
Dynamics of graph maps can be represented by the *transition matrix*.
###### Definition 3.12 (Transition Matrix).
Let $`g`$ be a graph map of $`G`$ and let $`e_1,\ldots ,e_m`$ be the edges of $`G`$. Let $`A`$ be the $`m\times m`$ matrix with $`i,j`$-th element $`a_{ij}`$ equal to the number of times $`g`$ maps edge $`e_i`$ across $`e_j`$. $`A`$ is the *transition matrix* for $`g`$.
If $`A`$ is the transition matrix for $`g`$, then we can show that $`A^n`$ is the transition matrix for $`g^n`$. $`(A^n)_{ij}`$ measures the number of times $`g^n`$ maps edge $`e_i`$ across $`e_j`$. There must be one periodic point of $`g`$ of period $`n`$ in $`e_i`$ for each time $`g^n`$ maps $`e_i`$ across $`e_i`$ (except in the degenerate case where $`g^n(e_i)=e_i`$, where all points are periodic by linearity). Thus there are $`(A^n)_{ii}`$ period $`n`$ points of $`g`$ in $`e_i`$.
Naively, one would expect $`\mathrm{Tr}(A^n)=\sum _{i=1}^m(A^n)_{ii}`$ to give the total number of points of period $`n`$ for $`g`$. Unfortunately, periodic points in $`\mathrm{Ver}(G)`$ may be counted several times, or not at all. However, the error between $`\mathrm{Tr}(A^n)`$ and $`\mathrm{\#}\mathrm{Per}_n(g)`$ is bounded by a constant $`c`$ independent of $`n`$.
It is well known that the topological entropy of $`g`$ is given by the growth rate of the number of periodic points of $`g`$, $`lim\; sup_{n\rightarrow \mathrm{\infty }}\frac{1}{n}\mathrm{log}\mathrm{Tr}(A^n)`$, and is equal to the logarithm of the Perron-Frobenius eigenvalue of $`A`$, $`\lambda _{\mathrm{max}}(A)`$. $`A`$ determines a graph with $`a_{ij}`$ edges from vertex $`i`$ to vertex $`j`$, and the dynamics of $`g`$ are represented by the edge shift on this graph.
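As a concrete illustration (my own sketch, assuming numpy), this entropy bound can be read off a transition matrix by computing its spectral radius:

```python
import numpy as np

def entropy_bound(A):
    """log of the spectral radius of a non-negative transition matrix A."""
    return np.log(max(abs(np.linalg.eigvals(np.asarray(A, dtype=float)))))

# Sanity check on the full 2-shift, transition matrix [[1,1],[1,1]]:
print(entropy_bound([[1, 1], [1, 1]]))   # log 2 ~ 0.693
```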
Now suppose $`(G,W)`$ is a divided graph, $`𝐑`$ is a set of disjoint regions, and $`g`$ is a graph map of $`(G,W)`$. We can extend the definition of transition matrices to take into account the regions in $`𝐑`$ as follows:
###### Definition 3.13.
For all regions $`R\in 𝐑`$, define an $`m\times m`$ matrix $`P_R`$ by $`(P_R)_{ii}=1`$ if edge $`e_i`$ lies in $`R`$ and $`(P_R)_{ij}=0`$ otherwise. Let $`A_R=P_RA`$, and $`A_𝐑=\sum _{R\in 𝐑}A_R`$. If $`𝒲`$ is a word on $`𝐑`$ of length $`n`$, let $`A_𝒲=A_{R_0}A_{R_1}\cdots A_{R_{n-1}}`$, the *transition matrix for the code $`𝒲`$*.
When writing $`A_𝐑`$ we will typically drop rows and columns corresponding to edges not in $`𝐑`$, and draw a horizontal line between rows corresponding to edges in different regions.
$`\mathrm{Tr}(A_𝒲)`$ gives the number of points of period $`n`$ for $`g`$ with code $`𝒲`$ (except for small errors occurring at vertices). It is easy to check that
$$\underset{𝒲\in W^n(𝐑)}{\sum }\mathrm{Tr}(A_𝒲)=\mathrm{Tr}(A_𝐑^n)\le \mathrm{Tr}(A^n)$$
where $`W^n(𝐑)`$ is the set of words on $`𝐑`$ of length $`n`$. Again, $`\mathrm{Tr}(A_𝐑^n)`$ counts the number of points in $`\mathrm{Per}_{𝐑,n}(g)`$, up to an error which is constant in $`n`$.
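A small sketch (mine, not from the paper) of how Definition 3.13 can be computed in practice; the helper name `word_traces` and the toy region assignment are my own:

```python
import numpy as np
from itertools import product

def word_traces(A, regions, n):
    """Tr(A_W) for every word W of length n; regions maps name -> edge indices."""
    A = np.asarray(A)
    AR = {name: np.diag([1 if i in idx else 0 for i in range(len(A))]) @ A
          for name, idx in regions.items()}        # A_R = P_R A
    traces = {}
    for word in product(regions, repeat=n):
        AW = np.eye(len(A), dtype=int)
        for R in word:
            AW = AW @ AR[R]                         # A_W = A_{R_0} ... A_{R_{n-1}}
        traces[word] = int(np.trace(AW))
    return traces

# Toy example: the 2-shift with one edge in each region gives one orbit per code.
print(word_traces([[1, 1], [1, 1]], {'R1': [0], 'R2': [1]}, 2))
```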
We have shown that the periodic points of graph maps are easy to calculate. We now define a class of graph maps, called *tight graph maps*, which have minimal dynamics in the homotopy class.
###### Definition 3.14 (Tight Graph Map).
A graph map $`g:(G,W)\rightarrow (G,W)`$ is $`𝐑`$-*tight* if for all regions $`R\in 𝐑`$ and all edges $`e`$ in $`R`$, $`g(e)`$ does not fold, and if $`e_1`$ and $`e_2`$ are distinct edges from the same vertex $`v`$ in $`R\setminus W`$, then $`g(e_1)`$ and $`g(e_2)`$ have different initial edges.
Not every map of a divided graph is homotopic to a tight graph map, but all the maps of cross-cut surfaces we study exactly homotopy-retract onto a tight graph map, and we conjecture that this is true in general.
The fundamental theorem on tight graph maps is that the periodic points lie in different Nielsen classes, and that, typically, these Nielsen classes are essential.
###### Theorem 3.15.
Suppose $`g`$ is $`𝐑`$-tight and $`x_1,x_2\in \mathrm{Per}_{𝐑,n}(g)`$. Then either $`x_1`$ and $`x_2`$ lie in different Nielsen classes, or there is an edge-path joining $`x_1`$ to $`x_2`$ which is fixed by $`g^n`$. Further, if $`x\in \mathrm{Per}_{𝐑,n}(g)`$, then either $`\mathrm{Ind}(x;g)\ne 0`$ or $`x\in \mathrm{Ver}(G)`$.
###### Proof.
Suppose $`x_1\ne x_2`$ are Nielsen-equivalent, and $`\alpha _j:(I,J)\rightarrow (X,Y)`$ is a relating family for $`x_1\simeq x_2`$.
Suppose $`J\ne \mathrm{\emptyset }`$. Let $`s=\mathrm{inf}J`$ and $`y=\alpha _0(s)`$. Since $`x_1\notin Y`$, $`s>0`$. Let $`\beta _j:(I,\{1\})\rightarrow (X,Y)`$ be given by $`\beta _j(t)=\alpha _j(st)`$. Then $`(\beta _j)`$ is a relating family for $`x_1\simeq y`$, and further, there are regions $`R_j\in 𝐑`$ such that $`\beta _j(I)\subseteq R_j`$.
If $`J=\mathrm{\emptyset }`$, then we let $`\beta _j=\alpha _j`$, so again there are regions $`R_j\in 𝐑`$ such that $`\beta _j(I)\subseteq R_j`$.
By homotoping if necessary to remove any folds, we can assume that all curves $`\beta _j`$ are locally injective. Since $`g`$ is $`𝐑`$-tight, $`g(\beta _j)`$ is locally injective, so, up to parameterisation, $`g\beta _j=\beta _{j+1}`$. Hence $`g^n(\beta _0(I))=\beta _0(I)`$, so $`g^n\beta _0\simeq \beta _0`$. Thus $`g^n\beta _0=\beta _0`$, and so all points of $`\beta _0`$ are fixed by $`g^n`$.
If $`x`$ is an isolated repelling fixed point of $`g^n`$ and $`x`$ does not lie on a vertex of $`G`$, then $`\mathrm{Ind}(G,x;g^n)=\pm 1`$. ∎
### 3.6. Entropy of Trellis Maps
We now show that we can find a lower bound for the entropy of a trellis map in terms of the asymptotic Nielsen number. By Theorem 2.20, we need only show that $`𝒞f`$ has expansive periodicity near $`Y_T`$.
###### Theorem 3.16.
If $`f`$ is a homeomorphism with trellis $`T`$ such that $`T^P`$ consists of hyperbolic periodic points, then $`𝒞f:(X_T,Y_T)\rightarrow (X_T,Y_T)`$ has expansive periodicity near $`Y_T`$.
###### Proof.
Since $`Y_T`$ is the inverse image under the gluing map of a submanifold of the stable manifold for $`f`$, $`Y_T`$ has a neighbourhood $`W`$ for which every point of $`W\setminus Y_T`$ eventually leaves $`W`$. Since $`Y_T`$ is a union of disjoint copies of an interval with endpoints in $`\partial X_T`$, we can find neighbourhoods $`V_1`$, $`V_2`$ and $`V_3`$ of $`Y_T`$, each of which deformation retracts onto $`Y_T`$, such that $`\mathrm{cl}(V_1)\subseteq V_2`$, $`\mathrm{cl}(V_2)\subseteq V_3`$, $`𝒞f(V_1)\subseteq V_2`$, and every point of $`V_1\setminus Y_T`$ eventually leaves $`V_1`$. Choose an open cover $`𝒰`$ containing the components of $`V_1`$ and $`V_2\setminus Y_T`$, and such that for all other $`U\in 𝒰`$, $`U\cap V_1=\mathrm{\emptyset }`$ and $`U`$ intersects at most one component of $`V_2\setminus Y_T`$ (this is where we need $`\mathrm{cl}(V_2)\subseteq V_3`$). Let $`U_0=V_1`$. We claim that $`𝒰`$ and $`U_0`$ are the required open cover and neighbourhood of $`Y_T`$.
First notice that if $`x_1`$ and $`x_2`$ lie in the same component of $`V_1`$, but different components of $`V_1\setminus Y_T`$ (equivalently, every path from $`x_1`$ to $`x_2`$ in $`V_1`$ crosses $`Y_T`$), then $`f(x_1)`$ and $`f(x_2)`$ lie in different components of $`V_2`$. Suppose $`x_1,x_2\in U_0\setminus Y_T`$, and $`f^j(x_1)`$ and $`f^j(x_2)`$ are $`𝒰`$-close for all $`j`$. Then there exists a least $`i`$ such that either $`f^i(x_1)`$ or $`f^i(x_2)`$ is not in $`U_0=V_1`$. By minimality of $`i`$, $`f^i(x_1),f^i(x_2)\in V_2`$. Since $`f^i(x_1)`$ and $`f^i(x_2)`$ are $`𝒰`$-close, they must lie in the same component of $`V_2\setminus Y_T`$. This means that $`x_1`$ and $`x_2`$ lie in the same component of $`V_1\setminus Y_T`$, and since components of $`V_1`$ are simply connected, every path in $`V_1`$ from $`x_1`$ to $`x_2`$ is homotopic to one which does not intersect $`Y_T`$. ∎
We can use this to show that the entropy of a map with trellis $`T`$ is at least the asymptotic Nielsen number of $`𝒞f`$.
###### Corollary 3.17.
If $`f`$ is a homeomorphism with transverse trellis $`T`$ such that $`T^P`$ consists of hyperbolic periodic points, then $`h_{\mathrm{𝑡𝑜𝑝}}(f)\ge N_{\mathrm{\infty }}(𝒞f)`$.
###### Proof.
$`h_{\mathrm{𝑡𝑜𝑝}}(f)=h_{\mathrm{𝑡𝑜𝑝}}(𝒞f)`$ since the gluing map is a finite-to-one surjective semiconjugacy, and $`h_{\mathrm{𝑡𝑜𝑝}}(𝒞f)\ge N_{\mathrm{\infty }}(𝒞f)`$ by Theorem 3.16 and Theorem 2.20. ∎
If the homeomorphism $`f`$ for the trellis $`T`$ is clear, we will sometimes call $`N_{\mathrm{\infty }}(𝒞f)`$ the *entropy of $`T`$*.
## 4. Examples
###### Example 4.1 (The Smale horseshoe).
First we give a familiar example, the Smale horseshoe map. Recall that the Smale horseshoe map $`f:S^2S^2`$ maps the stadium-shaped area of Figure 4 into itself as shown, mapping the square $`S`$ linearly across itself with uniform expansion in the horizontal direction and contraction in the vertical direction.
$`f`$ maps the semicircular region $`D_1`$ into itself so that all points in $`D_1`$ are attracted to a fixed point, and maps $`D_2`$ into $`D_1`$. Outside the stadium, $`f`$ has a single repelling fixed point.
There is a hyperbolic saddle point in $`S`$, and the stable and unstable curves form a homoclinic tangle. The *horseshoe trellis* $`T_2`$ is the subset of the tangle shown in Figure 5(a). Except for two fixed points outside $`S`$, the nonwandering set $`\mathrm{\Lambda }`$ of $`f`$ lies in the regions $`R_1`$ and $`R_2`$.
The $`(U,S,𝒪)`$-coordinates for the vertices are
$$(0,0,+),(1,7,-),(2,4,+),(3,3,-),(4,2,+),(5,5,-),(6,6,+),(7,1,-)$$
To study the dynamics, we first cut along the unstable set $`T_2^U`$ of the trellis (dropping the ends) as shown in Figure 5(b). This gives us a topological pair $`𝒞T_2=(X_{T_2},Y_{T_2})`$, where $`X_{T_2}`$ is the surface obtained by the cutting and $`Y_{T_2}`$ is a subset of $`X_{T_2}`$ corresponding to the stable set $`T_2^S`$ of the trellis. $`f`$ naturally induces a map $`𝒞f`$ of $`𝒞T_2`$.
Let $`G_{T_2}`$ be the graph embedded in $`𝒞T_2`$ as shown in Figure 5(c). Letting $`W_{T_2}=G_{T_2}\cap Y_{T_2}`$, we obtain a topological pair $`𝒢T_2=(G_{T_2},W_{T_2})`$ onto which we can deformation retract $`(X_{T_2},Y_{T_2})`$. This collapsing induces a map $`𝒢f`$ on $`𝒢T_2`$.
Just by knowing the action of $`f`$ on $`T_2^S`$, we can deduce the action of $`𝒢f`$ on $`W`$. In this case we have
$$p_0,p_3,p_4\mapsto p_0,\quad p_1,p_2,p_5\mapsto p_3\quad \mathrm{and}\quad p_6\mapsto p_4$$
Since $`𝒢T_2`$ is a tree, this determines the homotopy class of $`𝒢f`$ as a self-map of $`𝒢T_2`$ completely.
A tight graph map in the homotopy class of $`𝒢f`$ maps the arcs corresponding to regions $`R_1`$ and $`R_2`$ across each other. Using the labeling of Figure 5(d), we have
$$a\mapsto abc\quad \mathrm{and}\quad c\mapsto \overline{c}\overline{b}\overline{a}$$
Thus $`𝒢T_2`$ must have a subset on which $`𝒢f`$ is conjugate to the one-sided shift on two symbols. Therefore, the trellis forces dynamics conjugate with the shift on two symbols. In particular, any map with the same trellis as the Smale horseshoe $`f`$ must have entropy $`h_{\mathrm{𝑡𝑜𝑝}}\ge \mathrm{log}2`$.
###### Example 4.2 (Iterates of trellis maps).
Again consider the trellis $`T_2`$ of Example 4.1, and let $`f`$ be the second iterate of the horseshoe map. One might expect the homotopy class of $`f`$ to have *more* entropy than that of the horseshoe map itself. However, $`𝒢f`$ maps all points $`p_0,\ldots ,p_6`$ to $`p_0`$ so is homotopic to a constant map. Thus we obtain no information about the dynamics. We can find diffeomorphisms homotopic to $`f`$ with this trellis and arbitrarily small entropy.
###### Example 4.3 (Trivial trellises).
Consider the trellis $`T_1`$ of Figure 6(a), which is a subset of the horseshoe trellis, and let $`f`$ be the horseshoe map. Cutting along the unstable manifolds we obtain the surface $`𝒞T_1`$ shown in Figure 6(b).
The components $`Y_0`$, $`Y_1`$ and $`Y_2`$ of $`Y_{T_1}`$ all map to $`Y_0`$ under $`𝒞f`$, so $`𝒞f`$ is homotopic to a constant. Therefore, our topological methods give no interesting dynamics.
An even more extreme example is given by the trellis $`T_0`$ of Figure 7(a). Cutting along the unstable manifolds we obtain the surface $`𝒞T_0`$ of Figure 7(b).
All maps on $`𝒞T_0`$ are homotopic to a constant, so again, applying our topological methods to any map with this trellis yields no information.
In each of these cases, we know that if $`f`$ is a diffeomorphism with this trellis, $`h_{\mathrm{𝑡𝑜𝑝}}(f)>0`$. However, we can find diffeomorphisms with arbitrarily small entropy.
###### Example 4.4 (The type-3 trellis).
The type-$`3`$ trellis $`T_3`$ is the simplest nontrivial trellis other than the horseshoe. It occurs in the Hénon map for a range of parameter values, and a particular case is shown in Figure 8.
This figure was drawn using the DsTool implementation of the algorithm of Krauskopf and Osinga \[KO98\]. The trellis is shown in Figure 9(a).
The $`(U,S,𝒪)`$-coordinates for the vertices are
$`(0,0,+),(1,9,-),(2,6,+),(3,5,-),(4,4,+),(5,3,-),`$
$`(6,2,+),(7,7,-),(8,8,+),(9,1,-)`$
and the vertices map $`(1,9,-)\mapsto (3,5,-)\mapsto (5,3,-)\mapsto (9,1,-)`$.
Cutting along the unstable manifold, we obtain the surface $`𝒞T_3`$ and the embedded graph $`𝒢T_3`$ as shown in Figure 9(b). The action on the distinguished vertex set is
$$p_0,p_4,p_5\mapsto p_0,\quad p_1,p_2,p_6\mapsto p_3,\quad p_3\mapsto p_4,\quad p_7\mapsto p_5\quad \mathrm{and}\quad p_8\mapsto p_7$$
The graph is a tree, and the regions $`R_1`$ and $`R_2`$ are expanding under the tight map
$$a\mapsto abc_1\overline{c}_2,\quad b,\quad c_1\mapsto c_2,\quad c_2\mapsto c_3\quad \mathrm{and}\quad c_3\mapsto abc_1$$
This gives transition matrix (on $`\{a,c_1,c_2,c_3\}`$)
$$A=\left(\begin{array}{cccc}1& 1& 1& 0\\ \hline 0& 0& 1& 0\\ 0& 0& 0& 1\\ 1& 1& 0& 0\end{array}\right)$$
The horizontal line in the matrix separates the rows corresponding to edges of $`R_1`$ from edges of $`R_2`$. The edge shift for this transition matrix is given in Figure 10, and since $`a`$ lies in $`R_1`$ and $`c_1,c_2,c_3`$ lie in $`R_2`$ we obtain a sofic shift on regions.
The characteristic polynomial of $`A`$ is $`\lambda (\lambda ^3-\lambda ^2-2)`$, and the Perron-Frobenius eigenvalue $`\lambda _{\mathrm{max}}`$ of $`A`$ therefore satisfies $`\lambda _{\mathrm{max}}^3-\lambda _{\mathrm{max}}^2-2=0`$. The value of $`\lambda _{\mathrm{max}}`$ is approximately $`1.70`$, giving a lower bound of $`0.527`$ for the topological entropy.
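These numbers are easy to confirm (my own check, assuming numpy):

```python
import numpy as np

# Transition matrix for the type-3 trellis, edges ordered (a, c1, c2, c3).
A = np.array([[1, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0]], dtype=float)
lmax = max(abs(np.linalg.eigvals(A)))
assert abs(lmax**3 - lmax**2 - 2) < 1e-8   # root of the cubic factor
print(lmax, np.log(lmax))                  # ~1.696, entropy bound just above 0.527
```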
###### Example 4.5 (The type-$`n`$ trellis).
The horseshoe trellis and type-$`3`$ trellis are part of a family of simple trellises. The general type-$`n`$ trellis has vertices with coordinates
$`(0,0,+),(1,2n+3,-),(2,2n,+),(3,2n-1,-),(4,2n-2,+),\ldots ,(2n-1,3,-),`$
$`(2n,2,+),(2n+1,2n+1,-),(2n+2,2n+2,+),(2n+3,1,-)`$
We consider trellis maps taking $`(1,2n+3,-)`$ to $`(3,2n-1,-)`$. The graph $`𝒢T_n`$, shown in Figure 11, has two expanding regions $`R_1`$ and $`R_2`$ under the tight map.
$`R_1`$ has a single edge $`a`$, and $`R_2`$ has edges $`c_1,c_2,\ldots ,c_n`$ which map:
$`a`$ $`\mapsto `$ $`abc_1\overline{c}_2`$
$`c_i`$ $`\mapsto `$ $`\{\begin{array}{cc}c_{i+1}\hfill & \mathrm{if}\;i<n\hfill \\ abc_1\hfill & \mathrm{if}\;i=n\hfill \end{array}`$
where $`b`$ is an edge from the end of $`a`$ to the beginning of $`c_1`$. The transition matrix (on $`\{a,c_1,c_2,\ldots ,c_n\}`$) is
$$A=\left(\begin{array}{cccccc}1& 1& 1& 0& \cdots & 0\\ \hline 0& 0& 1& 0& \cdots & 0\\ 0& 0& 0& 1& \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0& 0& 0& 0& \cdots & 1\\ 1& 1& 0& 0& \cdots & 0\end{array}\right)$$
The characteristic polynomial of this matrix is $`\lambda (\lambda ^n-\lambda ^{n-1}-2)`$, from which we can find the entropy of the system. In particular $`\lambda _{\mathrm{max}}\rightarrow 1`$ as $`n\rightarrow \mathrm{\infty }`$, so $`h_{\mathrm{𝑡𝑜𝑝}}\rightarrow 0`$.
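A quick numerical confirmation (mine) of this asymptotic behaviour, with the matrix assembled from the edge images above:

```python
import numpy as np

def type_n_matrix(n):
    """(n+1)x(n+1) transition matrix for the type-n trellis, edges (a, c1..cn)."""
    A = np.zeros((n + 1, n + 1))
    A[0, 0] = A[0, 1] = A[0, 2] = 1      # a -> a b c1 cbar2 crosses a, c1, c2
    for i in range(1, n):
        A[i, i + 1] = 1                  # c_i -> c_{i+1}
    A[n, 0] = A[n, 1] = 1                # c_n -> a b c1 crosses a, c1
    return A

for n in (3, 5, 10, 20):
    lmax = max(abs(np.linalg.eigvals(type_n_matrix(n))))
    print(n, np.log(lmax))               # entropy bound decreases towards 0
```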
###### Example 4.6 (The effect of boundary components).
Consider the trellis $`T_D`$ shown in Figure 12(a).
The $`(U,S,𝒪)`$-coordinates for the vertices are
$`(0,0,+),(1,10,-),(2,7,+),(3,4,-),(4,3,+),(5,8,-),(6,9,+),`$
$`(7,2,-),(8,5,+),(9,6,-),(10,1,+)`$
The graph $`𝒢T_D`$ is shown in Figure 12(b), and the tight graph map is
$$\begin{array}{cccc}a_1\mapsto a_1a_2a_3\hfill & a_2\hfill & a_3\mapsto \overline{a}_3\overline{a}_2\overline{a}_1\hfill & \\ b_1\hfill & & & \\ c_1\mapsto c_1c_2c_3\hfill & c_2\hfill & c_3\mapsto \overline{c}_3\overline{c}_2\overline{c}_1\hfill & c_4\mapsto a_1a_2a_3\hfill \end{array}$$
This map has entropy $`h_{\mathrm{𝑡𝑜𝑝}}=\mathrm{log}2`$.
Now suppose the trellis is embedded in a surface with three holes positioned at the stars in Figure 12(a). The graph of the trellis is shown in Figure 12(c). The tight map is
$$\begin{array}{ccccc}a_1\mapsto a_1a_2a_3\hfill & a_2\mapsto a_4\hfill & a_3\mapsto \overline{a}_3\overline{a}_2\overline{a}_1\hfill & a_4\mapsto b_1b_2\overline{b}_1\hfill & \\ b_1\mapsto c_1c_2c_3\hfill & b_2\mapsto c_4c_5\overline{c}_4\hfill & & & \\ c_1\mapsto c_1c_2c_3\hfill & c_2\mapsto c_4c_5\overline{c}_4\hfill & c_3\mapsto \overline{c}_3\overline{c}_2\overline{c}_1\hfill & c_4\mapsto a_1a_2a_3\hfill & c_5\mapsto a_4\hfill \end{array}$$
Since the map does not fold the edge paths $`a_1a_2a_3`$ and $`c_1c_2c_3c_4`$, the dynamics of this map are the same as that of $`a\mapsto a\overline{a}b`$, $`b\mapsto c`$ and $`c\mapsto c\overline{c}a`$. From this we can show that the characteristic polynomial of the transition matrix has a factor $`\lambda ^2-3\lambda +1`$, from which we obtain entropy $`h_{\mathrm{𝑡𝑜𝑝}}(f)\ge h_{\mathrm{𝑡𝑜𝑝}}(g_T)=\mathrm{log}(\frac{3+\sqrt{5}}{2})`$.
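The factor $`\lambda ^2-3\lambda +1`$ can be checked numerically (my own sketch; the matrix encodes my reading of the reduced map above):

```python
import numpy as np

# Transition matrix of the reduced map a -> a abar b, b -> c, c -> c cbar a.
A = np.array([[2., 1., 0.],    # a crosses a twice and b once
              [0., 0., 1.],    # b crosses c once
              [1., 0., 2.]])   # c crosses a once and c twice
lmax = max(abs(np.linalg.eigvals(A)))
assert abs(lmax - (3 + np.sqrt(5)) / 2) < 1e-9   # root of x^2 - 3x + 1
print(np.log(lmax))            # log((3 + sqrt 5)/2) ~ 0.962
```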
Note that this entropy is larger than that for the trellis in a surface without holes. Collapsing the holes to points, we obtain a periodic orbit of period $`3`$. The braid type of this orbit is pseudo-Anosov, and the minimal representative has entropy $`\mathrm{log}(\frac{3+\sqrt{5}}{2})`$, the same as that computed above. Further, the trellis is exhibited by a blow-up of the pseudo-Anosov homeomorphism. Thus all the dynamics are forced by the isotopy class in the surface.
###### Example 4.7 (A toral Anosov trellis).
Let $`A`$ be the matrix
$$A=\left(\begin{array}{cc}2& 1\\ 1& 1\end{array}\right)$$
The eigenvalues of $`A`$ are $`\frac{1}{2}(3\pm \sqrt{5})`$ and the eigenvectors are
$$v_u=\left(\begin{array}{c}1\\ \frac{-1+\sqrt{5}}{2}\end{array}\right)\quad v_s=\left(\begin{array}{c}1\\ -\frac{1+\sqrt{5}}{2}\end{array}\right)$$
The trellis $`T_A`$ of Figure 13(a) occurs in the toral Anosov map with matrix $`A`$.
The points of intersection have coordinates
$`q_0`$ $`=`$ $`(0,0)`$
$`q_1`$ $`=`$ $`\frac{1}{10}(-15+7\sqrt{5},25-11\sqrt{5})`$
$`q_2`$ $`=`$ $`\frac{1}{10}(-5+3\sqrt{5},10-4\sqrt{5})`$
$`q_3`$ $`=`$ $`\frac{1}{10}(-10+6\sqrt{5},20-8\sqrt{5})`$
$`q_4`$ $`=`$ $`\frac{1}{10}(2\sqrt{5},5-\sqrt{5})`$
and the Anosov map $`f`$ fixes $`q_0`$ and maps $`q_1\mapsto q_2\mapsto q_4`$.
The graph $`𝒢T_A`$ for $`T_A`$ is shown in Figure 13(b) and has edges which map:
$$a_1\mapsto a_1,\quad a_2\mapsto ba_2,\quad a_3\mapsto ca_3,\quad b\mapsto ba_2\overline{a}_3\overline{c}\quad \mathrm{and}\quad c\mapsto a_1\overline{a}_2\overline{b}$$
If $`\alpha =a_1`$, $`\beta =ba_2`$ and $`\gamma =ca_3`$, then we have
$$\alpha \mapsto \alpha ,\quad \beta \mapsto \beta \overline{\gamma }\beta \quad \mathrm{and}\quad \gamma \mapsto \alpha \overline{\beta }\gamma $$
Thus the growth rate of the number of periodic points is simply the Perron-Frobenius eigenvalue $`\frac{1}{2}(3+\sqrt{5})`$ of $`A`$, and all orbits of the Anosov map persist under homotopies preserving the trellis structure.
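A one-line check (mine) that the collapsed edge map reproduces the growth rate of $`A`$:

```python
import numpy as np

# beta -> beta gammabar beta crosses beta twice and gamma once;
# gamma -> alpha betabar gamma crosses beta once and gamma once.
T = np.array([[2., 1.],
              [1., 1.]])       # this is the Anosov matrix A itself
assert np.isclose(max(abs(np.linalg.eigvals(T))), (3 + np.sqrt(5)) / 2)
```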
###### Example 4.8 (A heteroclinic trellis).
The heteroclinic trellis $`T_H`$ shown in Figure 14(a) occurs in the Smale horseshoe.
There are two saddle fixed points, $`p_0`$ and $`p_1`$. Cutting along the unstable manifold, we obtain the surface $`𝒞T_H`$ of Figure 14(b), and we can retract this to the graph $`𝒢T_H`$ as shown in Figure 14(c). The action on the distinguished vertex set is:
$$p_0,p_4\mapsto p_0,\quad p_1,p_5\mapsto p_2,\quad p_2\mapsto p_5\quad \mathrm{and}\quad p_3\mapsto p_4$$
The regions $`R_1`$, $`R_2`$, $`R_3`$ and $`R_4`$ are expanding under the tight map, for which
$$a\mapsto ab,\quad b\mapsto c\overline{e}_2e_3d,\quad c\mapsto \overline{d}\quad \mathrm{and}\quad e\mapsto ab$$
This gives transition matrix (on $`\{a,b,c,d\}`$)
$$A=\left(\begin{array}{cccc}1& 1& 0& 0\\ \hline 0& 0& 1& 1\\ 0& 0& 0& 1\\ 1& 1& 0& 0\end{array}\right)$$
The characteristic polynomial for $`A`$ is $`\lambda (\lambda ^3-\lambda ^2-\lambda -1)`$, and the maximum eigenvalue is $`\lambda _{\mathrm{max}}\approx 1.839`$. $`\mathrm{log}\lambda _{\mathrm{max}}\approx 0.609`$, so $`h_{\mathrm{𝑡𝑜𝑝}}(f)\ge 0.609`$ for any map with this trellis action. Note that this entropy bound is less than that obtained from the horseshoe trellis $`T_2`$.
###### Example 4.9 (A trellis with tangential intersections).
Consider the trellis $`T_I`$ of Figure 15(a), which occurs in bifurcations from the Smale horseshoe and has tangential intersections.
Cutting along the unstable manifold, we obtain the surface $`𝒞T_I`$ shown in Figure 15(b). This is not a cross-cut surface, and while there is an exact deformation retract from this surface to a divided graph, we shall study the induced map using the Lefschetz theory.
The cohomology action gives
$$\alpha \mapsto \alpha +\beta +\gamma ,\quad \beta \mapsto 0\quad \mathrm{and}\quad \gamma \mapsto -\alpha -\beta -\gamma $$
Just considering the cohomology action on $`\alpha `$ and $`\gamma `$, we have Lefschetz matrices
$$A=\left(\begin{array}{cc}1& 1\\ \hline -1& -1\end{array}\right)\quad A_{R_1}=\left(\begin{array}{cc}1& 1\\ \hline 0& 0\end{array}\right)\quad A_{R_2}=\left(\begin{array}{cc}0& 0\\ \hline -1& -1\end{array}\right)$$
Thus for any word $`𝒲`$ on $`R_1`$ and $`R_2`$, $`L(A_𝒲)=\pm 1`$ and so $`\widehat{\mathrm{Per}}_𝒲(f)\ne \mathrm{\emptyset }`$. Again, we have at least $`2^n`$ points of period $`n`$ for $`f`$, and since $`R_1`$ and $`R_2`$ are disjoint, we can again deduce that the topological entropy is at least $`\mathrm{log}2`$.
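The claim that every code has Lefschetz number $`\pm 1`$ can be verified by brute force (a sketch of mine, using the sign-restored matrices above):

```python
import numpy as np
from itertools import product

A_R1 = np.array([[1, 1], [0, 0]])
A_R2 = np.array([[0, 0], [-1, -1]])

# Every word on {R1, R2} up to length 6 has trace +1 or -1, hence L(A_W) != 0.
for n in range(1, 7):
    for word in product((A_R1, A_R2), repeat=n):
        M = np.eye(2, dtype=int)
        for W in word:
            M = M @ W
        assert abs(np.trace(M)) == 1
```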
## 5. Further Study
In this paper we describe a general framework for studying maps with tangles. However there are still many unanswered questions and opportunities for further work.
One particularly important problem is that of optimality of these methods. This is intimately related to the conditions we place on the map itself. As an example, consider a homoclinic trellis on the sphere with two intersections, and a map $`f`$ with this trellis. If $`f`$ is a diffeomorphism, we know that $`f`$ must have a horseshoe in some iterate, and hence be chaotic and have exponential growth of periodic points. Unfortunately, as previously remarked, we cannot find a lower bound for topological entropy, even though we know it must be strictly positive. Using the pruning theory of de Carvalho \[dC\] we can show that there is a homeomorphism with this trellis with zero entropy. This homeomorphism has stable and unstable curves at the fixed point, but this fixed point is not hyperbolic. Therefore, it is not surprising that our methods do not give periodic orbits when applied in this case.
For many examples, we can show that there is a uniformly hyperbolic diffeomorphism with the given trellis which realises the entropy bound given by the asymptotic Nielsen number. As remarked above, this cannot be true in general, but a nice result would be the following:
###### Conjecture 5.1.
Let $`f`$ be a trellis map for the trellis $`T`$. Then $`N_{\mathrm{\infty }}(f)`$ is a lower bound for the topological entropy of all maps with trellis $`T`$ homotopic to $`f`$. Further, there is a homeomorphism homotopic to $`f`$ with topological entropy $`N_{\mathrm{\infty }}(f)`$, and for all $`ϵ>0`$ there is a uniformly hyperbolic diffeomorphism $`h`$ homotopic to $`f`$ such that $`h_{\mathrm{top}}(h)<N_{\mathrm{\infty }}(f)+ϵ`$.
A possible way of constructing these diffeomorphisms is by using a tight graph map. For this method to work, we probably need to show that for any trellis map $`f`$, there is a tight graph map isomorphic to $`𝒞f`$ (for a suitable regional decomposition). Since we cannot in general find a morphism in the category of dynamical systems from a general graph map to a tight one without losing entropy, this could be a tricky problem.
Another interesting problem is the case of non-invertible maps. We have shown that there are no major problems unless points not in $`T^U`$ map over $`T^U`$, in which case our method breaks down. Sander \[San\] showed that in general, non-invertible maps may have non-trivial tangles but still be non-chaotic. However, we still may be able to deduce chaos in more general situations than those described here.
Ultimately, we would like to refine this procedure into an algorithm suitable for implementation on a computer. This requires a way of encoding the important properties of trellises and trellis maps combinatorially. As we have seen, the $`(U,S,𝒪)`$ coordinate description for the vertices provides a good description of a homoclinic trellis on a sphere; in more complicated cases we have to take into account the homotopy classes of the curves in the surface $`M`$, and also the way different curves wind round each other.
Having obtained a complete description of a single trellis, we would then like to consider bifurcation sequences. This requires an especially good understanding of trellises with tangential intersections. Since Nielsen classes are open in the set of periodic points of a given period, they cannot be removed by sufficiently small perturbations, even if the trellis is destroyed. Therefore, our analysis of the trellis in Example 4.9 shows that all periodic horseshoe orbits are present at the bifurcation of the trellis, and therefore, given a sufficiently small perturbation, all such orbits of sufficiently low period remain. However, the possible orderings in which periodic orbits may be destroyed is unknown, though some results have been obtained by Hall \[Hal94\].
# Cross-Language Information Retrieval for Technical Documents
## 1 Introduction
Cross-language information retrieval (CLIR), where the user presents queries in one language to retrieve documents in another language, has recently been one of the major topics within the information retrieval community. One strong motivation for CLIR is the growing number of documents in various languages accessible via the Internet. Since queries and documents are in different languages, CLIR requires a translation phase along with the usual monolingual retrieval phase. For this purpose, existing CLIR systems adopt various techniques explored in natural language processing (NLP) research. In brief, bilingual dictionaries, corpora, thesauri and machine translation (MT) systems are used to translate queries or/and documents.
In this paper, we propose a Japanese/English CLIR system for technical documents, focusing on translation of technical terms. Our purpose also includes integration of different components within one framework. Our research is partly motivated by the “NACSIS” test collection for IR systems \[Kando et al., 1998\]<sup>1</sup><sup>1</sup>1http://www.rd.nacsis.ac.jp/~ntcadm/index-en.html, which consists of Japanese queries and Japanese/English abstracts extracted from technical papers (we will elaborate on the NACSIS collection in Section 4). Using this collection, we investigate the effectiveness of each component as well as the overall performance of the system.
As with MT systems, existing CLIR systems still find it difficult to translate technical terms and proper nouns, which are often unlisted in general dictionaries. Since most CLIR systems target newspaper articles, which consist mainly of general words, the problem related to unlisted words has been less explored than other CLIR subtopics (such as resolution of translation ambiguity). However, Pirkola \[Pirkola, 1998\], for example, used a subset of the TREC collection related to health topics, and showed that combination of general and domain specific (i.e., medical) dictionaries improves the CLIR performance obtained with only a general dictionary. This result shows the potential contribution of technical term translation to CLIR. At the same time, note that even domain specific dictionaries do not exhaustively list possible technical terms. We classify problems associated with technical term translation as given below:
1. technical terms are often compound words, which can be progressively created simply by combining multiple existing morphemes (“base words”), and therefore it is not entirely satisfactory to exhaustively enumerate newly emerging terms in dictionaries,
2. Asian languages often represent loanwords based on their special phonograms (primarily for technical terms and proper nouns), which creates new base words progressively (in the case of Japanese, the phonogram is called katakana).
To counter problem (1), we use the compound word translation method we proposed \[Fujii and Ishikawa, 1999\], which selects appropriate translations based on the probability of occurrence of each combination of base words in the target language. For problem (2), we use “transliteration” \[Chen et al., 1998, Knight and Graehl, 1998, Wan and Verspoor, 1998\]. Chen et al. \[Chen et al., 1998\] and Wan and Verspoor \[Wan and Verspoor, 1998\] proposed English-Chinese transliteration methods relying on the property of the Chinese phonetic system, which cannot be directly applied to transliteration between English and Japanese. Knight and Graehl \[Knight and Graehl, 1998\] proposed a Japanese-English transliteration method based on the mapping probability between English and Japanese katakana sounds. However, since their method needs large-scale phoneme inventories, we propose a simpler approach using surface mapping between English and katakana characters, rather than sounds.
Section 2 overviews our CLIR system, and Section 3 elaborates on the translation module focusing on compound word translation and transliteration. Section 4 then evaluates the effectiveness of our CLIR system by way of the standardized IR evaluation method used in TREC programs.
## 2 System Overview
Before explaining our CLIR system, we classify existing CLIR into three approaches in terms of the implementation of the translation phase. The first approach translates queries into the document language \[Ballesteros and Croft, 1998, Carbonell et al., 1997, Davis and Ogden, 1997, Fujii and Ishikawa, 1999, Hull and Grefenstette, 1996, Kando and Aizawa, 1998, Okumura et al., 1998\], while the second approach translates documents into the query language \[Gachot et al., 1996, Oard and Hackett, 1997\]. The third approach transfers both queries and documents into an interlingual representation: bilingual thesaurus classes \[Mongar, 1969, Salton, 1970, Sheridan and Ballerini, 1996\] and language-independent vector space models \[Carbonell et al., 1997, Dumais et al., 1996\]. We prefer the first approach, the “query translation”, to other approaches because (a) translating all the documents in a given collection is expensive, (b) the use of thesauri requires manual construction or bilingual comparable corpora, (c) interlingual vector space models also need comparable corpora, and (d) query translation can easily be combined with existing IR engines and thus the implementation cost is low. At the same time, we concede that other CLIR approaches are worth further exploration.
Figure 1 depicts the overall design of our CLIR system, where most components are the same as those for monolingual IR, excluding “translator”.
First, “tokenizer” processes “documents” in a given collection to produce an inverted file (“surrogates”). Since our system is bidirectional, tokenization differs depending on the target language. In the case where documents are in English, tokenization involves eliminating stopwords and identifying root forms for inflected words, for which we used “WordNet” \[Miller et al., 1993\]. On the other hand, we segment Japanese documents into lexical units using the “ChaSen” morphological analyzer \[Matsumoto et al., 1997\] and discard stopwords. In the current implementation, we use word-based uni-gram indexing for both English and Japanese documents. In other words, compound words are decomposed into base words in the surrogates. Note that indexing and retrieval methods are theoretically independent of the translation method.
Thereafter, the “translator” processes a query in the source language (“S-query”) to output the translation (“T-query”). T-query can consist of more than one translation, because multiple translations are often appropriate for a single technical term.
Finally, the “IR engine” computes the similarity between T-query and each document in the surrogates based on the vector space model \[Salton and McGill, 1983\], and sorts documents according to the similarity, in descending order. We compute term weight based on the notion of TF·IDF. Note that T-query is decomposed into base words, as performed in the document preprocessing.
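To make the retrieval phase concrete, the sketch below implements a plain TF·IDF vector space ranking over base-word tokens. It is an illustration only: the paper does not specify its exact weighting variant, so a common logarithmic IDF is assumed here, and the toy documents are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF*IDF vectors for tokenized documents (lists of base words)."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    n = len(docs)
    idf = {w: math.log(n / df[w]) for w in df}
    vecs = [{w: tf * idf[w] for w, tf in Counter(doc).items()} for doc in docs]
    return vecs, idf

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Rank documents against a (translated, base-word decomposed) query.
docs = [["register", "transfer", "language"], ["data", "mining", "method"]]
vecs, idf = tfidf_vectors(docs)
query = {w: idf.get(w, 0.0) for w in ["data", "mining"]}
ranking = sorted(range(len(docs)), key=lambda i: cosine(query, vecs[i]),
                 reverse=True)
print(ranking)  # [1, 0]: the "data mining" abstract ranks first
```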
In Section 3, we will explain the “translator” in Figure 1, which involves compound word translation and transliteration modules.
## 3 Translation Module
### 3.1 Overview
Given a query in the source language, tokenization is first performed as for target documents (see Figure 1). To put it more precisely, we use WordNet and ChaSen for English and Japanese queries, respectively. We then discard stopwords and extract only content words. Here, “content words” refer to both single and compound words. Let us take the following query as an example:
* improvement of data mining methods.
For this query, we discard “of”, to extract “improvement” and “data mining methods”.
Thereafter, we translate each extracted content word individually. Note that we currently do not consider relation (e.g. syntactic relation and collocational information) between content words. If a single word, such as “improvement” in the example above, is listed in our bilingual dictionary (we will explain the way to produce the dictionary in Section 3.2), we use all possible translation candidates as query terms for the subsequent retrieval phase.
Otherwise, compound word translation is performed. In the case of Japanese-English translation, we consider all possible segmentations of the input word, by consulting the dictionary. Then, we select such segmentations that consist of the minimal number of base words. During the segmentation process, the dictionary derives all possible translations for base words. At the same time, transliteration is performed whenever katakana sequences unlisted in the dictionary are found. On the other hand, in the case of English-Japanese translation, transliteration is applied to any unlisted base word (including the case where the input English word consists of a single base word). Finally, we compute the probability of occurrence of each combination of base words in the target language, and select those with greater probabilities, for both Japanese-English and English-Japanese translations.
### 3.2 Compound Word Translation
This section briefly explains the compound word translation method we previously proposed \[Fujii and Ishikawa, 1999\]. This method translates input compound words on a word-by-word basis, maintaining the word order in the source language<sup>2</sup><sup>2</sup>2A preliminary study showed that approximately 95% of compound technical terms defined in a bilingual dictionary maintain the same word order in both source and target languages.. The formula for the source compound word and one translation candidate are represented as below.
$`S`$ $`=`$ $`s_1,s_2,\dots ,s_n`$
$`T`$ $`=`$ $`t_1,t_2,\dots ,t_n`$
Here, $`s_i`$ and $`t_i`$ denote the $`i`$-th base words in the source and target languages, respectively. Our task, i.e., to select the $`T`$ which maximizes $`P(T|S)`$, is transformed into Equation (1) through use of Bayes’ theorem.
$$\mathrm{arg}\underset{T}{\mathrm{max}}P(T|S)=\mathrm{arg}\underset{T}{\mathrm{max}}P(S|T)P(T)$$
(1)
$`P(S|T)`$ and $`P(T)`$ are approximated as in Equation (2), which has commonly been used in the recent statistical NLP research \[Church and Mercer, 1993\].
$$P(S|T)\simeq \prod _{i=1}^nP(s_i|t_i),\qquad P(T)\simeq \prod _{i=1}^{n-1}P(t_{i+1}|t_i)$$
(2)
We produced our own dictionary, because conventional dictionaries are comprised primarily of general words and verbose definitions aimed at human readers. We extracted 59,533 English/Japanese translations consisting of two base words from the EDR technical terminology dictionary, which contains about 120,000 translations related to the information processing field \[Japan Electronic Dictionary Research Institute, 1995\], and segmented Japanese entries into two parts<sup>3</sup><sup>3</sup>3The number of base words can easily be identified based on English words, while Japanese compound words lack lexical segmentation.. For this purpose, simple heuristic rules based mainly on Japanese character types (i.e., kanji, katakana, hiragana, alphabets and other characters like numerals) were used. Given the set of compound words where Japanese entries are segmented, we aligned English and Japanese base words on a word-by-word basis, maintaining the word order between English and Japanese, to produce a Japanese-English/English-Japanese base word dictionary. As a result, we extracted 24,439 Japanese base words and 7,910 English base words from the EDR dictionary. During the dictionary production, we also counted the collocational frequency for each combination of $`s_i`$ and $`t_i`$, in order to estimate $`P(s_i|t_i)`$. Note that in the case where $`s_i`$ is transliterated into $`t_i`$, we use an arbitrarily predefined value for $`P(s_i|t_i)`$. For the estimation of $`P(t_{i+1}|t_i)`$, we use the word-based bi-gram statistics obtained from target language corpora, i.e., “documents” in the collection (see Figure 1).
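The selection rule can be illustrated with a Viterbi-style dynamic program over the lattice of candidate base-word translations, maximizing the product in Equations (1) and (2). In the sketch below the lexicon and probability tables are toy stand-ins, not values from our dictionary:

```python
import math

def translate_compound(source, lexicon, p_s_given_t, p_bigram):
    """Pick the target base-word sequence T maximizing
    prod_i P(s_i|t_i) * prod_i P(t_{i+1}|t_i), cf. Eqs. (1)-(2)."""
    best = {t: math.log(p_s_given_t[(source[0], t)])
            for t in lexicon[source[0]]}
    backptrs = []
    for s in source[1:]:
        new_best, ptr = {}, {}
        for t in lexicon[s]:
            emit = math.log(p_s_given_t[(s, t)])
            prev, score = max(
                ((t0, sc + math.log(p_bigram.get((t0, t), 1e-9)) + emit)
                 for t0, sc in best.items()),
                key=lambda x: x[1])
            new_best[t], ptr[t] = score, prev
        best = new_best
        backptrs.append(ptr)
    t = max(best, key=best.get)          # trace back the best path
    path = [t]
    for ptr in reversed(backptrs):
        t = ptr[t]
        path.append(t)
    return path[::-1]

lexicon = {"re-ji-su-ta": ["register", "resistor", "resister"],
           "tensou": ["transfer"], "gengo": ["language"]}
p_s_t = {("re-ji-su-ta", "register"): 0.4,
         ("re-ji-su-ta", "resistor"): 0.4,
         ("re-ji-su-ta", "resister"): 0.2,
         ("tensou", "transfer"): 1.0, ("gengo", "language"): 1.0}
p_bi = {("register", "transfer"): 0.01, ("transfer", "language"): 0.02}
print(translate_compound(["re-ji-su-ta", "tensou", "gengo"],
                         lexicon, p_s_t, p_bi))
# -> ['register', 'transfer', 'language']
```

Here the bigram statistics act as the context that discards “resistor” and “resister”, exactly the effect described in Section 3.3.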
### 3.3 Transliteration
Figure 2 shows example correspondences between English and (romanized) katakana words, where we insert hyphens between each katakana character for enhanced readability. The basis of our transliteration method is analogous to that for compound word translation described in Section 3.2. The formula for the source word and one transliteration candidate are represented as below.
$`S`$ $`=`$ $`s_1,s_2,\dots ,s_n`$
$`T`$ $`=`$ $`t_1,t_2,\dots ,t_n`$
However, unlike the case of compound word translation, $`s_i`$ and $`t_i`$ denote $`i`$-th “symbols” (which consist of one or more letters), respectively. Note that we consider only such $`T`$’s that are indexed in the inverted file, because our transliteration method often outputs a number of incorrect words with great probabilities. Then, we compute $`P(T|S)`$ for each $`T`$ using Equations (1) and (2) (see Section 3.2), and select $`k`$-best candidates with greater probabilities. The crucial content here is the way to produce a bilingual dictionary for symbols. For this purpose, we used approximately 3,000 katakana entries and their English translations listed in our base word dictionary. To illustrate our dictionary production method, we consider Figure 2 again. Looking at this figure, one may notice that the first letter in each katakana character tends to be contained in its corresponding English word. However, there are a few exceptions. A typical case is that since Japanese has no distinction between “L” and “R” sounds, the two English sounds collapse into the same Japanese sound. In addition, a single English letter corresponds to multiple katakana characters, such as “x” to “ki-su” in $`<`$text, te-ki-su-to$`>`$. To sum up, English and romanized katakana words are not exactly identical, but similar to each other.
We first manually define the similarity between the English letter $`e`$ and the first romanized letter for each katakana character $`j`$, as shown in Table 1. In this table, “phonetically similar” letters refer to a certain pair of letters, such as “L” and “R”<sup>4</sup><sup>4</sup>4We identified approximately twenty pairs of phonetically similar letters.. We then consider the similarity for any possible combination of letters in English and romanized katakana words, which can be represented as a matrix, as shown in Figure 3. This figure shows the similarity between letters in $`<`$text, te-ki-su-to$`>`$. We put a dummy letter “$”, which has a positive similarity only to itself, at the end of both English and katakana words. One may notice that matching plausible symbols can be seen as finding the path which maximizes the total similarity from the first to last letters. The best path can easily be found by, for example, Dijkstra’s algorithm \[Dijkstra, 1959\]. From Figure 3, we can derive the following correspondences: $`<`$te, te$`>`$, $`<`$x, ki-su$`>`$ and $`<`$t, to$`>`$. The resultant correspondences contain 944 Japanese and 790 English symbol types, from which we also estimated $`P(s_i|t_i)`$ and $`P(t_{i+1}|t_i)`$.
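The alignment step can be sketched with a dynamic program that is equivalent in spirit to the shortest-path search just described; in the following illustration the similarity values are invented stand-ins for Table 1, which is not reproduced here:

```python
import numpy as np

def align_letters(eng, kana):
    """Best monotone path through the letter-similarity matrix between an
    English word and its romanized katakana form (hyphens removed).
    Plain dynamic programming replaces the Dijkstra search of the paper."""
    similar = ({"l", "r"}, {"b", "v"}, {"c", "k"}, {"c", "s"})
    def sim(e, k):
        if e == k:
            return 2.0                              # identical letters
        return 1.0 if {e, k} in similar else -2.0   # toy values
    n, m = len(eng), len(kana)
    score = np.full((n + 1, m + 1), -np.inf)
    score[0, 0] = 0.0
    move = {}
    for i in range(n + 1):
        for j in range(m + 1):
            steps = [(1, 1, sim(eng[i - 1], kana[j - 1])
                      if i and j else -np.inf),
                     (1, 0, -0.5), (0, 1, -0.5)]    # match / skip / skip
            for di, dj, gain in steps:
                pi, pj = i - di, j - dj
                if pi >= 0 and pj >= 0 and score[pi, pj] + gain > score[i, j]:
                    score[i, j] = score[pi, pj] + gain
                    move[(i, j)] = (pi, pj)
    pairs, ij = [], (n, m)
    while ij in move:                               # trace back matches
        pi, pj = move[ij]
        if ij == (pi + 1, pj + 1):
            pairs.append((eng[pi], kana[pj]))
        ij = (pi, pj)
    return pairs[::-1]

print(align_letters("text", "tekisuto"))
# -> [('t', 't'), ('e', 'e'), ('t', 't')]; the kana letters "kisu" left
# unmatched between the anchors yield the correspondence <x, ki-su>.
```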
As can be predicted, a preliminary experiment showed that our transliteration method is not accurate when compared with a word-based translation. For example, the Japanese word “re-ji-su-ta (register)” is transliterated to “resister”, “resistor” and “register”, in descending order of probability. However, combined with the compound word translation, irrelevant transliteration outputs are expected to be discarded. For example, a compound word like “re-ji-su-ta tensou gengo (register transfer language)” is successfully translated, given a set of base words “tensou (transfer)” and “gengo (language)” as a context.
## 4 Evaluation
This section investigates the performance of our CLIR system based on the TREC-type evaluation methodology: the system outputs the top 1,000 documents, and TREC evaluation software is used to calculate the recall-precision trade-off and 11-point average precision.
For the purpose of our evaluation, we used the NACSIS test collection \[Kando et al., 1998\]. This collection consists of 21 Japanese queries and approximately 330,000 documents (in either a combination of English and Japanese or either of the languages individually), collected from technical papers published by 65 Japanese associations for various fields. Each document consists of the document ID, title, name(s) of author(s), name/date of conference, hosting organization, abstract and keywords, from which titles, abstracts and keywords were used for our evaluation. We used as target documents approximately 187,000 entries where abstracts are in both English and Japanese. Each query consists of the title of the topic, description, narrative and list of synonyms, from which we used only the description. Roughly speaking, most topics are related to electronic, information and control engineering. Figure 4 shows example descriptions (translated into English by one of the authors). Relevance assessment was performed based on one of the three ranks of relevance, i.e., “relevant”, “partially relevant” and “irrelevant”. In our evaluation, relevant documents refer to both “relevant” and “partially relevant” documents<sup>5</sup><sup>5</sup>5The result did not significantly change depending on whether we regarded “partially relevant” as relevant or not..
### 4.1 Evaluation of compound word translation
We compared the following query translation methods:
1. a control, in which all possible translations derived from the (original) EDR technical terminology dictionary are used as query terms (“EDR”),
2. all possible base word translations derived from our dictionary are used (“all”),
3. randomly selected $`k`$ translations derived from our bilingual dictionary are used (“random”),
4. $`k`$-best translations through compound word translation are used (“CWT”).
For system “EDR”, compound words unlisted in the EDR dictionary were manually segmented so that substrings (shorter compound words or base words) can be translated. For both systems “random” and “CWT”, we arbitrarily set $`k=3`$. Figure 5 and Table 2 show the recall-precision curve and 11-point average precision for each method, respectively. In these, “J-J” refers to the result obtained by the Japanese-Japanese IR system, which uses as documents Japanese titles/abstracts/keywords comparable to English fields in the NACSIS collection. This can be seen as the upper bound for CLIR performance<sup>6</sup><sup>6</sup>6Regrettably, since the NACSIS collection does not contain English queries, we cannot estimate the upper bound performance by English-English IR.. Looking at these results, we can conclude that the dictionary production and probabilistic translation methods we proposed are effective for CLIR.
### 4.2 Evaluation of transliteration
In the NACSIS collection, three queries contain katakana (base) words unlisted in our bilingual dictionary. Those words are “ma-i-ni-n-gu (mining)” and “ko-ro-ke-i-sho-n (collocation)”. However, to emphasize the effectiveness of transliteration, we compared the following extreme cases:
1. a control, in which every katakana word is discarded from queries (“control”),
2. a case where transliteration is applied to every katakana word and top 10 candidates are used (“translit”).
Both cases use system “CWT” in Section 4.1. In the case of “translit”, we do not use katakana entries listed in the base word dictionary. Figure 6 and Table 3 show the recall-precision curve and 11-point average precision for each case, respectively. In these, results for “CWT” correspond to those in Figure 5 and Table 2, respectively. We can conclude that our transliteration method significantly improves the baseline performance (i.e., “control”), and is comparable to word-based translation in terms of CLIR performance.
An interesting observation is that the use of transliteration is robust against typos in documents, because a number of similar strings are used as query terms. For example, our transliteration method produced the following strings for “ri-da-ku-sho-n (reduction)”:
> riduction, redction, redaction, reduction.
All of these words are effective for retrieval, because they are contained in the target documents.
### 4.3 Evaluation of the overall performance
We compared our system (“CWT+translit”) with the Japanese-Japanese IR system, where (unlike the evaluation in Section 4.2) transliteration was applied only to “ma-i-ni-n-gu (mining)” and “ko-ro-ke-i-sho-n (collocation)”. Figure 7 and Table 4 show the recall-precision curve and 11-point average precision for each system, respectively, from which one can see that our CLIR system is quite comparable with the monolingual IR system in performance. In addition, from Figures 5 to 7, one can see that the monolingual system generally performs better at lower recall while the CLIR system performs better at higher recall.
For further investigation, let us discuss similar experimental results reported by Kando and Aizawa \[Kando and Aizawa, 1998\], where a bilingual dictionary produced from Japanese/English keyword pairs in the NACSIS documents is used for query translation. Their evaluation method is almost the same as performed in our experiments. One difference is that they use the “OpenText” search engine<sup>7</sup><sup>7</sup>7Developed by OpenText Corp., and thus the performance for Japanese-Japanese IR is higher than obtained in our evaluation. However, the performance of their Japanese-English CLIR systems, which is roughly 50-60% of that for their Japanese-Japanese IR system, is comparable with our CLIR system performance. It is expected that using a more sophisticated search engine, our CLIR system will achieve a higher performance than that obtained by Kando and Aizawa.
## 5 Conclusion
In this paper, we proposed a Japanese/English cross-language information retrieval system, targeting technical documents. We combined a query translation module, which performs compound word translation and transliteration, with an existing monolingual retrieval method. Our experimental results showed that compound word translation and transliteration methods individually improve on the baseline performance, and when used together the improvement is even greater. Future work will include the application of automatic word alignment methods \[Fung, 1995, Smadja et al., 1996\] to enhance the dictionary.
## Acknowledgments
The authors would like to thank Noriko Kando (National Center for Science Information Systems, Japan) for her support with the NACSIS collection.
# Non-perturbative states in the three-dimensional ϕ⁴ theory
## 1 DUALITY AND UNIVERSALITY
The three-dimensional Ising model is related to two other important three-dimensional theories: by duality, to the $`\text{Z}\text{Z}_2`$ gauge model, and by universality, to $`\varphi ^4`$ theory.
Duality is an exact equality of partition functions: it means that the Ising model and the $`\text{Z}\text{Z}_2`$ gauge model are different descriptions of the same physics. In particular, the broken symmetry phase of the Ising model is equivalent to the confined phase of the gauge model.
Universality tells us that the Ising model and $`\varphi ^4`$ theory behave in the same way when approaching the critical point. Universal quantities are the same in a critical region around the transition point. Indeed, the universal quantities of the Ising universality class have been predicted to great accuracy using perturbative $`\varphi ^4`$ theory (see e.g. Ref. ).
Let us apply the tools of universality and duality to the problem of determining the spectrum of massive excitations of the $`3D`$ Ising model in the broken symmetry phase. The glueball spectrum in the $`Z_2`$ gauge model has been thoroughly studied numerically: it turns out to be a rich spectrum with many excitations in various angular momentum channels. On the other hand, we certainly do not expect to find an interesting spectrum in perturbative $`\varphi ^4`$ theory, which describes just one particle.
Therefore it seems that duality and universality lead to contradictory expectations about the spectrum of the Ising model and $`\varphi ^4`$ theory in the broken phase. This work clarifies these issues by a numerical evaluation of the spectrum of both models, performed with a new variational procedure.
## 2 MONTE CARLO DETERMINATION OF THE SPECTRUM
The spectrum of a model is extracted from Monte Carlo simulations by studying the long distance exponential decay of correlation functions. It is convenient to study time slice observables: for example if $`\varphi `$ is the order parameter one defines
$$S(t)=\frac{1}{L^2}\sum _{x,y}\varphi (t,x,y)$$
(1)
where $`L`$ is the lattice size in the $`x`$ and $`y`$ directions. One then studies connected correlators of the $`S`$ operator, the advantage being that they behave as a pure exponential in the long distance limit:
$$\langle S(0)S(t)\rangle _c\simeq ce^{-m|t|}.$$
(2)
For a theory with a non-trivial spectrum one expects to see subleading exponentials as well:
$$\langle S(0)S(t)\rangle _c\simeq c_1e^{-m_1|t|}+c_2e^{-m_2|t|}+\dots $$
(3)
Therefore a procedure to determine the spectrum from Monte Carlo simulation is to measure time slice correlations and fit them to Eq. (3).
However there is a more effective method, inspired by what is commonly done in lattice gauge theory to determine the glueball spectrum . One introduces a basis of suitably defined (time slice) operators $`\widehat{O}_i(t)`$ and then computes the matrix of cross-correlators
$$C_{ij}(t)=\langle \widehat{O}_i(0)\widehat{O}_j(t)\rangle _c$$
(4)
$`C_{ij}(t)`$ is then diagonalized to read off the spectrum. The crucial point is of course the choice of the operator basis, which must be carefully fine-tuned to obtain an efficient determination of the spectrum. Our choice is described in detail in Ref. . Here we just mention that we included the standard magnetization Eq. (1) in the basis $`\{\widehat{O}_i\}`$ and that the other operators are defined on different length scales.
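A standard way to carry out this diagonalization in practice is the generalized eigenvalue method, in which one solves $`C(t)v=\lambda C(t_0)v`$ so that $`\lambda _k\simeq e^{-m_k(t-t_0)}`$. The sketch below (our illustration on synthetic data, assuming `numpy`/`scipy`; the exact procedure used in the simulations is described in Ref. ) extracts two masses from a two-operator correlator matrix:

```python
import numpy as np
from scipy.linalg import eigh

def masses_from_correlators(C, t0, t):
    """Solve the generalized eigenvalue problem C(t) v = lam C(t0) v;
    each eigenvalue behaves as exp(-m_k (t - t0))."""
    lam = np.sort(eigh(C[t], C[t0], eigvals_only=True))[::-1]
    return -np.log(lam) / (t - t0)

# Synthetic cross-correlators for two operators coupling to two states
# with masses 0.5 and 0.9 (overlap matrix V chosen arbitrarily).
m = np.array([0.5, 0.9])
V = np.array([[1.0, 0.4],
              [0.3, 1.0]])
C = {t: (V * np.exp(-m * t)) @ V.T for t in (1, 2)}
print(masses_from_correlators(C, t0=1, t=2))  # -> [0.5 0.9]
```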
We simulated both the Ising model and the lattice regularized $`\varphi ^4`$ theory in the broken symmetry phase, at various temperatures well inside the scaling region where universality is expected to hold. We considered the $`0^+`$ channel only. It turns out that three states can be identified in this channel. Scaling is perfectly satisfied since the ratios between the masses of the three states do not change with the temperature within the scaling region. Universality is satisfied as well since we obtain compatible results for the ratios from the Ising and the $`\varphi ^4`$ simulations. Therefore we can quote a single result for each mass ratio:
$`{\displaystyle \frac{m_2}{m_1}}`$ $`=`$ $`1.83(3)`$ (5)
$`{\displaystyle \frac{m_3}{m_1}}`$ $`=`$ $`2.45(10)`$ (6)
Also duality is satisfied since in the $`\text{Z}\text{Z}_2`$ gauge model one obtains mass ratios of $`1.88(2)`$ and $`2.59(4)`$ in the $`0^+`$ channel of the glueball spectrum .
Note that the first excited state lies below the pair production threshold: this means that, in terms of continuum $`\varphi ^4`$ theory, it cannot be of perturbative origin. On the other hand the state at $`2.45`$ times the fundamental mass could well be a signature of the cut in the Fourier transform of the propagator induced by self interaction effects, and therefore a perturbative effect.
Let us analyze in more detail what we expect from perturbative $`\varphi ^4`$ theory in this respect: a simple one-loop computation (see Ref. ) shows that the cut in the momentum space propagator implies for the time slice correlators the behavior
$$\langle S(0)S(t)\rangle _c\simeq c_1e^{-m_1|t|}+c_2\frac{e^{-2m_1|t|}}{t}$$
(7)
However the second term can be shown to be numerically indistinguishable from an exponential decay with mass $`2.4m_1`$. Therefore we conclude that while the state $`m_3`$ could be explained as a perturbative self-interaction effect, the state $`m_2`$ is certainly of non-perturbative origin.
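This near-degeneracy is easy to check numerically: the effective mass of the cut term drifts only slowly over a typical fit window. A minimal check (our illustration, with an arbitrary normalization and time window):

```python
import numpy as np

m1 = 1.0
t = np.arange(2.0, 8.0)
f = np.exp(-2.0 * m1 * t) / t      # the one-loop cut contribution
m_eff = np.log(f[:-1] / f[1:])     # local decay rate between t and t+1
print(m_eff.round(3))
# [2.405 2.288 2.223 2.182 2.154]: a slowly drifting value around
# 2.2-2.4 m1, hard to distinguish from a single exponential in noisy data.
```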
## 3 BACK TO THE SPIN-SPIN CORRELATOR
In this section we apply the knowledge we have gained of the spectrum of the theory to the analysis of the (time slice) spin-spin correlator
$$G(t)=\langle S(0)S(t)\rangle _c$$
(8)
It is useful to define the effective correlation length
$$1/\xi _{eff}(t)=\mathrm{log}G(t)-\mathrm{log}G(t+1)$$
(9)
so that for $`t\to \mathrm{\infty }`$ one has $`\xi _{eff}\to 1/m`$, $`m`$ being the fundamental mass. Clearly if $`G(t)`$ had a purely exponential behavior then $`\xi _{eff}(t)`$ would be constant: the preasymptotic behavior of $`\xi _{eff}(t)`$ depends on higher mass states and/or interaction effects. These effects are shown in Fig. 1, where $`m\xi _{eff}`$ is shown for various values of the temperature in the Ising model. The data from the various temperatures are perfectly compatible with each other, signaling that the preasymptotic behavior of $`\xi _{eff}(t)`$ is a physical, scaling effect and not a lattice artifact.
In Fig. 2 we use our knowledge of the spectrum to describe the behavior of $`\xi _{eff}(t)`$: the Monte Carlo data are compared to the effective correlation length computed from the perturbative prediction
$$G(t)=c_1\left[e^{-m_1t}+f_{cut}(t)\right]$$
(10)
(dotted line), and to the curve
$$G(t)=c_1\left[e^{-m_1t}+f_{cut}(t)\right]+c_2e^{-m_2t}$$
(11)
(solid line) where the constants $`m_1`$, $`m_2`$, $`c_1`$, $`c_2`$ are taken from our variational evaluation of the spectrum, and $`f_{cut}`$ is taken from one-loop perturbative calculations in the continuum theory (see Ref. ). The good agreement between this curve and the data suggests that in fact the third mass $`m_3`$ is not a new state but a perturbative interaction effect.
## 4 CONCLUSIONS
The main result of our analysis is that $`3D`$ $`\varphi ^4`$ theory has a rich spectrum of massive excitations that signals the existence of non-perturbative physics. This spectrum matches accurately the corresponding spectrum of the $`3D`$ Ising model, to which $`\varphi ^4`$ is related by universality, and the glueball spectrum of the $`3D`$ $`\text{Z}\text{Z}_2`$ gauge model, related by duality to the Ising model.
We are currently investigating higher spin excited states, corresponding to higher spin glueballs, and the effect of non-perturbative physics on the field theoretic prediction of universal quantities. Another interesting development would be to investigate the same issues in other $`3D`$ universality classes, in particular in $`N`$-component $`\varphi ^4`$ theory.
# Off-equilibrium dynamics in a singular diffusion model
## Abstract
We introduce a schematic non-linear diffusion model where density fluctuations induce a rich out of equilibrium dynamics. The properties of the model are studied by numerical simulations and analytically in a mean field approximation. At low temperatures and high densities we find a long off-equilibrium glassy region, where the system evolves out of an initially pinned state showing aging and a slow decay of the autocorrelation as an enhanced power law, along with strong spatial heterogeneities and violation of the fluctuation dissipation theorem.
As fluids are supercooled below the melting temperature their structural relaxation becomes very slow and may result in a glass transition characterized by a dramatic increase of the viscosity. Under these conditions the dynamical evolution of glass-forming liquids shows a markedly out of equilibrium behavior . In the main approach to the glassy dynamics, the Mode Coupling theory , the dynamical equations are solved by resumming a non trivial set of diagrams. This theory predicts an equilibrium relaxation time $`\tau _0`$ which diverges as a power law at a dynamical transition. Today it is well established that the quoted theory applies in a region located well before the point of structural arrest, where the relaxation time is found experimentally to diverge according to the Vogel-Fulcher law
$$\tau _0\sim \mathrm{exp}[v(\rho _c-\overline{\rho })^{-1}]$$
(1)
Therefore, from the theoretical point of view, we lack information on the kinetics close to the dynamical transition, where standard theories do not apply. In this paper we introduce a schematic diffusion equation with a phenomenologically chosen mobility which reproduces the equilibrium relaxation time (1) observed in this region. We then study the consequences of such an assumption on the out of equilibrium dynamics. Due to its relative simplicity the model is amenable to both numerical and analytical investigations, allowing a complete description of its features. Despite the simplicity of the equilibrium properties, the out of equilibrium behavior is complex and in qualitative agreement with the known properties of real systems. Our analytical results could in principle be tested quantitatively in experimental data and in molecular dynamics simulations.
Glassy systems are usually schematized as composed of particles rattling inside “cages” of typical size $`a`$ formed by the neighbors. Although diffusion is unimpeded inside the cells, motion over larger distances is strongly suppressed at high molecular densities because a global rearrangement of many particles is required. This glassy behavior is known to be reproduced also in the simple case of hard spheres . For suitable coarse graining of space and time scales, the dynamics of supercooled liquids is Brownian in its microscopic origin. Therefore we consider a diffusion equation for the variable $`\rho (\vec{r},t)`$ which represents the coarse grained particle density over distances of order $`a`$
$$\frac{\partial \rho (\vec{r},t)}{\partial t}=\vec{\nabla }\cdot \left[M(\rho )\vec{\nabla }\frac{\delta F\{\rho \}}{\delta \rho }\right]+\eta (\vec{r},t)$$
(2)
In Eq. (2) $`F[\rho ]=\int d\vec{r}\left[\rho \mathrm{ln}\rho +(1-\rho )\mathrm{ln}(1-\rho )\right]`$ is the entropy of the lattice gas-like model , which we consider for the sake of simplicity. More realistic forms of $`F[\rho ]`$, which take into account the attractive interactions between particles, can also be considered; here we find, however, that the qualitative features of the model do not change . $`M(\rho )`$ is a mobility, specified below, which is supposed to capture the main features of the constrained cooperative dynamics of a dense fluid. $`\eta (\vec{r},t)`$ is a Gaussianly distributed random field, representing the thermal noise, whose expectations are given by $`\langle \eta (\vec{r},t)\rangle =0`$ and $`\langle \eta (\vec{r},t)\eta (\vec{r}^{\prime },t^{\prime })\rangle =-2T\vec{\nabla }\cdot \left\{M(\rho )\vec{\nabla }\left[\delta (\vec{r}-\vec{r}^{\prime })\delta (t-t^{\prime })\right]\right\}`$, where $`\langle \dots \rangle `$ denotes an ensemble average and $`T`$ is the temperature in units of the Boltzmann constant $`k_B`$.
Now we specify $`M(\rho )`$. As shown below, from Eq. (2) the characteristic equilibrium relaxation time $`\tau _0`$ behaves as $`M^{-1}(\overline{\rho })`$, $`\overline{\rho }`$ being the average density, for low $`T`$. Then, in order to reproduce the behavior (1) of $`\tau _0`$, we assume a local mobility of the form
$$M(\rho )=e^{v[\rho (\vec{r},t)-1]^{-1}}$$
(3)
where a rescaled density, so that $`\rho _c=1`$, has been considered. Due to its generality, Eq. (2) is also suited for the description of different physical systems where a constrained cooperative dynamics is believed to play a fundamental role, such as granular materials . We have studied Eq. (2) for a temperature quench, by simulations and in a mean field approach. We sketch our numerical results before entering a detailed mean field analysis.
Eq. (2) has been simulated by a standard first order Euler discretization scheme on a $`128`$x$`128`$ two-dimensional square lattice with periodic boundary conditions, starting from an uncorrelated high temperature initial state. The system is quenched to a very low temperature (results will be presented for $`T=10^{-4}`$, but similar behaviors are found for different temperatures).
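A minimal sketch of one such update step is given below, assuming a link-based finite-difference scheme with conserved noise; it is an illustration only, and the discretization details of the actual simulation may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_step(rho, T=1e-4, v=1.0, dt=1e-3):
    """One explicit Euler update of Eq. (2) on a periodic square lattice:
    a flux J = M * grad(mu) + noise lives on each link, and rho changes
    by dt times the discrete divergence of J, conserving total density."""
    mu = np.log(rho) - np.log(1.0 - rho)               # delta F / delta rho
    out = rho.copy()
    for axis in (0, 1):
        grad_mu = np.roll(mu, -1, axis) - mu           # forward difference
        rho_link = 0.5 * (rho + np.roll(rho, -1, axis))
        M = np.exp(v / (rho_link - 1.0))               # Eq. (3), rho < 1
        noise = np.sqrt(2.0 * T * M / dt) * rng.standard_normal(rho.shape)
        J = M * grad_mu + noise
        out += dt * (J - np.roll(J, 1, axis))          # divergence of J
    return np.clip(out, 1e-6, 1.0 - 1e-6)

rho = np.clip(0.9 + 0.02 * rng.standard_normal((128, 128)), 0.05, 0.98)
for _ in range(1000):
    rho = euler_step(rho)
print(rho.mean(), rho.var())  # mean conserved; fluctuations relax slowly
```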
The decay of the average density fluctuations $`S^2(t)=\langle (\rho -\overline{\rho })^2\rangle `$ is plotted in Fig. $`1^b`$. For low densities $`\overline{\rho }`$ a normal liquid region is observed; $`S^2(t)`$ very quickly decays to the constant value characteristic of the equilibrium state. By raising the density, one enters a different region. Here the vanishing of $`M`$ slows the dynamics: this produces a long-lasting off-equilibrium glassy behavior before the equilibrium state is reached. The behavior of $`S^2(t)`$ shows that for large densities the dynamics can be divided into three regimes. Initially, for $`t`$ smaller than a characteristic time $`\tau _p`$, $`S^2(t)`$ remains constant. This is the first regime. An analysis of the system configuration $`\rho (\vec{r},t)`$ in this time domain does not show any appreciable evolution: the system is pinned in the initial configuration. Then, for $`t>\tau _p`$ a second regime is entered and less dense regions equilibrate whereas high density zones are still practically frozen. This is characterized by the decrease of $`S^2(t)`$. In this regime, which will be referred to as slow evolution, one observes pronounced correlated spatial heterogeneities in the system. This spatial pattern is outlined by a slow decay, as a function of $`k`$, of the structure factor $`C(\vec{k},t)=\langle \rho (\vec{k},t)\rho (-\vec{k},t)\rangle `$, which is consistent with a stretched exponential fit (see Fig. 2) $`C(\vec{k},t)\sim 𝒞\mathrm{exp}\{-[l(t)k]^{2\mu }\}`$ with $`\mu \simeq 1/6`$ (at variance with the Gaussian decay of standard diffusion), similarly to some experimental observations . Finally the system enters the equilibrium state characterized by a constant value of $`S^2(t)`$. This whole pattern is reflected by the behavior of the particle mean square displacement $`R^2(t)`$, shown in Fig. $`1^a`$, calculated through
$$R^2(t)=\int _0^tD(t^{\prime })dt^{\prime }$$
(4)
where $`D(t)=\langle M(\rho )\rangle `$ is the average mobility . For low densities $`R^2(t)\sim t`$ in the whole time domain, as expected for simple diffusion. As the density is increased toward the limiting value $`\overline{\rho }=1`$ three regimes can again be distinguished. After an initial linear increase (regime 1) a progressively more pronounced inflection is observed in an intermediate time domain (regime 2) whose duration is enhanced as $`\overline{\rho }`$ is increased. The same pattern is observed in both spin-glass like lattice gas and Lennard-Jones molecular dynamics simulation . Asymptotically, in equilibrium, $`R^2(t)\sim t`$, as for simple diffusion.
In order to get analytical results we now introduce an approximation on Eq. (2) by first expanding the logarithm on the r.h.s. of Eq. (2) to lowest order and then by replacing the mobility $`M(\rho )`$ with the effective diffusivity $`D(\rho )=\langle M(\rho )\rangle `$. Since average quantities do not depend on the position, due to space homogeneity, one has $`D(\rho )=D(t)`$. Eq. (2) then becomes
$$\frac{\partial \rho (\vec{r},t)}{\partial t}=D(t)\nabla ^2\rho (\vec{r},t)+\eta (\vec{r},t)$$
(5)
where the rescaling $`t\to t/[\overline{\rho }(1-\overline{\rho })]`$ and $`T\to \stackrel{~}{T}=T\overline{\rho }(1-\overline{\rho })`$ has been performed, and $`\langle \eta (\vec{r},t)\eta (\vec{r}^{\prime },t^{\prime })\rangle =-2\stackrel{~}{T}D(t)\nabla ^2\left[\delta (\vec{r}-\vec{r}^{\prime })\delta (t-t^{\prime })\right]`$. Transforming Eq. (5) into reciprocal space, one obtains the following formal solution for the two time correlator $`C(\vec{k},t^{\prime },t)=\langle \rho (\vec{k},t^{\prime })\rho (-\vec{k},t)\rangle `$, ($`t\geq t^{\prime }`$)
$`C(\vec{k},t^{\prime },t)`$ $`=`$ $`e^{-[R^2(t^{\prime })+R^2(t)]k^2}`$ (7)
$`\times \left\{C(\vec{k},0,0)+\stackrel{~}{T}\left[e^{2R^2(t^{\prime })k^2}-1\right]\right\}`$
The whole problem is now reduced to the knowledge of $`R(t)`$ which must be calculated self-consistently enforcing Eq. (4), where $`D(t)`$ is given by $`D(t)=<M(\rho )>=_0^1M(\rho )P(\rho )𝑑\rho `$. Here $`P(\rho )`$ is the probability distribution of the density field that, for Eq. (5) can be shown to be Gaussian at all times. Then we have
$$D(t)=[2\pi S^2(t)]^{-1/2}\int _0^1M(\rho )e^{-(\rho -\overline{\rho })^2/[2S^2(t)]}d\rho $$
(8)
The quantity $`S(t)`$ can be computed as $`S^2(t)=(2\pi )^{-d}\int _{k<\mathrm{\Lambda }}C(\vec{k},t,t)d\vec{k}`$ where $`\mathrm{\Lambda }`$ is a momentum cutoff of order $`a^{-1}`$. From Eq. (7), $`S(t)`$ is a function of $`R(t)`$:
$$S^2(t)=h[S^2(0)-q\stackrel{~}{T}]R^{-d}(t)\mathrm{\Phi }_d[\sqrt{2}\mathrm{\Lambda }R(t)]+q\stackrel{~}{T}$$
(9)
where $`\mathrm{\Phi }_d[x]=\int _0^xy^{d-1}\mathrm{exp}(-y^2)dy`$, $`q=(\mathrm{\Sigma }_d/d)[\mathrm{\Lambda }/(2\pi )]^d`$, $`h=[d/(\mathrm{\Lambda }\sqrt{2})^d]`$ and $`\mathrm{\Sigma }_d`$ is the surface of the $`d`$-dimensional unitary hypersphere. Notice that the asymptotic value of the density fluctuations $`S^2(\mathrm{\infty })=qT\overline{\rho }(1-\overline{\rho })`$ vanishes at the point of structural arrest.
Eqs. (4, 8, 9) are a closed set of equations that can be studied analytically. In the present approximation the non linearity of $`M(\rho )`$ is accounted for by a self-consistency prescription for the calculation of $`D(t)`$. Similar approximation techniques are well developed and widely used in several fields of statistical physics, producing reliable results . From Eq. (7) the normalized correlator $`\stackrel{~}{C}(\vec{k},t^{\prime },t)=C(\vec{k},t^{\prime },t)/C(\vec{k},t^{\prime },t^{\prime })`$ is given by
$$\stackrel{~}{C}(\vec{k},t^{\prime },t)=e^{-[R^2(t)-R^2(t^{\prime })]k^2}$$
(10)
showing that a scaling form $`\stackrel{~}{C}(\vec{k},t^{\prime },t)=𝒮[\varphi (t)/\varphi (t^{\prime })]`$, with $`\varphi (t)=\mathrm{exp}\{-R^2(t)k^2\}`$, is obeyed as suggested by a scaling approach to dynamical processes .
An important issue to understand the off-equilibrium dynamics is the relation between the response function to a small perturbing field $`h_{\vec{k}}`$, $`\chi _{\vec{k}}(t^{\prime },t)\equiv \int _{t^{\prime }}^td\tau \frac{\delta \langle \rho _{\vec{k}}(t)\rangle }{\delta h_{\vec{k}}(\tau )}`$, and the correlation function in the unperturbed situation, $`C(\vec{k},t^{\prime },t)`$. In equilibrium systems, where the fluctuation-dissipation theorem holds, the quantity $`X\equiv \stackrel{~}{T}\chi _{\vec{k}}(t^{\prime },t)/C(\vec{k},t^{\prime },t)`$ is equal to one. Out of equilibrium this relation is violated, and, generally, $`X`$ is a function of $`t^{\prime }`$ and $`t`$: $`X=X(t^{\prime },t)`$ . In the present mean-field approximation, one may interestingly show that the generalized “fluctuation-dissipation” ratio (FDR) $`X`$ is a function of the sole $`t^{\prime }`$: $`X(t^{\prime })=\stackrel{~}{T}\{[C(\vec{k},0,0)-\stackrel{~}{T}]\mathrm{exp}(-2k^2R^2(t^{\prime }))+\stackrel{~}{T}\}^{-1}`$. Only if $`t^{\prime }\to \mathrm{\infty }`$, the usual version of the FDR with $`X=1`$ is recovered (notice that $`X\leq 1`$).
In the following we will report the main results of the mean field analysis referring to a longer publication for all the details. From the solution of the model one sees that if the density $`\overline{\rho }`$ is small or the temperature $`\stackrel{~}{T}`$ is high one immediately enters the asymptotic stationary state that will be described later on. For high densities and low temperatures, on the other hand, the evolution remains markedly far from equilibrium for a long period and three dynamical regimes are found corresponding to different behaviors of $`R(t)`$ (see Figs. $`1`$), as discussed below.
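The closed set (4, 8, 9) is also straightforward to integrate numerically. The sketch below (our illustration for $`d=2`$, with illustrative parameter values rather than those used in the figures) marches $`R^2`$ forward in time with the self-consistent $`D(t)`$ and displays the crossover between the regimes discussed below:

```python
import numpy as np
from scipy.integrate import quad

v, T, rho_bar, Lam, d = 1.0, 1e-4, 0.9, np.pi, 2
Tt = T * rho_bar * (1.0 - rho_bar)                 # rescaled temperature
q = (2.0 * np.pi / d) * (Lam / (2.0 * np.pi))**d   # Sigma_2 = 2*pi
h = d / (np.sqrt(2.0) * Lam)**d
S2_0 = rho_bar * (1.0 - rho_bar)                   # uncorrelated start

def Phi(x):            # Phi_d for d = 2: int_0^x y exp(-y^2) dy
    return 0.5 * (1.0 - np.exp(-x * x))

def S2(R):             # Eq. (9)
    if R == 0.0:
        return S2_0
    return h * (S2_0 - q * Tt) * R**(-d) * Phi(np.sqrt(2.0) * Lam * R) + q * Tt

def D(s2):             # Eq. (8): Gaussian average of the mobility
    f = lambda r: np.exp(v / (r - 1.0) - (r - rho_bar)**2 / (2.0 * s2))
    return quad(f, 0.0, 1.0, points=[rho_bar])[0] / np.sqrt(2.0 * np.pi * s2)

R2, dt = 0.0, 10.0
for n in range(1, 5001):       # dR^2/dt = D(t), Eq. (4)
    R2 += D(S2(np.sqrt(R2))) * dt
    if n % 1000 == 0:
        print(n * dt, R2)      # rapid early growth, then a dramatic slowing
```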
Regime 1 - Pinning: For short times, such that $`R(t)\ll \mathrm{\Lambda }^{-1}`$, as shown by Eq. (10), $`\stackrel{~}{C}(\vec{k},t^{\prime },t)`$ is essentially constant since $`|k|<\mathrm{\Lambda }`$ and the system looks pinned. In this regime we also have $`\mathrm{\Phi }_d[\sqrt{2}\mathrm{\Lambda }R(t)]\propto R(t)^d`$, consequently $`S(t)\simeq S(0)`$ and $`D(t)\simeq D(0)`$. Therefore $`R^2(t)\simeq D(0)t`$ (see inset Fig. $`1^a`$). The duration of this regime is $`\tau _p\sim \mathrm{\Lambda }^{-2}D^{-1}(0)`$. Physically $`\tau _p`$ corresponds to the time the particle spends inside its cell (cage).
Regime 2 - Slow evolution: Pinning lasts up to $`\tau _p`$. For $`t>\tau _p`$, we have $`R(t)>\mathrm{\Lambda }^{-1}`$, thus particles diffuse out of the cages and the evolution starts. For sufficiently long times, computing $`R(t)`$ through Eqs. (4, 8, 9) one finds $`R^2(t)\simeq b(\mathrm{ln}t)^\delta `$ (see inset Fig. $`1^a`$), where $`b=const.`$ and $`\delta =6/d`$. Eq. (7) implies that, for fixed $`t^{\prime }`$, the correlator decays as an enhanced power law (see inset Fig. 2)
$$\stackrel{~}{C}(\vec{k},t^{\prime },t)=\mathrm{exp}\{R^2(t^{\prime })k^2\}\mathrm{exp}\left\{-b[\mathrm{ln}(t)]^\delta k^2\right\}$$
(11)
and $`S(t)\sim (\mathrm{ln}t)^{-3/2}`$ (see inset Fig. $`1^b`$). A logarithmic relaxation of the density fluctuations is also observed in Molecular Dynamics simulations of out of equilibrium liquid glass formers . When also $`t^{\prime }>\tau _p`$, one has $`\stackrel{~}{C}(\vec{k},t^{\prime },t)=\mathrm{exp}\left\{-b([\mathrm{ln}(t)]^\delta -[\mathrm{ln}(t^{\prime })]^\delta )k^2\right\}`$. The characteristic duration time, $`\tau _e`$, of the slow evolution regime can be estimated , at low $`\stackrel{~}{T}`$, to be $`\tau _e\sim M^{-1}(\overline{\rho })`$.
Regime 3 - Stationary state: For long times $`t>\tau _e`$ a simple diffusive behavior is obtained because $`D(t)`$ always attains asymptotically a constant value $`D(\mathrm{\infty })`$. This implies $`R^2(t)\simeq D(\mathrm{\infty })t`$, as can be seen in Fig. $`1^a`$, so that the normalized correlator exhibits the usual exponential decay as a function of $`t`$: $`\stackrel{~}{C}(\vec{k},t^{\prime },t)=e^{-D(\mathrm{\infty })tk^2}e^{R^2(t^{\prime })k^2}`$ (see inset Fig. 2). When $`t^{\prime }>\tau _e`$, we have $`\stackrel{~}{C}(\vec{k},t^{\prime },t)=e^{-D(\mathrm{\infty })[t-t^{\prime }]k^2}`$ and time translational invariance is obeyed. In the small temperature limit the density fluctuations $`S(\mathrm{\infty })`$ can be approximately neglected and $`D(\mathrm{\infty })\simeq M(\overline{\rho })`$. This leads to $`\tau _0=M^{-1}(\overline{\rho })`$ which gives Eq. (1), as previously stated.
So far we have studied the out of equilibrium evolution of a system governed by Eq. (2) in the presence of a vanishing mobility for which Eq. (1) holds in equilibrium. We also want to consider the case in which the mobility vanishes as a power law, as is found for instance in the Mode-Coupling Theory of supercooled liquids : $`M(\rho )=[1-\rho (\vec{r},t)]^\gamma `$. This relation implies an algebraic divergence of $`\tau _0`$: $`\tau _0=M(\overline{\rho })^{-1}`$, as experimentally found in supercooled liquids in the temperature or density regions far from the ideal glass transition. The different form of the mobility, as stated before, does not change the global picture described so far with three different regimes. In the first and third regimes $`R^2(t)`$ is linear in $`t`$ as before, while in the second one we find, for $`\gamma >1`$, an anomalous diffusion $`R(t)\simeq wt^\beta `$, with $`w`$ = const. and $`\beta =4/(\gamma d+4)`$. In this regime we also find a stretched exponential decay of the normalized correlator $`\stackrel{~}{C}(k,t^{\prime },t)\simeq \mathrm{exp}\left\{R^2(t^{\prime })k^2\right\}\mathrm{exp}\left\{-w^2t^{2\beta }k^2\right\}`$.
In this paper we have introduced a phenomenological equation for off equilibrium glassy dynamics. The only ingredients of the model are the diffusive behavior and the request of a Vogel-Fulcher (or algebraic) divergence of $`\tau _0`$, obtained by assuming a mobility as in Eq. (3). With these sole ingredients the out of equilibrium evolution of the model is observed to be highly non trivial, even in the mean field approximation which we have studied in detail. Consistently with mean field theory, also the numerical integration of the full model shows the existence of a gradual crossover from a normal liquid to a glassy behavior by raising the density. Some properties that are observed in systems close to the glassy transition, such as the existence of strong spatial heterogeneities, anomalous diffusion, slow decay and aging of density autocorrelations, are exhibited by the model. These predictions, as well as the non trivial fluctuation dissipation ratio $`X(t^{\prime })`$, are all amenable to experimental check. In mean field this whole richness is observed in the preasymptotic off equilibrium dynamics (which, however, may be exponentially long), whereas the asymptotic equilibrium evolution is trivial. This is an important difference with real glassy systems, where a non trivial decay of $`\stackrel{~}{C}(\vec{k},t^{\prime },t)`$ is also observed in equilibrium. However in mean field a non exponential decay of $`\stackrel{~}{C}(\vec{k},t^{\prime },t)`$ can be ruled out on general grounds due to Doob’s theorem . Further studies are in progress in order to characterize the complicated fluctuations occurring in the equilibrium state.
Acknowledgments We are grateful to M.Zannetti for interesting discussions and to S.Roux for valuable comments on the manuscript. F. C. thanks M. Cirillo and R. Del Sole for their hospitality in Rome university. This work was supported with the TMR network contract ERBFMRXCT980183 and by MURST(PRIN 97).
# CERN/TH-99-197 CPT-99/PE.3856 FTUV/99-49 IFIC/99-51 Finite-size scaling of the quark condensate in quenched lattice QCD
## Abstract
We confront the finite volume and small quark mass behaviour of the scalar condensate, determined numerically in quenched lattice QCD using Neuberger fermions, with predictions of quenched chiral perturbation theory. We find that quenched chiral perturbation theory describes the numerical data well, allowing us to extract the infinite volume, chiral limit scalar condensate, up to a multiplicative renormalization constant.
## Introduction
Chiral symmetry breaking plays a central role in our comprehension of low energy QCD and understanding it from first principle calculations is of great importance. One of the cleanest ways of determining a condensate associated with the breaking of a global symmetry is through a finite-size scaling analysis. This technique has proved very successful in the study of scalar $`\mathrm{O}(\mathrm{N})`$ models . For chiral symmetry breaking in QCD, this would correspond to placing the system in a box and studying the scaling of the scalar condensate as a function of the volume $`V`$ and of the quark mass $`m`$ as the limit of restoration of chiral symmetry is approached ($`m0`$, $`V`$ finite).
The very small quark mass limit of QCD is expected to be well described by the lowest orders of chiral perturbation theory ($`\chi `$PT), which predict how the restoration of chiral symmetry takes place in a finite volume, as a function of the quark mass . The only free parameter entering the leading order contribution in the chiral expansion is the infinite volume quark condensate. Thus, a comparison of the mass and volume dependence of the finite volume quark condensate with the predictions of $`\chi `$PT provides a very powerful test of the hypothesis of spontaneous chiral symmetry breaking and permits an extraction of the infinite volume scalar condensate $`\mathrm{\Sigma }`$. Such a study requires, however, a good control over the chiral properties of the theory, which is difficult to achieve with traditional formulations of fermions on the lattice.
The situation is different, however, when Dirac operators that satisfy the Ginsparg–Wilson (GW) relation are considered. Actions constructed from such operators have been shown to have an exact lattice chiral symmetry . This symmetry ensures that the relations implied by chiral symmetry in the continuum, hold also on the lattice at finite lattice spacing $`a`$ . For a review of the GW relation and its implications, we refer to .
A particular realization of an operator satisfying the GW relation, which we will be using here, has been proposed by Neuberger :
$$D_\mathrm{N}\propto \left[m+(1+s)\left(1-\gamma _5Q(Q^2)^{-1/2}\right)\right],$$
(1)
where $`Q\equiv c_0\gamma _5(1+s-D_\mathrm{W})`$, $`D_\mathrm{W}`$ is the Wilson Dirac operator, and the factor $`c_0`$ is a convenient normalization to keep the spectrum of $`Q^2`$ bounded by 1. The parameter $`s`$ satisfies $`|s|<1`$ and $`m`$ is the bare quark mass.
$`D_\mathrm{N}`$ satisfies the GW relation at zero quark mass. In contrast to the standard Wilson formulation, the breaking of the chiral symmetry is soft, i.e. only due to the quark mass term. This opens the possibility to confront finite volume simulations with finite-size scaling predictions in the regime of restoration of the chiral symmetry.
The complexity of the operator $`D_\mathrm{N}`$ renders its numerical treatment very demanding. We therefore restrict to the quenched approximation. The predictions of $`\chi `$PT must then be modified to take into account the effect of quenching. The finite size scaling of the quark condensate has recently been worked out using the framework of quenched chiral perturbation theory (q$`\chi `$PT) . In particular, analytical expressions for this scaling have been obtained in sectors of fixed topology. Operators satisfying the GW relation also satisfy an index theorem . Thus, by computing the eigenvalues of $`D_\mathrm{N}`$ at zero quark mass and identifying the zero modes, a clean separation of different topological sectors can be achieved, which is not possible with other formulations of lattice fermions. As we will see below, using the q$`\chi `$PT results in fixed topological sectors to interpret our numerical data proves very useful. A preliminary account of this work was presented at Lattice 99.
Light quarks on a torus
To study the volume dependence of the scalar condensate, we work on a four-dimensional torus of volume $`L^4`$. Under the assumption that chiral symmetry is spontaneously broken, a description of QCD in terms of a chiral Lagrangian should be a good approximation at momenta $`p\ll 4\pi F_\pi `$. To lowest order in $`p/4\pi F_\pi `$ and in the quark mass, this Lagrangian is given by
$`\mathcal{L}={\displaystyle \frac{F_\pi ^2}{4}}\mathrm{Tr}[\partial _\mu U^{\dagger }(x)\partial _\mu U(x)]-\mathrm{\Sigma }\,\mathrm{Re}\,\mathrm{Tr}[Me^{i\theta /N_f}U]`$ (2)
where $`U(x)=\mathrm{exp}[i2\mathrm{\Pi }(x)/F_\pi ]\in SU(N_f)`$, $`\mathrm{\Pi }(x)`$ being the pion fields; $`M`$ is the quark mass matrix, which we take to be proportional to the identity matrix (i.e. $`M=mI`$), and $`\mathrm{\Sigma }`$ is the infinite volume and zero quark mass scalar condensate. In eq. (2) we have included the expected $`\theta `$ angle dependence.
Let us now consider the regime
$`M_\pi \ll 1/L\ll F_\pi ,`$ (3)
where $`M_\pi ^2=2m\mathrm{\Sigma }/F_\pi ^2`$ to leading order in $`\chi `$PT. In this regime, the partition function is dominated by the zero mode of the $`U(x)`$ field , since the action of the non-zero modes has a kinetic contribution that goes like $`F_\pi ^2L^2\gg 1`$. The partition function then reduces, to leading order, to an integral over the $`SU(N_f)`$ group manifold:
$`Z={\displaystyle \int _{SU(N_f)}}dU_0\,e^{V\mathrm{\Sigma }\mathrm{Re}\mathrm{Tr}[Me^{i\theta /N_f}U_0]},`$ (4)
where $`U_0`$ is the global mode. We can also define the partition function restricted to fixed topology by Fourier transforming in $`\theta `$ :
$`Z_\nu `$ $`=`$ $`{\displaystyle \int _0^{2\pi }}{\displaystyle \frac{d\theta }{2\pi }}{\displaystyle \int _{SU(N_f)}}dU_0\,e^{-i\theta \nu }\mathrm{exp}[V\mathrm{\Sigma }\,\mathrm{Re}\,\mathrm{Tr}[Me^{i\theta /N_f}U_0]]`$ (5)
$`=`$ $`{\displaystyle \int _{U(N_f)}}dU_0\,det(U_0)^\nu \mathrm{exp}[V\mathrm{\Sigma }\,\mathrm{Re}\,\mathrm{Tr}[MU_0]].`$
These integrals and their derivatives with respect to the quark mass have been known for a long time. For details see .
In our case, however, we are interested in the quenched approximation. Recently a similar reasoning has been applied to quenched QCD. The main difference in the quenched case is that the chiral symmetry group is no longer $`SU(N_f)_L\times SU(N_f)_R\times U(1)`$, but a graded Lie group $`U(1|1)_L\times U(1|1)_R/U_A(1)`$. According to , the partition function for fixed topology is then given by
$`Z_\nu ={\displaystyle \int _{U(1|1)}}dU_0\,sdet(U_0)^\nu \mathrm{exp}[V\mathrm{\Sigma }\,\mathrm{Re}\,s\mathrm{Tr}[MU_0]].`$ (6)
This integral has been computed analytically in terms of Bessel functions . By differentiating its logarithm with respect to the quark mass, $`m`$, the quark condensate for fixed topology is found to be
$`\mathrm{\Sigma }_\nu =\mathrm{\Sigma }z[I_\nu (z)K_\nu (z)+I_{\nu +1}(z)K_{\nu -1}(z)]+\mathrm{\Sigma }{\displaystyle \frac{\nu }{z}},`$ (7)
where $`z\equiv m\mathrm{\Sigma }V`$ and $`I_\nu (z),K_\nu (z)`$ are the modified Bessel functions.
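The following short Python sketch (our addition, not part of the original analysis) evaluates this ratio with SciPy's modified Bessel functions; it is useful for checking the limits quoted below.

```python
# Sigma_nu / Sigma of eq. (7) as a function of z = m*Sigma*V.
import numpy as np
from scipy.special import iv, kv

def condensate_ratio(z, nu):
    return z * (iv(nu, z) * kv(nu, z) + iv(nu + 1, z) * kv(nu - 1, z)) + nu / z

z = np.logspace(-2, 1, 4)
print(condensate_ratio(z, 0))   # vanishes as z -> 0: chiral symmetry restored
print(condensate_ratio(z, 1))   # dominated by the zero-mode term nu/z at small z
```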
This formula summarizes the scaling of the quark condensate in a periodic box with the volume and quark mass in the small $`m\mathrm{\Sigma }V`$ limit, as a function of only one non-perturbative parameter: the infinite volume condensate $`\mathrm{\Sigma }`$. For fixed volume, the limit as $`m0`$ is given by
$`\mathrm{\Sigma }_{\nu =0}`$ $`=`$ $`m\mathrm{\Sigma }^2V\left(1/2-\gamma +\mathrm{log}2-\mathrm{log}m\mathrm{\Sigma }V+𝒪\left(m\mathrm{\Sigma }V\mathrm{log}m\mathrm{\Sigma }V\right)^2\right)`$
$`\mathrm{\Sigma }_{\nu =\pm 1}`$ $`=`$ $`{\displaystyle \frac{1}{mV}}+{\displaystyle \frac{1}{2}}m\mathrm{\Sigma }^2V\left(1+𝒪\left(m\mathrm{\Sigma }V\mathrm{log}m\mathrm{\Sigma }V\right)^2\right).`$ (8)
where $`\gamma `$ is the Euler constant. These results have two interesting features that we wish to emphasize. First, there is a divergence proportional to $`1/m`$ in sectors with non-trivial topology. From the point of view of the underlying theory, this is not surprising since it corresponds to the contribution of the fermionic zero modes. Note however that these terms do not contain information about the infinite volume condensate and vanish in the infinite volume limit, as expected. The second interesting feature is the appearance of a logarithmic enhancement in $`\mathrm{\Sigma }_{\nu =0}`$, which is also peculiar to the quenched approximation. This term contains information about the infinite volume condensate.
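The $`\nu =0`$ expansion, including its logarithmic enhancement, is easy to verify numerically; a minimal sketch (with an illustrative value of $`z`$ of our choosing):

```python
# Compare the exact nu = 0 formula of eq. (7) with the expansion of eq. (8).
import numpy as np
from scipy.special import iv, kv

z = 1e-2                                     # z = m*Sigma*V << 1
exact = z * (iv(0, z) * kv(0, z) + iv(1, z) * kv(1, z))
approx = z * (0.5 - np.euler_gamma + np.log(2.0) - np.log(z))
print(exact, approx)                         # agree up to O(z (z log z)^2)
```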
In principle, by fitting the dependence of the finite volume condensate on quark mass and volume to Monte Carlo data, we can extract the infinite volume condensate. However, the naive bare quark condensate that is measured on the lattice is UV-divergent. A simple dimensional analysis of the possible divergences shows that the bare scalar condensate has a leading cubic divergence. One important advantage of Neuberger’s operator is that the coefficient of this leading divergence is known analytically. It is $`6/(1+s)`$ for $`SU(3)`$. The cubic divergence can then be subtracted exactly. However, after this trivial subtraction, the condensate is still divergent and has the form:
$`\mathrm{\Sigma }_\nu ^{sub}(a)\equiv \left\langle \overline{\mathrm{\Psi }}\mathrm{\Psi }\right\rangle _\nu -\left({\displaystyle \frac{6}{1+s}}\right){\displaystyle \frac{1}{a^3}}=C_2{\displaystyle \frac{m(a)}{a^2}}+C_1{\displaystyle \frac{m(a)^2}{a}}+\mathrm{\Sigma }_\nu ,`$ (9)
where $`m(a)`$ is the bare lattice mass. The constants $`C_i`$ are not known a priori and have to be determined, preferably non-perturbatively. The linear divergence proportional to $`m(a)^2`$ is negligibly small for the values of the mass and the cutoff we consider in this work. However, the quadratic divergence is not and turns out to be very important numerically. The condensate extracted through a fit of the lattice data to eqs. (9) and (7), of course, still requires a multiplicative renormalization to eliminate a residual logarithmic UV divergence in $`\mathrm{\Sigma }_\nu `$.
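A sketch of such a two-parameter fit in lattice units, assuming hypothetical data arrays and neglecting $`C_1`$ and the zero-mode term (cf. the definition of $`\mathrm{\Sigma }_\nu ^{sub}`$ in eq. (11) below), might look as follows:

```python
# Two-parameter fit of mock data to eqs. (9) and (7): free parameters are
# a^3 Sigma and the quadratic-divergence coefficient C2. The (am, V) values
# and the "data" are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import iv, kv

def sigma_sub(x, a3_Sigma, C2):
    am, V = x
    z = am * a3_Sigma * V
    nu = 1                                    # |topological charge| = 1 sector
    sigma_nu = a3_Sigma * z * (iv(nu, z) * kv(nu, z) + iv(nu + 1, z) * kv(nu - 1, z))
    return C2 * am + sigma_nu                 # zero-mode term nu/z dropped

am = np.array([0.02, 0.04, 0.06, 0.02, 0.04])
V = np.array([8**4, 8**4, 8**4, 12**4, 12**4], dtype=float)
data = sigma_sub((am, V), 0.0032, 0.914)      # mock "measurements"
popt, _ = curve_fit(sigma_sub, (am, V), data, p0=[0.003, 0.9])
print(popt)                                   # recovers (a^3 Sigma, C2)
```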
After subtracting the unphysical contribution of fermionic zero modes to $`\mathrm{\Sigma }_\nu `$, the finite volume condensate $`\mathrm{\Sigma }_\nu ^{sub}`$ vanishes, as expected, in the limit of zero quark mass . The power divergences can, in principle, be separated from the physical contribution to the condensate by a study of the volume dependence of $`\mathrm{\Sigma }_\nu ^{sub}`$, while keeping the quark mass small enough to stay in the region of validity of $`\chi `$PT.
Even with the cubic divergence already subtracted, separating the physical condensate from the remaining power divergences may not be easy in practice, because the statistical errors in these divergent terms can hide the small physical contribution. Thus, $`\mathrm{\Sigma }_\nu ^{sub}`$ must be computed with very good accuracy. Clearly, the logarithmic enhancement of $`\mathrm{\Sigma }_{\nu =0}`$ in eq. (8) could be very helpful in this respect; however, as we will see, extracting the condensate from the logarithmic term at zero topology requires much larger statistics than available to us at this time. We will concentrate instead on the study of the condensate in the topological charge one (or minus one) sector.
Numerical results
For our numerical simulations we work on hypercubic lattices of size $`L^4`$ with periodic boundary conditions for both the gauge and the fermion fields. We work in the quenched approximation and use standard methods to obtain decorrelated gauge field configurations.
In selecting the value of $`\beta =6/g_0^2`$, some care has to be taken. On the one hand, the quadratic divergence proportional to $`1/a^2`$ should not hide the physical effect. On the other hand, if $`\beta `$ is chosen too small, there is the risk that Neuberger’s operator falls into a different universality class . Indeed, by computing the low-lying eigenvalues of Neuberger’s operator at $`\beta =5.7`$ and $`s=0`$, we found only eigenvalues of $`\mathrm{O}(1)`$ and hence no light physical modes. A scan of the lowest eigenvalue of $`Q^2`$ as a function of $`s`$ showed that $`\lambda _{\mathrm{min}}(Q^2)`$ decreased with increasing $`s`$, contrary to what is expected (and found) at larger values of $`\beta `$.
The situation at $`\beta =5.85`$ appeared to be different, however. The values of $`\lambda _{\mathrm{min}}(Q^2)`$ reach a maximum around $`s=0.6`$, where the localization properties should also be optimal . Correspondingly, the eigenvalues of Neuberger’s operator became very small, so that there is little doubt that light, physical modes are present. Since the lattice spacing at $`\beta =5.85`$ is $`a^{-1}\approx 1.5\,\mathrm{GeV}`$, we estimated that the quadratic divergence term would not hide the physical signal, at least for reasonable values of the physical condensate.
A technical challenge is the numerical treatment of the square root appearing in Neuberger’s operator. We have chosen a Chebyshev approximation for this task, which allows us to reach a well controlled accuracy. In order to avoid any systematic effects in the values of physical observables, we demand that
$$\left\|X-Q^2P_{n,ϵ}(Q^2)^2X\right\|^2/(2\left\|X\right\|)^2<10^{-16}.$$
(10)
In eq. (10) $`X`$ denotes a random vector and $`P_{n,ϵ}`$ denotes a standard Chebyshev approximation of the function $`1/\sqrt{x}`$ in the range $`ϵ\le x\le 1`$. $`P_{n,ϵ}`$ is a matrix-valued polynomial of degree $`n`$, which is constructed through numerically stable recursion relations . We require a comparable accuracy for all inversions. We note in passing that with the requirement of eq. (10), the GW relation itself is also satisfied to a similar accuracy at zero mass.
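A scalar Python stand-in for $`P_{n,ϵ}`$ may help fix ideas (in production codes the polynomial is applied to the matrix $`Q^2`$ through a Clenshaw-type recursion; the degree and $`ϵ`$ below are illustrative choices of ours):

```python
# Chebyshev approximation of 1/sqrt(x) on [eps, 1] and the analogue of the
# accuracy measure of eq. (10), |x * P(x)^2 - 1|, evaluated on the interval.
import numpy as np
from numpy.polynomial import chebyshev as C

eps, n = 0.01, 80
t = C.chebpts1(n + 1)                         # Chebyshev nodes on [-1, 1]
x = 0.5 * (1 + eps) + 0.5 * (1 - eps) * t     # mapped to [eps, 1]
coef = C.chebfit(t, 1.0 / np.sqrt(x), n)

xs = np.linspace(eps, 1.0, 2000)
ts = (2 * xs - (1 + eps)) / (1 - eps)
print(np.max(np.abs(xs * C.chebval(ts, coef)**2 - 1.0)))
```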
In order to decrease the degree of the polynomial employed, we have computed the 11 lowest eigenvalues of $`Q^2`$ and their corresponding eigenvectors, and have set $`ϵ`$ equal to the largest of these eigenvalues. The contributions of these lowest-lying eigenvectors are then treated exactly and projected out of the operator $`Q^2`$. Through this procedure, near-zero modes of $`Q^2`$ are automatically taken into account. All eigenvalue computations performed in our work are based on minimizing the Ritz functional .
As pointed out in , it is advantageous for the computation of the eigenvalues of Neuberger’s operator, and for its inversion as well, to stay in a given chiral subspace. This is possible because $`D_N^{\dagger }D_N`$ commutes with $`\gamma _5`$.
We computed the scalar condensate at several values of the quark mass using a multiple mass solver on lattices of size $`8^4`$, $`10^4`$ and $`12^4`$. By computing the two lowest eigenvalues of Neuberger’s operator, we checked to which topological sector each gauge field configuration belonged. We then obtained $`\mathrm{\Sigma }_\nu ^{sub}`$ by computing
$$\mathrm{\Sigma }_\nu ^{sub}=\frac{1}{V}\left\langle \mathrm{Tr}^{\prime }\left\{\frac{1}{D_N}+\frac{1}{D_N^{\dagger }}-\frac{a}{1+s}\right\}\right\rangle _\nu ,$$
(11)
where the trace was performed in the chiral sector opposite to that with the zero modes and the gauge average was done in a sector of fixed topology $`\nu `$. With this definition, we take into account the contribution of all the non-zero eigenvalues of $`D_N`$ to the condensate <sup>4</sup><sup>4</sup>4With this definition the real eigenvalues at the cut-off level, $`m+2/a`$, are doubly counted. Although it is a completely negligible effect, we took it into account.. In this way, the term $`1/m`$ in eq. (8) is absent. Three Gaussian sources and standard inverters were used to compute the trace in eq. (11). Topological charge zero configurations are very rare at larger volumes. For this reason we did not compute the condensate in this sector on the $`10^4`$ and $`12^4`$ lattices since the statistics we have gathered are too small.
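For readers unfamiliar with noisy-source trace estimation, here is a toy version of the Gaussian-source estimator applied to a random matrix (the matrix and its size are illustrative stand-ins; the actual computation applies this idea to the operator in eq. (11)):

```python
# Stochastic trace estimate Tr A ~ (1/N) sum_i eta_i^T A eta_i with Gaussian
# sources eta_i; the estimate is unbiased but noisy for few sources.
import numpy as np

rng = np.random.default_rng(1)
n, nsrc = 64, 3                               # three sources, as in the text
A = rng.normal(size=(n, n)); A = A + A.T
est = sum(eta @ (A @ eta) for eta in rng.normal(size=(nsrc, n))) / nsrc
print(est, np.trace(A))
```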
We show in fig. 1 our results for $`a^3\mathrm{\Sigma }_{\nu =\pm 1}^{sub}/am`$ on our lattice volumes as a function of bare quark mass. We have 15, 10 and 7 gauge configurations on our $`8^4`$, $`10^4`$ and $`12^4`$ lattices, respectively. The solid lines are a fit of the data for all volumes and masses to eqs. (9) and (7). This fit has only two parameters, namely the infinite volume, zero quark mass, scalar condensate $`\mathrm{\Sigma }`$ and the coefficient of the quadratic divergence. We find $`a^3\mathrm{\Sigma }=0.0032(4)`$ and $`C_2=0.914(8)`$.
Clearly, the formulae derived in $`\chi `$PT give a very good description of the numerical data. The infinite volume condensate that we extract from this fit in physical units is $`\mathrm{\Sigma }(\mu \approx 1.5\text{ GeV})=(221_{-9}^{+8}\text{ MeV})^3`$, up to a multiplicative renormalization constant, which has not been computed yet for Neuberger’s operator. We stress that the quoted error on the condensate is purely statistical. It does not include, for instance, the expected systematic errors from finite lattice spacing effects, nor the possible contributions from higher orders in chiral perturbation theory of $`O((F_\pi L)^{-2})`$. An additional cautionary remark is that the statistics for the largest volume, $`12^4`$, is rather small, as indicated by the large statistical error. We plan to increase the statistics in the future and include in the analysis also higher topologies, which are more frequent at larger volumes.
The condensate we obtain is quite close to that reported in ref. using Wilson fermions and a different method. However, a meaningful comparison can only be made when our systematic errors are quantified and the multiplicative renormalization included.
Recently the authors of also studied the quark condensate as a function of quark mass and volume, using Neuberger’s operator. However, a comparison with the predictions of q$`\chi `$PT in fixed topological sectors was not attempted and no definite conclusion on the existence or value of the infinite volume condensate was reached.
According to Random Matrix Theory (RMT), the value of $`\mathrm{\Sigma }`$ may also be extracted from the distribution of the lowest non-zero eigenvalue $`\lambda _{\mathrm{min}}(D_\mathrm{N})`$, defined as the square root of the lowest non-zero eigenvalue of $`D_N^{\dagger }D_N`$ at zero quark mass. We have only gathered reasonable statistics for the smaller lattice of size $`8^4`$. In the topological charge $`\nu =0,1`$ sectors, the corresponding distributions are given by :
$$P_{\nu =0}(z)=\frac{z}{2}e^{-\frac{1}{4}z^2},\qquad P_{\nu =\pm 1}(z)=\frac{z}{2}I_2(z)e^{-\frac{1}{4}z^2},$$
(12)
where $`z\equiv \lambda _{\mathrm{min}}(D_\mathrm{N})\mathrm{\Sigma }V`$. Recently, the authors of ref. found very good agreement with these distributions on very small lattices. Inserting our value of $`a^3\mathrm{\Sigma }=0.0032(4)`$ in the distribution for zero topology of eq. (12), we get for the expectation value of this eigenvalue $`\lambda _{\mathrm{min}}(D_\mathrm{N})=0.135(15)`$ (where the error comes from the statistical error in the condensate), while from our data on the $`8^4`$ lattice (with 41 topology-zero configurations) we obtain $`\lambda _{\mathrm{min}}(D_\mathrm{N})=0.170(12)`$. In topology-one sectors, the expected value is $`\lambda _{\mathrm{min}}(D_\mathrm{N})=0.237(17)`$. From the data, again obtained on the $`8^4`$ lattice, with an accumulated statistics of 29 configurations, we obtain $`\lambda _{\mathrm{min}}(D_\mathrm{N})=0.218(13)`$. In addition, we generated a sample of eigenvalues according to the distribution of eq. (12) for $`\nu =\pm 1`$ with $`a^3\mathrm{\Sigma }=0.0032(4)`$ and a statistics identical to that of the corresponding simulation. The resulting mean value of $`\lambda _{\mathrm{min}}(D_\mathrm{N})`$ and its error are fully compatible with those given by our simulation, indicating that our data do not suffer from autocorrelation effects. This provides a nice cross check on the value of $`\mathrm{\Sigma }`$ obtained from our finite-size scaling analysis.
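The quoted expectation values follow directly from eq. (12); a short sketch of the computation (the inputs are the fit value of $`a^3\mathrm{\Sigma }`$ and the $`8^4`$ volume):

```python
# Mean lowest eigenvalue from the RMT distributions of eq. (12), with
# z = lambda_min * Sigma * V. For nu = 0 the mean of z is sqrt(pi) exactly;
# for |nu| = 1 we integrate numerically.
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

SigmaV = 0.0032 * 8**4
mean_z0 = np.sqrt(np.pi)
mean_z1 = quad(lambda z: z * (z / 2) * iv(2, z) * np.exp(-z**2 / 4), 0, 50)[0]
print(mean_z0 / SigmaV)   # ~0.135, the nu = 0 value quoted above
print(mean_z1 / SigmaV)   # ~0.24, close to the |nu| = 1 value quoted above
```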
We finally briefly comment on the difficulty in measuring the condensate in the topology zero sector. This is due to the logarithmic enhancement, which can be shown to originate from the contribution of the single lowest eigenvalue, $`\lambda _{\mathrm{min}}(D_N)`$, to the condensate, if the distribution of this eigenvalue is that given by RMT in eq. (12). As is clear from eq. (12), the distribution of the lowest eigenvalue does not have a gap, and it is easy to check that the contribution of this single eigenvalue to the condensate, $`\mathrm{\Sigma }_{\mathrm{min}}`$, has a logarithmic IR divergence in the sector of zero topology:
$`\mathrm{\Sigma }_{\mathrm{min}}={\displaystyle \frac{1}{V}}{\displaystyle \int dz\,P_{\nu =0}(z)\frac{2m}{m^2+(z/\mathrm{\Sigma }V)^2}}=-m\mathrm{\Sigma }^2V\mathrm{log}m\mathrm{\Sigma }V+O(m\mathrm{\Sigma }^2V),`$ (13)
which reproduces exactly the logarithmic dependence in eq. (8). We have generated a sample of eigenvalues according to eq. (12) with $`a^3\mathrm{\Sigma }=0.0032`$. In doing so we find that reconstructing the logarithmic behaviour of the $`\nu =0`$ condensate requires much larger statistics than are available to us. This is reflected in the results from our actual simulation, where we find that the $`\nu =0`$ condensate on the $`8^4`$ lattice displays very large statistical errors. For this reason, we have not included these data in our determination of the scalar condensate.
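The leading-logarithm behaviour of eq. (13) is easy to confirm numerically; a minimal sketch with illustrative units of our choosing:

```python
# Numerical check of eq. (13): the single-eigenvalue contribution in the
# nu = 0 sector diverges logarithmically as m*Sigma*V -> 0.
import numpy as np
from scipy.integrate import quad

Sigma, V, m = 1.0, 1.0e4, 1.0e-6              # so that m*Sigma*V = 1e-2
f = lambda z: (z / 2) * np.exp(-z**2 / 4) * 2 * m / (m**2 + (z / (Sigma * V))**2)
exact = (quad(f, 0, 0.1)[0] + quad(f, 0.1, 50)[0]) / V   # split for accuracy
approx = -m * Sigma**2 * V * np.log(m * Sigma * V)
print(exact, approx)                          # agree at the leading logarithm
```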
Conclusion
Chiral perturbation theory assumes that chiral symmetry is spontaneously broken in QCD. Under this assumption it provides, for small enough values of the quark mass and large enough volumes, the mass and volume behaviour of the scalar condensate. This behaviour is determined to lowest order by only one free parameter, namely $`\mathrm{\Sigma }`$, the scalar condensate in infinite volume and for zero quark mass.
Recent developments in lattice QCD have revealed that, contrary to a long-standing belief, chiral symmetry can be realized on the lattice. This theoretical advance is connected to the Ginsparg–Wilson relation. Neuberger proposed a particular operator that satisfies this relation and we have used this operator in our numerical work.
Although Neuberger’s operator is very difficult to treat numerically, it can be used in practice. In this work we computed the scalar condensate on lattices of various sizes and for a number of quark masses, in the regime of chiral symmetry restoration. The results of this numerical computation are shown in fig. 1, where we confront our numerical data with the finite volume and mass behaviour predicted by quenched chiral perturbation theory. Obviously, chiral perturbation theory describes the numerical data well, providing evidence for the spontaneous breaking of chiral symmetry.
Although our results are very encouraging, a number of cautionary remarks have to be made. The lattices we used are rather small and it would be desirable to probe the system further on larger lattices. In addition, it would be important to repeat the calculation at a larger value of $`\beta `$ to estimate the lattice spacing effects. Here we were only able to determine a value of the scalar condensate up to a multiplicative renormalization constant, which would clearly be needed for quoting a physical value. Finally, all our results are obtained in the quenched approximation but, given the complexity of Neuberger’s operator, it would be very difficult to go beyond this approximation.
During the completion of this work, a paper using Neuberger’s operator to compute the scalar condensate appeared. The data presented in that paper are, however, taken in the strong coupling regime on only one lattice (with a small $`L/a=4`$) and hence cannot be compared directly to our work.
Acknowledgments We thank Martin Lüscher, Massimo Testa and Peter Weisz for many useful and stimulating discussions, and T. Wettig for useful correspondence about RMT. We acknowledge the computer centres at NIC (Jülich) and CIEMAT (Madrid) for providing computer time and technical support. L.L. acknowledges support from the EEC through the TMR network EEC-CT98-00169.
# Improved Treatment of Cosmic Microwave Background Fluctuations Induced by a Late-decaying Massive Neutrino
## I Introduction
Anisotropies in the cosmic microwave background (CMB) contain an enormous amount of information about the universe. Data presently available have been used to constrain from two up to eight cosmological parameters. With the promise of ever more precise measurements of these anisotropies, it has become possible to envision CMB fluctuations as a tool to go beyond this minimal set and constrain other areas of physics. Recent proposed constraints include limits on Brans-Dicke theories , constraints on time-variation in the fine-structure constant , tests of finite-temperature QED , and limits on various models for both stable and unstable massive neutrinos. All of these additional constraints, with one exception, are based on the high-precision fluctuation spectra expected from the MAP and PLANCK satellites. The sole exception is reference , in which Lopez et al. pointed out that the radiation from a neutrino decaying into relativistic decay products could produce such a large integrated Sachs-Wolfe (ISW) effect that a fairly large mass-lifetime range can be ruled out from current observations. Lopez et al. argued that a neutrino with a mass greater than 10 eV and a lifetime between $`10^{13}`$ and $`10^{17}`$ sec could be ruled out. (Although this calculation assumes nothing about the nature of the decay products other than that they are relativistic, this limit is most useful when applied to decay modes into “sterile” particles such as a light neutrino and a Majoron, since other, more restrictive limits apply to photon-producing decays). Hannestad showed that the MAP and PLANCK experiments should produce an even larger excluded region in the neutrino mass-lifetime plane.
In this paper, we improve on a major approximation of references . In these papers, the relativistic decay products were simply added to the background neutrino energy density in the program CMBFAST . However, when the massive neutrinos decay, the spatial distribution of the decay products is determined by the distribution of the non-relativistic decaying particles; it is not identical to the distribution of the background massless neutrinos. In fact, the approach of references violates energy-momentum conservation. Although in this approach energy and momentum are explicitly conserved at zeroth order (the mean), the first-order perturbations violate energy-momentum conservation. This may seem like a small effect, but it actually has significant consequences for the CMB fluctuation spectrum.
In the next section, we discuss the formalism for the Sachs-Wolfe effect in the presence of neutrinos decaying after recombination. In section III, we present our results, showing the effects of correctly incorporating the spatial distribution of the decay products, and provide a simple physical explanation of these effects. In section IV, we show how our revised calculation affects the excluded region in the neutrino mass-lifetime plane, and in section V we briefly summarize our conclusions. A comparison of our new results with current data leads to the excluded region $`m_h>100`$ eV, $`\tau >10^{12}`$ sec, although smaller masses can also be excluded for a smaller range of $`\tau `$.
## II The ISW Effect with an Unstable Neutrino: Formalism
To calculate the CMB fluctuations in the presence of a decaying massive neutrino, we first review the basic precepts of the pertinent linear perturbation theory. The perturbed homogeneous, isotropic FRW metric can be parametrized as
$$ds^2=a(\tau )^2\left[d\tau ^2(1+2\psi )-d\stackrel{}{x}^2(1-2\varphi )\right],$$
(1)
where $`a`$ is the scale factor normalized to unity today and $`\tau `$ is the conformal time defined by $`d\tau =dt/a`$, $`t`$ being the proper time of a comoving observer. This particular gauge is referred to as the conformal Newtonian gauge because the behavior of the potentials ($`\varphi `$, $`\psi `$) is akin, loosely speaking, to that of the Newtonian potential. These potentials determine the large scale CMB behavior. In particular, the photon temperature perturbation decomposed into its Fourier and angular modes can be shown to be
$$\Delta _\ell (k)=\int _0^{\tau _0}d\tau \,(\dot{\varphi }(k,\tau )+\dot{\psi }(k,\tau ))\mathrm{exp}(-\kappa (\tau ))j_\ell (k\tau _0-k\tau ),$$
(2)
where the subscript ‘0’ refers throughout to the present time and $`\kappa `$ is the optical depth from the present to some conformal time $`\tau `$ in the past. For the purpose of clarity, all the sources contributing to the anisotropy from inside the last scattering surface have been set to zero in Eq. 2. The effect of the sources contributing to the anisotropy between last scattering and the present (as given in Eq. 2) is called the Integrated Sachs-Wolfe (ISW) effect. The power in the $`\ell ^{th}`$ multipole is normally defined as $`\ell (\ell +1)C_\ell `$ with
$$C_\ell =(4\pi )^2\int _0^{\mathrm{\infty }}dk\,k^2|\Delta _\ell (k)|^2.$$
(3)
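To make eqs. (2) and (3) concrete, here is a minimal Python sketch of ours for the line-of-sight integral of a single mode, with a toy potential history and the optical depth set to zero (all inputs are illustrative, not CMBFAST output):

```python
# Delta_ell(k) = int dtau (phidot + psidot) e^{-kappa} j_ell(k(tau0 - tau)),
# evaluated on a uniform grid with kappa = 0.
import numpy as np
from scipy.special import spherical_jn

def delta_ell(ell, k, tau, dpot, tau0):
    j = spherical_jn(ell, k * (tau0 - tau))
    return np.sum(dpot * j) * (tau[1] - tau[0])

tau0 = 1.0
tau = np.linspace(0.01, tau0, 4000)
dpot = np.exp(-((tau - 0.1) / 0.05)**2)   # toy burst of potential decay
for ell in (5, 10, 40):
    print(ell, delta_ell(ell, 50.0, tau, dpot, tau0))
```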
The sources mentioned in connection with the ISW effect can be varied. At any time, the modes that are important to the ISW effect correspond to those scales which are smaller than the sound horizon of the whole fluid (matter+radiation) at that time. For these modes, the potentials can decay if there is radiation pressure or if the universe expands rapidly. In models with no cosmological constant, the main contribution to the ISW effect comes from just after recombination (since radiation redshifts faster than matter). Inclusion of a cosmological constant leading to a rapid expansion of the universe late in its history would boost the power on larger scales (small $`\mathrm{}`$). Any other astrophysical process which contributes to the radiation content of the universe between last scattering and the present will lead to an increase in the total ISW effect. One such scenario is that of a massive particle decaying around or after last scattering. We will consider the case of a massive neutrino decaying non-relativistically into (effectively) massless particles. The details of the daughter particles turn out to be irrelevant.
To quantify the evolution of the massive neutrino density, we will consider the Boltzmann equation for its distribution. In a homogeneous and isotropic universe, the distribution of the collisionless massive neutrino decaying non-relativistically into two massless particles follows
$$\frac{\partial }{\partial \tau }f_h^0(q_h,\tau )=-\frac{a^2m_h}{t_dϵ_h}f_h^0(q_h,\tau ),$$
(4)
where $`t_d`$ is the mean lifetime of the neutrino, and $`ϵ_h`$ and $`q_h`$ are the comoving energy and momentum: $`ϵ_h^2=q_h^2+m_h^2a^2`$, and a superscript ‘0’ will be used throughout to denote unperturbed quantities. We make the following simplifications throughout our treatment: (1) neglect inverse decays, (2) neglect spontaneous emission, (3) neglect the Pauli blocking factor. The solution approaches the familiar $`\mathrm{exp}(-t/t_d)`$ behavior as the neutrino becomes non-relativistic. The evolution equation for the energy density of the unstable neutrino is the integral of Eq. 4. It reads
$$\dot{\rho }_h^0+3\frac{\dot{a}}{a}(\rho _h^0+P_h^0)=-\frac{am_hn_h^0}{t_d},$$
(5)
where overdots represent differentiation with respect to conformal time. It should be noted (as it is important if the decay is not completely non-relativistic) that the right hand side contains the product of $`m_h`$ and $`n_h^0`$ (number density) and not $`\rho _h^0`$.
We now turn on the perturbations in the metric. Although the conformal Newtonian gauge is the most useful in which to understand the ISW effect, for computational purposes<sup>*</sup><sup>*</sup>*The main advantage is that CMBFAST is written in synchronous gauge. we will define all our variables in the synchronous gauge. Thus, we will express the integrand in Eq. 2 in terms of perturbations in the synchronous gauge. The synchronous gauge has the property that the coordinate time and the proper time of a freely falling observer coincide. All the perturbations are in the spatial part of the metric ($`g_{ij}=a^2\delta _{ij}+a^2h_{ij}`$) in this gauge. The perturbation $`h_{ij}`$ can be Fourier transformed and broken up into its trace and a traceless part as
$$h_{ij}(\stackrel{}{x},\tau )=\int d^3k\,\frac{\mathrm{exp}(i\stackrel{}{k}\cdot \stackrel{}{x})}{k^2}\left[h(\stackrel{}{k},\tau )k_ik_j+6\eta (\stackrel{}{k},\tau )\left(k_ik_j-\frac{k^2}{3}\delta _{ij}\right)\right].$$
(6)
Instead of working with the conjugate momentum in the perturbed space-time, we will use $`q_h`$ and $`ϵ_h`$ as defined above and in keeping with that, we will write out the perturbed massive neutrino distribution as
$$f_h(\stackrel{}{x},\stackrel{}{q}_h,\tau )=f_h^0(q_h,\tau )\left[1+\mathrm{\Psi }_h(\stackrel{}{x},\stackrel{}{q}_h,\tau )\right].$$
(7)
Due to the fact that the decay term is linear in $`f_h`$, the form of the equation for the evolution of $`\mathrm{\Psi }_h`$ is identical to that of the stable massive neutrino but with $`f_h^0`$ now given by Eq. 4. The stable massive neutrino case has been clearly worked out in Ref. .
The decay radiation rises exponentially from being negligible in the past to some maximum value at $`\tau \sim \tau _d`$ and then drops off as $`a^{-4}`$ like normal radiation. It is more informative therefore to follow the quantity $`r_{rd}=\rho _{rd}^0/\rho _\nu ^0`$ where ‘rd’ denotes the decay radiation and $`\rho _\nu ^0`$ is the cosmological density in a massless neutrino. The evolution equation for $`r_{rd}`$ is
$$\dot{r}_{rd}=\frac{m_hn_h^0}{\rho _\nu ^0}\frac{a}{t_d}.$$
(8)
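Equations (5) and (8) fix the background evolution. A minimal sketch of their numerical integration in the non-relativistic limit (in comoving variables, with an assumed matter-dominated scale factor and arbitrary units of our choosing):

```python
# Background decay: d(rho_h a^3)/dt = -(rho_h a^3)/t_d and
# d(rho_rd a^4)/dt = a (rho_h a^3)/t_d, equivalent to eqs. (5) and (8)
# for non-relativistic decays; r_rd is proportional to rho_rd a^4.
import numpy as np
from scipy.integrate import solve_ivp

t_d = 1.0
def rhs(t, y):                       # y = [rho_h a^3, rho_rd a^4]
    a = t**(2.0 / 3.0)               # matter domination
    return [-y[0] / t_d, a * y[0] / t_d]

sol = solve_ivp(rhs, [0.01, 10.0], [1.0, 0.0], rtol=1e-8)
print(sol.y[0][-1], sol.y[1][-1])    # comoving densities after many lifetimes
```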
The treatment of the perturbations in the decay radiation will be analogous to that of the massless neutrino as worked out in Ref. . To evolve the perturbations in the decay radiation, we will integrate out the momentum dependence in the distribution function by defining (in Fourier space)
$$F_{rd}(\stackrel{}{k},\widehat{n},\tau )=\frac{\int dq\,q^3f_{rd}^0(q,\tau )\mathrm{\Psi }_{rd}(\stackrel{}{k},q,\widehat{n},\tau )}{\int dq\,q^3f_{rd}^0(q,\tau )}r_{rd},$$
(9)
where $`\stackrel{}{q}=q\widehat{n}`$ and $`\mathrm{\Psi }_{rd}`$ is defined analogously to Eq. 7. The equation governing the evolution of $`F_{rd}`$ can be worked out to give
$`\dot{F}_{rd}+ik\mu F_{rd}+4\left({\displaystyle \frac{\dot{h}}{6}}+{\displaystyle \frac{\dot{h}+6\dot{\eta }}{3}}P_2(\mu )\right)r_{rd}`$ $`=`$ $`\dot{r}_{rd}\left(N_0-3iN_1P_1(\mu )-{\displaystyle \frac{2}{3}}N_2P_2(\mu )+\mathrm{\cdots }\right),`$ (10)
$`N_0(k,\tau )`$ $`=`$ $`{\displaystyle \frac{\int dq_h\,q_h^2f_h^0(q_h,\tau )\mathrm{\Psi }_h(k,q_h,\tau )\left(1-\frac{8}{3}\left(\frac{q_h}{am_h}\right)^2+\mathrm{\cdots }\right)}{\int dq_h\,q_h^2f_h^0(q_h,\tau )}},`$ (11)
where $`\mu =\widehat{k}\cdot \widehat{n}`$ and $`P_n(\mu )`$ are the Legendre polynomials of order $`n`$. The series of terms in these equations arises because the perturbed quantities depend on the direction of momentum, and to get the contribution to a daughter particle with momentum $`\stackrel{}{q}`$, we need to integrate over all possible $`\stackrel{}{q}_h`$. Thus Eq. 10 depends on both $`\mu `$ and $`\stackrel{}{q}\cdot \stackrel{}{q}_h`$. The situation simplifies enormously for non-relativistic decays because each term $`N_p`$, which contributes to the $`p^{th}`$ multipole progressively, is of $`𝒪(q_h^p/a^pm_h^p)`$ or higher. In Eqs. 10 and 11, the series has been truncated by only keeping terms up to $`𝒪(q_h^2/a^2m_h^2)`$ in the integrand. Similar equations for the evolution of perturbations in the decay radiation can be found in references . Apart from $`N_0`$, the terms on the right-hand side of Eq. (10) are completely negligible for non-relativistic decays.
The use of Eq. 10 is our only difference from the treatment in ref. . In the latter paper, the relativistic decay products were simply added to the neutrino background in CMBFAST. This is equivalent to setting the right hand side of Eq. 10 to zero. Since the perturbations in the decay products are determined by the perturbations in both the metric and the decaying massive particles, they are correctly described by Eq. 10. Although this may seem like a minor difference, it produces very large effects, as we now show.
## III The ISW Effect with an Unstable Neutrino: Results
The formalism outlined above for the evolution of an unstable neutrino and its decay products was integrated into the CMBFAST code . We investigated a range of masses from 10 eV to $`10^4`$ eV and lifetimes from $`10^{12}`$ to $`10^{18}`$ seconds. The underlying cosmology was taken to be a standard ($`\mathrm{\Omega }=1`$) CDM model with $`h=0.5`$ (with $`H_0=100h`$ km sec<sup>-1</sup> Mpc<sup>-1</sup>); baryon density $`\mathrm{\Omega }_Bh^2=.02`$ and scale invariant isentropic initial conditions (the same model was used in ref. ). Our results are shown in Fig. 1 for several masses and lifetimes, along with the results obtained by simply adding the decay products to the relativistic background. As pointed out in Ref. there is indeed an enhancement in the spectra at relatively large scales due to the ISW effect produced by the decaying neutrino. We will see in section IV that for many values of neutrino mass and lifetime, the spectrum produced is far from that observed today, and therefore a large region of parameter space is ruled out due to this effect.
The location of this ISW induced bump is determined by the lifetime of the neutrino. For lifetimes shorter than the age of the universe, inhomogeneities on scales $`k`$ project onto angular scales $`\ell \sim k\tau _0`$ where $`\tau _0`$ is the conformal time today, and we assume a flat universe. The potentials vary in time (and hence cause the ISW effect) most significantly at the time of decay on scales of order the sound horizon: $`k_{sh}^2\approx 3/(4\tau _d^2w)`$ where $`w=P/\rho `$. Therefore, the bump in the spectrum is produced at $`\ell \sim k_{sh}\tau _0\sim (\tau _0/\tau _d)(4w/3)^{-1/2}`$. At these late times, the dominant contribution to $`w`$ comes from the decay radiation; hence $`w\approx \mathrm{\Omega }_{rd}/3`$ where $`\mathrm{\Omega }_{rd}`$ is the fraction of critical density in decay radiation. Therefore, the ISW bump should be roughly at
$$\ell _{ISW}\sim \frac{\tau _0}{\tau _d}\sqrt{\frac{9}{4\mathrm{\Omega }_{rd}(\tau _d)}}.$$
(12)
For a matter dominated universe the conformal time and time are related as follows: $`\tau \propto t^{1/3}`$. For a $`m_h=10`$ eV, $`t_d=10^{15}`$ sec neutrino, $`\mathrm{\Omega }_{rd}\approx 0.15`$ and $`\tau _0/\tau _d\approx (4\times 10^{17}\mathrm{sec}/10^{15}\mathrm{sec})^{1/3}\approx 7.4`$. Therefore, in this case we expect $`\ell _{ISW}\approx 29`$. The actual peak occurs at a larger value of $`\ell `$, due to entropy fluctuations which decrease $`w`$, thereby increasing $`k_{sh}`$ and, finally, $`\ell _{ISW}`$.
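For definiteness, the arithmetic of this estimate is reproduced in the following snippet (inputs taken from the numbers just quoted):

```python
import numpy as np

Omega_rd = 0.15
tau0_over_taud = (4e17 / 1e15)**(1.0 / 3.0)   # tau ~ t^(1/3) in the matter era
print(tau0_over_taud)                                      # ~7.4
print(tau0_over_taud * np.sqrt(9.0 / (4.0 * Omega_rd)))    # ell_ISW ~ 29
```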
Notice from Figure 1 that we find quantitative disagreement with the results of Lopez et al. (dashed curves). The new results show that a more accurate treatment of the spatial distribution of the decay products produces a surprisingly large change in the CMB fluctuation spectrum compared to the results of reference . This difference is larger for smaller masses as can be seen in the figure.
At least for low masses, the most obvious difference between the old and new spectra is the smaller size of the ISW effect in the new case. This difference has a physical explanation: by not properly treating the perturbations in the decay radiation we overestimate an important source of the potential decays that drive the ISW effect. To see this we first expand the Boltzmann equation for decay radiation perturbations, Eq. 9, in multipole moments, $`F_{rd}=\sum _lF_{rd,l}P_l`$, to obtain the following hierarchy, shown here for $`l\le 2`$:
$`\dot{\delta }_{rd}+{\displaystyle \frac{2}{3}}\left(\dot{h}+2\theta _{rd}\right)`$ $`=`$ $`{\displaystyle \frac{\dot{r}_{rd}}{r_{rd}}}\left(\delta _h-\delta _{rd}\right),`$ (13)
$`\dot{\theta }_{rd}-k^2\left({\displaystyle \frac{\delta _{rd}}{4}}-\sigma _{rd}\right)`$ $`=`$ $`-{\displaystyle \frac{\dot{r}_{rd}}{r_{rd}}}\theta _{rd},`$ (14)
$`\dot{\sigma }_{rd}-{\displaystyle \frac{2}{15}}\left(2\theta _{rd}+\dot{h}+6\dot{\eta }\right)`$ $`=`$ $`-{\displaystyle \frac{\dot{r}_{rd}}{r_{rd}}}\sigma _{rd},`$ (15)
where $`\delta _{rd}=F_{rd,0}/r_{rd}`$, $`\theta _{rd}=3kF_{rd,1}/4r_{rd}`$ and $`\sigma _{rd}=F_{rd,2}/2r_{rd}`$. The treatment of Lopez et al. is equivalent to neglecting the right hand sides of the equations above. This simplification breaks down near $`\tau \sim \tau _d`$, where $`\dot{r}_{rd}/r_{rd}`$ is not negligible.
Neglecting the $`\dot{r}_{rd}/r_{rd}`$ terms in the Boltzmann equations for the decay radiation perturbations results in errors in the perturbations. Let us focus on $`\theta _{rd}`$, which turns out to be primarily responsible for the big difference. Consider Eq. 14 for modes above the horizon at $`\tau \sim \tau _d`$, since for these modes the approximate treatment of Ref. gives wrong results. For these modes the $`k^2`$ terms calculated in the approximate scheme can be shown (see Appendix) to be roughly similar to their exact values. Then the exact solution $`\theta _{rd}`$ is related to the approximate solution $`\theta _{rd}^a`$ by
$$\dot{\theta }_{rd}\approx \dot{\theta }_{rd}^a-\frac{\dot{r}_{rd}}{r_{rd}}\theta _{rd}.$$
(16)
where the superscript $`a`$, here and in what follows, denotes the solution to the set of equations 13-15 obtained in the approximate scheme by neglecting the feedback terms on the right hand side. The exact solution for $`\theta _{rd}`$ is therefore much smaller than the approximate one. Examples for several different modes are shown in Figure 2.
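The suppression implied by eq. (16) can be illustrated with a toy integration, replacing the true source and $`r_{rd}(t)`$ by simple stand-ins of our choosing:

```python
# Toy version of eq. (16): the (rdot/r) friction term suppresses the exact
# dipole relative to the approximate one near t ~ t_d.
import numpy as np
from scipy.integrate import solve_ivp

t_d, S = 1.0, 1.0                              # lifetime and a constant source
r = lambda t: 1.0 - np.exp(-t / t_d)           # stand-in for r_rd(t)
fric = lambda t: np.exp(-t / t_d) / (t_d * r(t))

approx = solve_ivp(lambda t, y: [S], [0.05, 3.0], [0.0])
exact = solve_ivp(lambda t, y: [S - fric(t) * y[0]], [0.05, 3.0], [0.0])
print(approx.y[0][-1], exact.y[0][-1])         # exact theta is much smaller
```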
These large overestimates of $`\theta _{rd}`$ lead to correspondingly large overestimates of the ISW effect and are primarily responsible for the differences between our spectra and those generated in Ref. . The Appendix demonstrates precisely how the perturbations in the decay-produced radiation affect the potentials that govern the ISW effect, and how treating the decay products as identical to the massless neutrinos violates energy-momentum conservation. The bottom line is that the ISW effect depends significantly on the behavior of $`\theta _{rd}`$ and inaccuracies in it lead directly to inaccuracies in the $`C_l`$’s. Why does the approximation work better for higher mass neutrinos? The ISW effect is generated during times when the universe has appreciable radiation. For low-mass neutrinos whose decay radiation never dominates the energy density, the decay radiation redshifts away relative to the matter, and is only important near $`\tau \sim \tau _d`$. Therefore, neglecting the $`\dot{r}_{rd}/r_{rd}`$ terms creates errors in the decay radiation perturbations at the crucial time when they are driving the ISW effect. If the neutrino is massive enough, then its decay products are important for a range of times with $`\tau \gg \tau _d`$ when the approximation is good. So the approximate treatment works better for higher-mass neutrinos, like the $`m_h=10`$ keV, $`t_d=10^{12}`$ sec case.
There are other visible differences between the anisotropy spectra generated in reference and our more accurate treatment. One difference, which exacerbates the rise in power at large scales, is a drop in the small-scale ISW effect. For modes which enter the horizon when there is significant radiation, the $`\delta _h`$ term in Eq. (13) is an important source term. This increases $`\delta _{rd}`$ relative to $`\delta _{rd}^a`$ and since $`\delta _{rd}`$ is a source for the evolution of $`\theta _{rd}`$, it implies that $`\theta _{rd}^a<\theta _{rd}`$. Thus there is a decrease in the ISW effect at small scales in the approximate scheme of ref . This is not visible for the 10 eV unstable neutrino (in Fig. 1) because of the comparatively large signature of the first peak, but it is readily apparent for the 10 keV neutrino because of the large ISW effect at small scales.
## IV Comparison with current CMB data
Since the detection of anisotropies in the CMB by COBE, there have been dozens of observations of anisotropies on a wide variety of angular scales (refs. -). We now use these observations to place more accurate limits on neutrino mass and lifetime.
In ref. , a very rough constraint was placed on decaying neutrino models: a model was excluded if the power at $`l=10`$ was greater than at $`l=200`$. As we have noted in the previous section, a more accurate treatment of the decaying neutrinos results in a much smaller distortion in the CMB spectrum for a certain range of neutrino masses and lifetimes. However, as we will see, consideration of all the data leads to constraints which are almost as stringent as the rough contours in ref. .
CMB experiments typically report an estimate of the band power
$$\widehat{C}_i=\frac{1}{4\pi }\frac{\sum _l(2l+1)W_{i,l}C_l}{\sum _lW_{i,l}/l}$$
(17)
where $`W_{i,l}`$ is the window function which depends on beam size and chopping strategy of experiment $`i`$. Each of these comes with an error bar or, in the case of correlated measurements, an error matrix $`M^1`$. The naive way to constrain parameters in a theory then is to form
$$\chi ^2=\sum _{i,i^{\prime }}\left(\widehat{C}_i-C_i(C_l)\right)M_{ii^{\prime }}\left(\widehat{C}_{i^{\prime }}-C_{i^{\prime }}(C_l)\right).$$
(18)
Here we have explicitly written the dependence of $`C_i`$ on the theoretical $`C_l`$’s which in turn depend on the cosmological parameters. This naive statistic is useful only if the band power errors are Gaussian. In fact, the probability distribution is typically non-Gaussian, with a large tail at the high end and a sharp rise at the low end of the distribution. In recognition of this, and guided by some compelling theoretical arguments, Bond, Jaffe, and Knox proposed forming an alternative statistic:
$$\chi ^2=\sum _{i,i^{\prime }}\left(\widehat{Z}_i-Z_i(C_l)\right)M_{ii^{\prime }}^Z\left(\widehat{Z}_{i^{\prime }}-Z_{i^{\prime }}(C_l)\right)$$
(19)
where
$$Z_i\equiv \mathrm{ln}\left(C_i+x_i\right)$$
(20)
with $`x_i`$ an experiment dependent quantity, determined by the noise. The covariance matrix is now
$$M_{ij}^Z=\left(\widehat{C}_i+x_i\right)M_{ij}\left(\widehat{C}_j+x_j\right).$$
(21)
Bond, Jaffe, and Knox have tabulated and made available the relevant data from the experiments in refs. -. We use this information and formalism to constrain the mass and lifetime of unstable neutrinos; we also account for calibration uncertainty in the manner set down in ref. .
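A compact sketch of the offset-lognormal statistic of eqs. (19)-(21) (the band powers, offsets and weight matrix below are hypothetical placeholders, and $`M`$ denotes the inverse band-power covariance):

```python
# Offset-lognormal chi^2 of Bond, Jaffe, and Knox, eqs. (19)-(21).
import numpy as np

def bjk_chi2(C_hat, C_th, x, M):
    r = np.log(C_hat + x) - np.log(C_th + x)
    D = np.diag(C_hat + x)
    return r @ (D @ M @ D) @ r          # eq. (21) transforms the weights

C_hat = np.array([800.0, 2100.0, 2500.0])          # band powers (uK^2)
x = np.array([150.0, 300.0, 400.0])                # noise offsets
M = np.diag(1.0 / np.array([200.0, 500.0, 600.0])**2)
print(bjk_chi2(C_hat, np.array([900.0, 2000.0, 2400.0]), x, M))
```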
The $`\chi ^2`$ in eq. 19 depends on the parameters of the cosmological model. In principle, it would be nice to allow as many parameters as possible to vary in addition to the mass and lifetime of the neutrino. This must be balanced against the constraints imposed by the non-negligible time needed to run the modified version of CMBFAST (the modified version, accounting for decaying neutrinos, takes about ten times longer than the plain vanilla code). Our strategy is to vary the mass and lifetime of the neutrino; the overall normalization of the $`C_l`$’s; the primordial spectral index (equal to one for Harrison-Zel’dovich fluctuations); and the calibration of each experiment. For the other cosmological parameters, we make “conservative” choices. That is, we choose values likely to make the power on small scales ($`l\sim 200`$) as large as possible compared with the power on large scales. This acts against the effect of the decaying neutrino, which boosts power on large scales, and therefore leads to more conservative limits. At each point in $`(m,\tau )`$ space, we use a Levenberg-Marquardt algorithm (see e.g. ) to find the values of normalization, spectral index, and calibration which minimize the $`\chi ^2`$ defined in Eq. 19. The contours in Fig. 3 show these best-fit $`\chi ^2`$ in the $`(m,\tau )`$ plane.
Figure 3 shows the constraints on the neutrino mass and lifetime for a Hubble constant $`h=0.5`$ and $`\mathrm{\Omega }_Bh^2=.02`$ in a flat ($`\mathrm{\Omega }=1`$) matter dominated ($`\mathrm{\Omega }_\mathrm{\Lambda }=0`$) universe. The high baryon content is above the favored value of Tytler and Burles and serves to raise the power on small scales. Masses greater than $`100`$ eV are ruled out for almost all lifetimes we have explored ($`\tau >10^{12}`$ sec). For lifetimes between $`10^{14}`$ and $`10^{15}`$ sec, masses as low as $`30`$ eV are excluded at the two-sigma level. These results are similar to those of ref. , but more reliable because of the improvements in the calculated spectra and the more careful treatment of the data.
We checked that the contours for a different set of $`(h,\mathrm{\Omega }_b)`$ were similar to the contours in Fig 3. Fig. 4 shows the results for a cosmological constant-dominated universe. Again, a sizable region is ruled out, reflecting the robustness of the constraint.
Hannestad performed a similar calculation, using future CMB experiments to rule out decaying neutrino models, but he used the same approximation as in reference ; the decay products were added into the background neutrino density. We expect that his excluded-region contours for low masses should shrink, since the ISW effect is the main discriminator for these masses.
It has been noted that the decay products from a very massive neutrino could keep the universe substantially populated with radiation or even radiation-dominated for most of its history. The presence of radiation has the effect of stopping the growth of density perturbations, which in a matter-dominated universe would grow as $`\delta \propto a`$. Since these density perturbations should (eventually) collapse into the structure we see today, it is clear that structure formation arguments can also provide constraints on the neutrino mass and lifetime. Very coarse constraints on the radiation density can be placed by requiring that the scales relevant to structure formation are able to grow sufficiently (assuming, of course, that we know the initial perturbations), as is done in Ref. . In fact, for a scale-invariant initial spectrum, the structure formation arguments of Ref. also rule out a region at the bottom-right of our excluded region. A more detailed analysis yields more stringent constraints . In light of this, it is important to understand that the constraints from CMB are most useful for low masses, i.e., for massive decaying neutrinos which do not affect the late-time growth of the density perturbations appreciably. Future experiments (MAP and PLANCK) have the potential to constrain neutrino masses as low as 1 eV and maybe even lower . In the end, CMB and large scale structure constraints on massive decaying neutrinos both overlap and complement each other.
## V Conclusions
Our results indicate that for calculations involving the effects of decaying particles on CMB fluctuations, exact conservation of energy-momentum (not just conservation of the mean energy-momentum) is crucial. When perturbations in the decay products are correctly treated as being determined by the perturbations in both the metric and the decaying massive particle, energy and momentum of the massive particle plus its decay products are conserved. The result is a much smaller change (when an unstable neutrino is added) in the CMB fluctuation spectrum than was noted in ref. . However, by using a comparison with current data, rather than a simple constraint on $`C_{200}/C_{10}`$, we have been able to obtain an excluded region only slightly less restrictive than that obtained in ref. . This excluded region will grow as more data becomes available, culminating potentially in very restrictive limits from MAP and PLANCK . Our results, of course, can be generalized to arbitrary decaying particles.
The CMB spectra used in this work were generated with a modified version of CMBFAST . We thank Lloyd Knox for providing the data used to generate the constraints in section IV. This work was supported by the DOE and the NASA grant NAG 5-7092 at Fermilab and by the DOE grant DE-FG02-91ER40690 at Ohio State.
## A Source of Decaying Potentials
Here we show that the ISW effect in the decaying neutrino model is primarily driven by the dipole of the decay-produced radiation, $`\theta _{rd}`$. The ISW effect is driven by time changes to the potentials (eq. 2), which are determined from Einstein’s equations. In synchronous gauge, the source of these time changes is
$$\dot{\varphi }+\dot{\psi }=t_1+t_2+t_3+t_4,$$
(A1)
where
$`t_1=\left[2+{\displaystyle \frac{3}{k^2}}\left({\displaystyle \frac{\dot{a}}{a}}\right)^2(5+3w)\right]\dot{\eta },`$ $`t_2=-{\displaystyle \frac{1}{2k^2}}\left({\displaystyle \frac{\dot{a}}{a}}\right)^2(5+3w)\dot{h},`$ $`t_3=2{\displaystyle \frac{\dot{a}}{a}}\eta ,`$ $`t_4={\displaystyle \frac{3}{k^2}}\left(2{\displaystyle \frac{\dot{a}}{a}}D_\sigma -\dot{D}_\sigma \right).`$
The quantity $`D_\sigma `$ is related to the anisotropic stress of the fluid: $`D_\sigma =(3/2)(\dot{a}/a)^2(1+w)\sigma `$, where $`w=P/\rho `$ is the equation of state of the universe. We will consider the behavior of superhorizon-scale perturbations, where $`k\tau \ll 1`$. We assume that the neutrinos decay well into the matter-dominated phase of the universe, and that the decay radiation never dominates the energy density of the universe, but does come to dominate the standard radiation, i.e., photons and massless neutrinos. Then the equation of state takes a simple form near neutrino decay: $`w\approx \frac{1}{3}\mathrm{\Omega }_{rd}`$. In addition, the total fluid perturbation sources $`\theta `$ and $`\sigma `$ are dominated by the decay radiation, so that we can write $`\theta \approx 4w\theta _{rd}`$, and $`\sigma \approx 4w\sigma _{rd}`$. These assumptions are well motivated for $`m_h=10`$ eV, $`t_d=10^{15}`$ sec neutrinos which decay well into the matter dominated era, with $`\mathrm{\Omega }_{rd}\approx 0.15`$ at decay.
We first examine the behavior of $`\dot{\varphi }+\dot{\psi }`$ in the approximation where we neglect the $`\dot{r}_{rd}/r_{rd}`$ terms in the Boltzmann equations for the decay radiation perturbations, i.e., we treat the decay radiation as massless neutrinos as in Lopez et al. . We then consider the effect of relaxing the approximation and calculating the decay radiation perturbations correctly. We denote the use of the Lopez et al. approximation in all quantities by the superscript-$`a`$.
The potentials do not decay in a completely matter-dominated universe; $`\dot{\varphi }+\dot{\psi }`$ is sourced by the decay radiation and is therefore first order in $`w`$. The term $`t_4^a`$ is directly related to $`\sigma `$ and so is of order $`w`$. The linearized Einstein equations imply that $`\dot{\eta }\propto \theta \sim w\theta _{rd}`$, so that $`t_1^a`$ is also of order $`w`$. However, $`t_2^a`$ and $`t_3^a`$ are each zeroth order in $`w`$, so their sum must cancel to lowest order. Using the linearized Einstein equations and the continuity equation we find that
$$t_2^a+t_3^a\approx 8\frac{w\eta }{\tau },$$
(A3)
demonstrating the required cancellation. In our approximation, the decay radiation perturbations can be calculated from the Boltzmann equation for massless neutrinos, which admit analytic solutions for the superhorizon modes of $`\theta _{rd}^a`$ and $`\sigma _{rd}^a`$. Using these solutions, it can be shown that
$$t_1^a\approx 29\frac{w\eta }{\tau },\qquad t_4^a\approx 12\frac{w\eta }{\tau },$$
(A4)
so that $`t_1^a`$, $`t_2^a+t_3^a`$ and $`t_4^a`$ each contribute roughly comparable amounts to $`\dot{\varphi }+\dot{\psi }`$. In calculating the effect of the approximation on $`\dot{\varphi }+\dot{\psi }`$ we will therefore have to consider each term separately. The quantities $`D_\sigma `$ and $`\dot{\eta }`$ are very much affected by the approximation, since they directly depend on the decay radiation perturbations, and the error in the decay radiation perturbations is of order the quantities themselves. This implies that $`\delta \dot{\eta }\sim |\dot{\eta }|`$ and $`\delta D_\sigma \sim |D_\sigma |`$, where $`\delta x\equiv |x-x^a|`$ is the absolute error in the variable $`x`$. The zeroth order quantities $`\eta `$ and $`\dot{h}`$ are much less affected by the approximation. For super-horizon modes, we do not expect $`\eta `$ to evolve much from its initial value, and so the error in it (determined by the error in $`\dot{\eta }`$) is naturally small. Following this line of reasoning, one can write
$`\delta \eta `$ $`\sim `$ $`\delta \dot{\eta }\,\tau \sim k^2\tau ^2w\eta ,`$ (A5)
$`\delta \dot{h}`$ $`\sim `$ $`k^2\tau \,\delta \eta \sim k^4\tau ^3w\eta .`$ (A6)
Using these relations we find that
$$\delta t_1\sim \frac{w\eta }{\tau },\qquad \delta t_2\sim (k\tau )^2\frac{w\eta }{\tau },\qquad \delta t_3\sim (k\tau )^2\frac{w\eta }{\tau },\qquad \delta t_4\sim \frac{w\eta }{\tau },$$
(A7)
which makes it clear that for superhorizon modes, the errors in $`t_1`$ and $`t_4`$ dominate the error in $`\dot{\varphi }+\dot{\psi }`$. Numerically it is seen that the error in $`t_1`$ is the most important.
We can see why the error in $`t_1`$ is the most important in a simple way. The dominant source term (see eq. 14) for $`\dot{\theta }_{rd}^a`$ is $`\delta _{rd}^a`$. Now $`\dot{\delta }_{rd}`$ and $`\dot{\delta }_{rd}^a`$ differ only by the terms on the right-hand side of eq. 13 (since $`\dot{h}`$ is not much affected by the approximation and $`\theta _{rd}\ll \dot{h}`$ for super-horizon modes). But the right-hand side of eq. 13 contains the difference of $`\delta _h`$ and $`\delta _{rd}`$, and hence the fractional error in $`\delta _{rd}`$ is expected to be much smaller relative to that in $`\theta _{rd}`$ or $`\sigma _{rd}`$. So we can assume that $`\delta _{rd}`$ and $`\delta _{rd}^a`$ are roughly the same for the purpose of estimating the errors in $`\theta _{rd}`$ and $`\sigma _{rd}`$ in the approximate scheme. Therefore, for super-horizon modes (at $`\tau \sim \tau _d`$), we can write to a good approximation
$$\dot{\theta }_{rd}\approx \dot{\theta }_{rd}^a-\frac{\dot{r}_{rd}}{r_{rd}}\theta _{rd}\quad \text{and}\quad \dot{\sigma }_{rd}\approx \dot{\sigma }_{rd}^a-\frac{\dot{r}_{rd}}{r_{rd}}\sigma _{rd}.$$
(A8)
From this equation, we can gauge that the fractional errors in $`\theta _{rd}`$ and $`\sigma _{rd}`$ are roughly the same, and close to $`\tau \dot{r}_{rd}/r_{rd}`$. But since the coefficient for $`t_4`$ in $`\dot{\varphi }+\dot{\psi }`$ is much less than that for $`t_1`$ (see eq. A4), we expect that the error in $`t_1`$ dominates, which implies that $`\delta (\dot{\varphi }+\dot{\psi })\sim \dot{\eta }\propto \theta _{rd}`$. From eq. A8 we have that $`\theta _{rd}^a>\theta _{rd}`$ for super-horizon modes at $`\tau \sim \tau _d`$. Therefore for these modes,
$$\left|\dot{\varphi }^a+\dot{\psi }^a\right|>\left|\dot{\varphi }+\dot{\psi }\right|.$$
(A9)
This is the reason for the dramatic rise in power at large scales when we neglect the decay terms in the decay radiation Boltzmann equations.
Merely adding the decay radiation to the massless neutrino background causes large errors in the ISW effect. However, there exists a method of calculation that yields good results without introducing a separate Boltzmann hierarchy for the decay radiation. Since this method might be helpful for other late time processes which affect the CMB spectrum, and since it demonstrates that we really have isolated the source of our disagreement with Lopez et al. , we present it here.
The fix can be accomplished by explicitly evolving $`\alpha =1/(2k^2)(\dot{h}+6\dot{\eta })`$, a quantity that contains the problematic $`\dot{\eta }`$, within CMBFAST in the following differential equation:
$$\dot{\alpha }+2\frac{\dot{a}}{a}\alpha =\eta -\frac{9}{2k^2}\left(\frac{\dot{a}}{a}\right)^2(1+w)\sigma $$
(A10)
The dominant quantity on the right-hand side is $`\eta `$, which is quite unaffected by the approximation (recall that $`\delta \eta \sim k^2\tau ^2w\eta `$). In contrast, when $`\alpha `$ is set by the equation (the default in CMBFAST),
$$\alpha =\frac{a}{\dot{a}}\eta +\frac{3}{2k^2}\frac{\dot{a}}{a}\delta +\frac{9}{2k^4}\left(\frac{\dot{a}}{a}\right)^2(1+w)\theta ,$$
(A11)
the $`\theta `$ term is important, and inaccuracies in the large scale behavior are generated. The condition that $`\dot{\alpha }`$ given by Eq. (A10) should match that obtained from Eq. (A11) is conservation of momentum for the massive neutrino and its decay products. In the approximate scheme, the unperturbed quantities for the decay radiation are calculated correctly, while the perturbations in it are set equal to those of the massless neutrinos. This violates the energy-momentum conservation conditions for the system of the massive neutrino plus its decay products, and this is the reason behind the fact that different combinations of the Einstein equations lead to different potential decay rates. It may be noted that the CMBFAST code used to calculate the fluctuation spectrum implicitly assumes energy-momentum conservation. When this condition is violated, the code cannot produce internally consistent results. In Fig. 5, we have plotted the result of using Eq. A10 (in place of Eq. A11) with the approximate scheme and, as expected, good agreement with the actual curves is obtained. This exercise clearly shows that it is important to check for energy-momentum conservation when using approximate methods to model any part of the energy-momentum tensor.
# The isolated neutron star candidate RX J1605.3+3249
## 1 Introduction
Several arguments based on the present metallicity of the interstellar medium, the rate of supernovae, and the properties of the observed radio pulsar population indicate that of the order of 10<sup>8</sup> to 10<sup>9</sup> old isolated neutron stars (INS) should exist in the Galaxy. Ostriker, Rees & Silk (ostricker (1970)) were the first to propose that a sizeable fraction of these old and radio-quiet neutron stars could be heated by accretion from the interstellar medium and become detectable again through their far-UV and soft X-ray emission. Early modelling of this population by Treves & Colpi (treves91 (1991)) and Madau & Blaes (mb94 (1994)) led to the conclusion that re-heated old neutron stars should indeed appear in large numbers in soft X-ray and UV all-sky surveys, with a possible concentration in the directions of highest interstellar medium densities, namely the galactic plane in general and molecular clouds in particular.
Boosted by these predictions, several optical identification campaigns of ROSAT X-ray and UV sources were initiated, and it readily became clear that the number of possible isolated neutron star candidates was substantially below average model predictions (e.g., Manning et al. manning (1996), Motch et al. motch97 (1997), Danner danner1 (1998)). However, four good isolated neutron star candidates have so far been discovered in the ROSAT all-sky survey. These candidates share as common properties a soft X-ray spectrum with black body temperatures below 100 eV, no detected radio emission, and a very high F<sub>X</sub>/F<sub>opt</sub> ratio, in excess of 10<sup>4</sup>. The X-ray brightest of these candidates are RX J1856.5-3754, which was optically identified with a V=25.6 blue object (Walter et al. walter97 (1997)), and the pulsating source RX J0720.4-3125 (Haberl et al. haberl97 (1997)), which has no counterpart brighter than B=26.1 (Motch & Haberl mh98 (1998), Kulkarni & van Kerkwijk KvK98 (1998)). Other good cases are RX J0806.4–4123 (Haberl et al. haberl98 (1998)) and 1RXS J130848.6+212708 (Schwope et al. schwope (1999)).
In the meantime, the possibility that a fraction of these candidates could rather be young neutron stars has gained considerable credit. The lack of detectable radio emission from these sources could be explained by a position in the (B, P<sub>spin</sub>) plane beyond the radio pulsar death line or, more simply, by beaming effects, which are stronger at long spin periods (Wang et al. wang98 (1998)). Finally, RXTE observations have demonstrated that soft $`\gamma `$-ray repeaters are newly born neutron stars with extremely high magnetic fields (Kouveliotou et al. kou98 (1998)) and belong to the population of magnetars proposed by Duncan & Thompson (duncan92 (1992)). As radio emission can be quenched by the strong magnetic field, these objects may remain undetected by classical radio means, and their birth rate could amount to 10% of that of ordinary pulsars (Kouveliotou et al. kou94 (1994)). Since the primary store of energy in a magnetar is that in the magnetic field, B decay could constitute a significant source of heat, allowing magnetars to remain detectable in X-rays over longer times than ordinary pulsars (Heyl & Kulkarni hk98 (1998)).
In fact, several of the INS found so far could be young neutron stars, perhaps descendants of soft $`\gamma `$-ray repeaters, rather than old accreting INS as originally thought. Detailed study of the few known INS and determination of their X-ray powering mechanism is therefore of high importance.
In this paper, we report on X-ray and optical observations of one of the X-ray brightest isolated neutron star candidates. The source RX J1605.3+3249, also known as RBS 1556 (Schwope et al. schwope (1999)), was extracted from the ROSAT all-sky survey on the basis of its soft spectrum and the lack of a bright optical or radio counterpart.
## 2 Selection of isolated neutron star candidates from the ROSAT all-sky survey
Thanks to its soft X-ray sensitivity, well suited to the detection of sources with $`T_{\mathrm{bb}}`$ = 20-100 eV, as expected from young cooling neutron stars or from old ones re-heated by accretion, the ROSAT all-sky survey offers a highly valuable database for detecting these elusive objects. In order to find candidates, we selected ROSAT all-sky survey sources displaying HR1 and HR2 hardness ratios compatible with intrinsically soft spectra slightly modified by a reasonable amount of interstellar absorption. Hardness ratios 1 and 2 are defined as
$$\mathrm{HR1}=\frac{(0.5-2.0)-(0.1-0.4)}{(0.1-0.4)+(0.5-2.0)}$$
$$\mathrm{HR2}=\frac{(1.0-2.0)-(0.5-1.0)}{(0.5-1.0)+(1.0-2.0)}$$
where (A-B) is the raw background-corrected source count rate in the A-B energy range expressed in keV. Based on simulations of black body energy distributions folded with the ROSAT PSPC response we decided to extract all-sky survey sources having hardness ratios compatible (i.e. within one standard error value) with HR1 $`\le `$ -0.25 and HR2 $`\le `$ -0.5. This parameter space corresponds to $`T_{\mathrm{bb}}`$ $`\le `$ 100 eV and N<sub>H</sub> $`\le `$ 4$`\times `$10<sup>21</sup> to 3$`\times `$10<sup>20</sup> cm<sup>-2</sup> for $`T_{\mathrm{bb}}`$ = 40 eV and $`T_{\mathrm{bb}}`$ = 100 eV respectively. Our observational strategy was then to identify all sources in large sky areas down to the faintest X-ray flux level possible in order to find new candidates and also efficiently constrain the space density of these objects. Results from this global study will be presented in a later paper. The possible INS nature of RX J1605.3+3249 = 1RXS J160518.8+324907 was discovered while identifying the northern sample. 1RXS J160518.8+324907 has a count rate of 0.875$`\pm `$0.041 cts/s, HR1 = -0.70$`\pm `$0.03 and HR2 = -0.58$`\pm `$0.10. To our knowledge, RX J1605.3+3249 is the brightest isolated neutron star candidate in the northern hemisphere.
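For illustration, the hardness ratios follow directly from the band count rates; here is a minimal sketch (the band argument names and example rates are ours, not from the paper):

```python
def hardness_ratios(c_01_04, c_05_10, c_10_20):
    """ROSAT PSPC hardness ratios from background-corrected count rates
    (cts/s) in the 0.1-0.4, 0.5-1.0 and 1.0-2.0 keV bands; the 0.5-2.0 keV
    rate is the sum of the last two."""
    soft, hard = c_01_04, c_05_10 + c_10_20
    hr1 = (hard - soft) / (hard + soft)
    hr2 = (c_10_20 - c_05_10) / (c_10_20 + c_05_10)
    return hr1, hr2

# A very soft source: most counts below 0.4 keV drive HR1 towards -1
print(hardness_ratios(0.8, 0.05, 0.02))   # approx (-0.84, -0.43)
```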
## 3 ROSAT observations
ROSAT observed the field of RX J1605.3+3249 in pointed mode on two occasions. The first observation was carried out during the 1998 PSPC revival period from 1998 February 18 till 22 for a total exposure time of 4413 s. The second observation was performed with the HRI from 1998 March 2 to 4 and lasted 19307 s. The source was detected on both occasions.
### 3.1 X-ray spectral analysis
The time averaged PSPC spectrum is well represented by a blackbody energy distribution with best fit parameters of $`T_{\mathrm{bb}}`$ = 92 eV and N<sub>H</sub> = 1.1 10<sup>20</sup> cm<sup>-2</sup> ($`\chi _{44}^2`$ = 50.9). At the 95% confidence level, the allowed range is $`T_{\mathrm{bb}}`$ = 86 - 98 eV and N<sub>H</sub> = 0.6 - 1.5 10<sup>20</sup> cm<sup>-2</sup>. We show in Figs. 1 and 2 the best blackbody fit and the corresponding allowed spectral parameter range. The observed flux corresponds to a bolometric luminosity of L<sub>bol</sub> = 1.1$`\times `$10<sup>31</sup> ($`d`$/100 pc)<sup>2</sup> erg s<sup>-1</sup>. Assuming isotropic blackbody emission, the source radius scales as $`R`$ = 1.1 km ($`d`$/100 pc). Black body temperature and line of sight absorption compare well with those observed from other INS candidates such as RX J1856.5-3754 ($`T_{\mathrm{bb}}`$ = 57$`\pm `$1 eV, N<sub>H</sub> = 1.4$`\pm `$0.1 10<sup>20</sup> cm<sup>-2</sup>; Walter, Wolk & Neuhäuser walter96 (1996)) and RX J0720.4-3125 ($`T_{\mathrm{bb}}`$ = 79$`\pm `$4 eV, N<sub>H</sub> = 1.3$`\pm `$0.3 10<sup>20</sup> cm<sup>-2</sup>; Haberl et al. haberl97 (1997)).
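As a quick consistency check of the quoted emitting radius, one can invert the blackbody relation numerically (a sketch using standard cgs constants; not from the paper):

```python
import numpy as np

SIGMA_SB = 5.6704e-5    # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
EV_TO_K = 1.1605e4      # Kelvin per eV

def blackbody_radius_km(L_bol, T_bb_eV):
    """Radius (km) of a sphere radiating L_bol (erg/s) as a blackbody at T_bb."""
    T = T_bb_eV * EV_TO_K
    return np.sqrt(L_bol / (4.0 * np.pi * SIGMA_SB * T**4)) / 1.0e5

# L_bol = 1.1e31 (d/100 pc)^2 erg/s with T_bb = 92 eV gives R = 1.1 km (d/100 pc)
print(blackbody_radius_km(1.1e31, 92.0))
```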
### 3.2 Search for X-ray variability
Source flux appears remarkably constant over time scales from weeks to years. We list in Table 1 the various source intensity measurements, which are all compatible with a strictly constant emission. For the HRI observation we computed an equivalent PSPC count rate using the blackbody spectral description derived from the 1998 pointed PSPC observation. The energy distribution measured from hardness ratios did not change either over the almost 6.5 years elapsed between the survey and the pointed PSPC observations.
We also searched for both aperiodic and periodic variability in the ROSAT PSPC and HRI time-series. In all cases we failed to detect any significant variability. Applied to light curves binned in 100 s intervals, the Kolmogorov-Smirnov test gives a 95% confidence upper limit of 43% on variability amplitude for the PSPC time series and does not provide useful constraints for the HRI data. Power spectrum analysis also fails to detect any periodic signal, with estimated upper limits of 50% and 36% full amplitude modulations for periods longer than 1 s in the PSPC and HRI data respectively. We note that if RX J1605.3+3249 had exhibited pulsations with an amplitude similar to those seen in RX J0720.4-3125 (24%, Haberl et al. haberl97 (1997)), we would not have detected them.
### 3.3 Source position
Considering the extreme optical faintness of the counterpart, any attempt to identify the source obviously requires the best possible X-ray localization. Although the positioning of the X-ray source on the HRI or PSPC instrumental reference grid can be accurate at the arcsec level for relatively bright sources such as RX J1605.3+3249, the uncertainty on the attitude of the satellite introduces a dominant 8-10 arcsec error. However, if enough identified and well localized sources are present in the field of view, it is possible to correct for the unknown attitude error and retrieve the intrinsic accuracy achievable with the given detector.
Above a Maximum Likelihood of 8, a total of 23 and 14 sources are detected in the PSPC and HRI fields of view respectively. We have cross-correlated the HRI source list with the SIMBAD, FIRST and USNO-A2 catalogues. Among these detections, 4 PSPC and 6 HRI sources have a positive match in the searched catalogues. We also took into account the 4″ shift across the 40′ HRI field of view due to the pixel size being 0.9972$`\pm `$0.0006 arcsec instead of 1 arcsec (Hasinger et al. hasinger98 (1998)). For one of the HRI and PSPC field sources, RX J1605.5+3239, we had two possible identifications, either the nucleus of the spiral galaxy CASG 1345 or the FIRST radio source located 6.5″ away. Identifying the HRI source with the FIRST entry yields attitude correction vectors incompatible with those derived from other sources in the HRI field of view. We therefore assumed that the HRI source was identified with the galactic nucleus. We fitted to the differences between X-ray and optical positions, expressed in arcsec, relations of the form $`\alpha _\mathrm{X}-\alpha _{\mathrm{opt}}`$ = 5″ (X<sub>IMA</sub> $`-`$ X<sub>Center</sub>) (1 $`-`$ 0.9972) + Cte and $`\delta _\mathrm{X}-\delta _{\mathrm{opt}}`$ = 5″ (Y<sub>IMA</sub> $`-`$ Y<sub>Center</sub>) (1 $`-`$ 0.9972) + Cte, where X<sub>IMA</sub> and Y<sub>IMA</sub> are the positions of the sources on the grid of 5″ size HRI pixels. The fit took into account the error on the positioning on the instrumental grid. We show in Fig. 3 the best fit obtained for the declination axis.
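This is a one-parameter weighted fit and is simple to reproduce; a sketch follows (the array names are ours; only the attitude constant is free, since the plate-scale term is fixed by the measured pixel size):

```python
import numpy as np

def attitude_offset(x_ima, x_center, dpos, err):
    """Weighted fit of dpos = 5" (x_ima - x_center)(1 - 0.9972) + const,
    one axis at a time. dpos is the X-ray minus optical offset (arcsec) of
    each identified field source and err its positional error; only the
    constant (the attitude correction) is a free parameter."""
    resid = dpos - 5.0 * (x_ima - x_center) * (1.0 - 0.9972)
    w = 1.0 / err**2
    const = np.sum(w * resid) / np.sum(w)
    return const, np.sqrt(1.0 / np.sum(w))
```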
Applying these relations to RX J1605.3+3249 moves the uncorrected HRI position by 0.4″ to the East and 3.6″ to the North to $`\alpha `$ = 16h 05m 18.66s and $`\delta `$ = +32° 49′ 19.7″ (2000.0 eq.) with a 1 $`\sigma `$ error of 0.64″. Using the 4 identified PSPC sources yields a compatible position at $`\alpha `$ = 16h 05m 18.75s and $`\delta `$ = +32° 49′ 17.8″ (2000.0 eq.) with a 1 $`\sigma `$ error of 3.4″. The HRI and PSPC pointed positions are well within the 90% confidence ROSAT survey error circle ($`\alpha `$ = 16h 05m 18.8s and $`\delta `$ = +32° 49′ 07.5″ (2000.0 eq.) with a 1 $`\sigma `$ error of 7.0″).
## 4 Optical data
### 4.1 Observations
First optical observations took place from 1998 April 14 to 17 using the Canada France Hawaii telescope. Images and spectra were obtained with the OSIS V instrument, which provides image stabilisation and active guiding. With the 2048 $`\times `$ 2048 STIS2 CCD the pixel size is 0.156″ on the sky. We acquired several 15 min long images through the B, R and I filters. Total exposure time amounts to 60 min in B, 75 min in R and 15 min in I. FWHM seeing was quite constant throughout the run, with a mean value of 1.0″. Images were corrected for bias and flat-fielded using standard MIDAS procedures. All nights were of photometric quality. Observation of standard stars in M92 (Christian et al. christian85 (1985)) allowed us to calibrate the images with respect to the Kron-Cousins BVRI system.
We also obtained low resolution spectroscopy of the 4 brightest optical objects, named A, B, C and D, located inside or close to the ROSAT all-sky survey error circle. Wavelength calibration was derived from the observation of Hg/Ar arc spectra and the instrumental response was computed using the flux standard star Feige 34. The Multiple Object Spectroscopy mode of the OSIS instrument allowed the simultaneous acquisition of spectra from the 4 objects through 1″ slits and using the V150 grism. This configuration yields a FWHM resolution of $`\sim `$ 2 nm and a useful spectral range of 365 to 990 nm. We acquired 6 individual frames with exposure times ranging from 30 to 60 min. The total spectral exposure time is 225 min. Spectra were flat-fielded, wavelength calibrated, extracted and corrected for atmospheric absorption and instrumental response using standard MIDAS procedures.
Additional images were obtained with the Keck LRIS (Low-Resolution Imaging Spectrograph) on 1999 February 23. These observations took place under good sky conditions with a seeing around 0.9 arcsec. Two B and two R images of the field were taken with a total exposure time of 15 min in each filter. Using observations of one Landolt standard star field, a photometric calibration with estimated uncertainty of 0.1-0.2 mag was achieved.
### 4.2 Imaging data
We show in Fig. 4 the summed CFHT R band image with the 90% confidence level ROSAT HRI and ROSAT survey error circles over-plotted. The Keck B image is displayed in Fig. 5. We astrometrically calibrated our CCD image using 5 USNO-A2 star-like objects. The attitude corrected HRI 90% confidence error radius of 2″ shown here includes the additional astrometric error of 0.7″ arising from the CCD calibration. The only detectable object in the ROSAT HRI error circle is C.
R band image profile measurements reveal that objects A and B are resolved. A seems extended in all directions whereas B has a stellar-like core with diffuse emission towards the SE direction. On the other hand, objects C and D appear unresolved.
BRI photometry of objects A, B, C and D as derived from CFHT observations is listed in Table 2. Only object D is bright enough to be detected in the CFHT B band image. We estimate limiting magnitudes of B $`\sim `$ 24.6 and R $`\sim `$ 25.0 on the summed images.
Keck photometry of object C gives R = 23.3 and B-R = 2.6, consistent with CFHT observations. The limiting magnitudes of the Keck images are estimated to be B $`\sim `$ 27 and R $`\sim `$ 26.
### 4.3 Spectroscopic data
In general, the signal to noise ratio of the average spectra is not good enough to unambiguously measure a redshift or detect the presence of weak emission lines. Telluric absorption lines (e.g. the $`\lambda `$6800-6900Å complex) are not pronounced.
Identifying the flux drop bluewards of 6000Å in object A (see Fig. 6) with the Ca break leads to a redshift of $`\sim `$ 0.5. At this redshift, other spectral features such as the G band, H$`\beta `$ and the Mg band may be seen in the spectrum.
The spectrum of object B leaves hardly any doubt that the stellar-like core is in fact a dwarf M3-4 star, the extended emission then being likely a background field galaxy. We show in Fig. 7 the observed spectrum together with that of a comparison M3V star extracted from the atlas of Torres-Dodgen & Weaver (torres93 (1993)). The NaI line, TiO and CaH bands are clearly detected. Neglecting the contribution of the background galaxy, the R-I colour index is also consistent with that of a M3-4V star.
As object C is the only one detected in the small attitude corrected HRI error circle, special attention was given to its analysis. Being about 0.7 mag fainter than star B, the spectral features of C are obviously less recognizable. There is however good evidence that C is also a M star, of slightly earlier spectral type than star B. For comparison, we show in Fig. 8 the flux calibrated spectrum of C together with that of a template M0V star extracted from the spectral atlas of Jacoby, Hunter and Christian (jhc (1984)). The NaD line and the broad TiO molecular bands visible in the smoothed M0V spectrum can also be seen in object C. The similarity of the energy distributions is also striking, and the R-I and B-R colour indices of C are consistent with a $`\sim `$ M2V star. All this evidence supports the conclusion that C is a rather early M type star.
Finally, based on the Ca break, H$`\beta `$ and NaD line positions, the spectrum of object D (Fig. 6) suggests an identification with a galaxy at $`z`$ $`\sim `$ 0.3.
## 5 Discussion
### 5.1 The nature of RX J1605.3+3249
The position of object C in the attitude corrected HRI error circle is somewhat puzzling but may not be fully significant. At $`l`$ = 53°, $`b`$ = 48°, the surface density of stars brighter than R = 23.5 is 12,300 deg<sup>-2</sup> (Robin & Crézé robin86 (1986)). There is therefore a more than 1% probability that a star falls by chance in the 2″ radius error circle.
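For reference, this follows directly from the surface density times the error-circle area:
$$N_{\mathrm{chance}}=12300\,\mathrm{deg}^{-2}\times \frac{\pi (2'')^2}{1.296\times 10^7\,\mathrm{arcsec}^2\,\mathrm{deg}^{-2}}\approx 0.012,$$
i.e. slightly above 1%.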
The photometric distance to the M2V star is $`\sim `$ 7 kpc. If the X-ray source were physically associated with the star, its radius would be $`R`$ $`\sim `$ 80 km, i.e. too large for a neutron star and too small for a white dwarf. The emitting area could be compatible with a polar cap heated by accretion as seen in AM Her systems, in which case our optical observations could have been obtained during a low state. This explanation cannot be strictly ruled out, as the white dwarf could be cool enough in the low state ($`T_{\mathrm{eff}}`$ $`\lesssim `$ 20,000 K) to remain undetected at B = 25.85, which is the B magnitude of object C. We also do not have enough spectral resolution to detect the emission lines which could reveal heating of the late M star by the white dwarf. However, the absence of X-ray and optical variability and the unusually hot temperature of RX J1605.3+3249 compared to those of black body components in polars make this possibility rather unlikely.
A hot white dwarf would have to be located at an unrealistically large distance ($`d`$ $`\sim `$ 270 kpc), although this distance may be overestimated by the black body fit. The black body temperature is also hot compared to that observed from the hottest known PG 1159 stars (e.g. Werner et al. werner (1996)). Finally, if the high temperature of the white dwarf were due to nuclear burning at its surface, we would detect the heated accretion disc and mass donor star, or the surrounding nebula, such as in SMC N67.
In general, all classes of soft emitters other than neutron stars are difficult to reconcile with the observational picture. Taking the expected V magnitude of the M2V star (V = 24.24) as an upper limit to the optical emission from the ROSAT source implies F<sub>X</sub>/F<sub>opt</sub> $`\ge `$ 10<sup>4</sup>. Comparing with the F<sub>X</sub>/F<sub>opt</sub> distribution of bright ROSAT all-sky survey sources identified in SIMBAD (e.g. Motch et al. motch98 (1998)) shows that all galactic or extragalactic classes of sources other than isolated neutron stars are probably ruled out (see also Fig. 3 in Schwope et al. schwope (1999)). In particular, the source is optically too faint to be identified with even extreme cases of cataclysmic variables or AGN.
This conclusion is independent of the HRI attitude correction since none of the other optical candidates studied in the large ROSAT survey error circle is a likely counterpart of the X-ray source. In particular, the extragalactic objects lack the broad emission lines usually seen in soft AGN (e.g. Greiner et al. greiner (1996)).
Assuming a neutron star radius of 10 km implies a source distance of 900 pc for full surface emission. However, this distance estimate is very sensitive to the actual energy distribution in the ROSAT band. For instance, Rajagopal and Romani (raro (1996)) have shown that blackbody fits to neutron star model atmospheres folded through the ROSAT PSPC tend to overestimate the effective temperature by a factor of up to 3, depending on chemical composition. Applying the maximum correction could bring the source to much closer distances ($`d`$ $`\sim `$ 100 pc).
The total galactic N<sub>H</sub> in the direction of RX J1605.3+3249 is 2.47 10<sup>20</sup> cm<sup>-2</sup> (Dickey & Lockman dickey (1990)), about twice that derived from the blackbody fits to PSPC data (see Fig. 2). However, this difference may not be significant since, as for the effective temperature, the estimated photoelectric absorption sensitively depends upon the assumed soft X-ray energy distribution.
We conclude that RX J1605.3+3249 exhibits all the features expected from an isolated neutron star: a soft thermal-like spectrum and a high F<sub>X</sub>/F<sub>opt</sub> ratio.
### 5.2 Accretion from the interstellar medium
As for RX J0720.4-3125, the average particle density available for accretion in the vicinity of the neutron star is probably small. IRAS maps do not reveal any particular density enhancement in the direction of RX J1605.3+3249. Using the mean particle density versus scale height law of Dickey & Lockman (dickey (1990)) yields average densities of only 0.3 cm<sup>-3</sup> at 100 pc and 0.06 cm<sup>-3</sup> at 300 pc. In order to explain the X-ray luminosity of the source by Bondi-Hoyle accretion, very low values of the total (relative plus sound speed) velocity must be achieved, 12 km s<sup>-1</sup> and 3.5 km s<sup>-1</sup> for $`d`$ = 100 and 300 pc respectively. In particular, large distances of the order of 900 pc as suggested by PSPC spectral fitting seem incompatible with the accretion model. However, this major difficulty could be bypassed by assuming the presence of a local overdensity or if the blackbody modelling strongly overestimates the total accretion luminosity.
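To see where these velocities come from, here is a rough Bondi-Hoyle estimate in cgs units (our own sketch; the 1.4 solar mass, 10 km neutron star parameters and 100% radiative efficiency are our assumptions, so the numbers agree only to within tens of percent):

```python
import numpy as np

# cgs constants; assumed neutron star parameters (1.4 solar masses, 10 km)
G, M_NS, R_NS, M_H = 6.674e-8, 1.4 * 1.989e33, 1.0e6, 1.67e-24

def bondi_velocity_kms(L_x, n_H):
    """Total (relative plus sound speed) velocity, in km/s, required for
    Bondi-Hoyle accretion, Mdot = 4 pi (G M)^2 rho / v^3, to supply the
    luminosity L_x = G M Mdot / R (erg/s) from a medium of hydrogen
    number density n_H (cm^-3)."""
    mdot = L_x * R_NS / (G * M_NS)
    rho = n_H * M_H
    return (4.0 * np.pi * (G * M_NS)**2 * rho / mdot)**(1.0 / 3.0) / 1.0e5

# d = 100 pc: L = 1.1e31 erg/s, <n> = 0.3 cm^-3 -> ~15 km/s, close to the
# quoted 12 km/s (the exact value depends on the adopted mass and radius)
print(bondi_velocity_kms(1.1e31, 0.3))
# d = 300 pc: L scales as d^2, <n> = 0.06 cm^-3 -> a few km/s
print(bondi_velocity_kms(1.1e31 * 9.0, 0.06))
```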
So far, three isolated neutron star candidates have been detected in the galactic plane ($`|b|<20`$°; Haberl et al. haberl98 (1998)) and only two, RX J1605.3+3249 and 1RXS J130848.6+212708 (Schwope et al. schwope (1999)), above the plane. RX J1605.3+3249 is the X-ray brightest high galactic latitude candidate. The steep decrease of particle density with distance for RX J1605.3+3249 could allow a sensitive test of the mechanism leading to X-ray emission in this newly discovered class of objects.
### 5.3 Cooling neutron star
The strongest argument in favour of a young cooling neutron star is probably the stability of the X-ray emission over long time scales. This property of RX J1605.3+3249 is also shared by RX J0720.4-3125 and RX J1856.5-3754. Neutron stars accreting in binaries always exhibit some kind of variability over a large range of time scales, from seconds to months. Although the accretion conditions prevailing in binaries are usually far from those assumed for accretion from the interstellar medium, it is still puzzling that none of the members of this new class studied in detail so far shows convincing evidence for variability.
For normal pulsars, the blackbody temperature of 1.1 10<sup>6</sup> K implies cooling ages in the range of 2$`\times `$ 10<sup>4</sup> to 10<sup>5</sup> yr depending on the presence of accreted material at the surface of the neutron star (Chabrier et al. chabrier (1997)). Considering the probable overestimation of $`T_{\mathrm{eff}}`$ by the blackbody fit, ages of up to 10<sup>6</sup> yr are still possible.
RX J1605.3+3249 and its cousins RX J0720.4-3125 and RX J1856.5-3754 have in common two properties which are never encountered together in other classes of neutron stars: i) the absence of strong radio emission and ii) the absence of a luminous hard X-ray tail above the thermal spectrum. These features could be used to define the new class of objects.
Simultaneous absence of radio and hard X-ray emission is understandable in the framework of old accreting neutron stars. The low magnetic field or the long spin period necessary for accretion to take place are likely to put the pulsar beyond the death line, in the graveyard, and the emitted X-ray spectrum is expected to resemble that of a soft black body for a large range of parameters (Zampieri et al. zampieri (1995)). We envisage below the implications of such properties for young cooling INS.
#### 5.3.1 Radio emission
The simplest explanation for the absence of radio emission is that the radio beam does not cross the earth. The beaming fraction, which is the proportion of the sky swept by the radio beam, decreases with increasing spin period (Biggs biggs (1990)) and is of the order of 0.2 for the overall pulsar population (e.g. Lyne et al. lyne98 (1998)). Therefore, radio searches may miss a large fraction of the pulsar population, of which a part may be hot enough to be detected in soft X-rays.
As noted by Kulkarni & van Kerkwijk (KvK98 (1998)), this explanation cannot hold for the pulsating source RX J0720.4-3125 because the time needed to brake the neutron star to the long spin period of 8.39 s (assuming a magnetic field of 10<sup>12</sup> G) is much larger than the cooling time. For the same reason, the absence of radio emission from RX J1605.3+3249 is unlikely to be due to a position beyond the death line as this would also imply rather long spin periods incompatible with the hot emission.
Another possibility is that RX J1605.3+3249 is a magnetar with a dipolar surface field B larger than about 4$`\times `$10<sup>13</sup> G in which case radio emission may be quenched (Heyl & Kulkarni hk98 (1998)).
#### 5.3.2 X-ray spectrum
The fact that the X-ray spectrum is, to the accuracy of the measurements, thermal-like suggests a much reduced magnetospheric activity compared to other known neutron stars.
Among the 27 pulsars detected in X-rays and listed in Becker & Trümper (bt97 (1997)), only three middle-aged pulsars (Geminga, PSR B0656+14 and PSR B1055-52) have, in addition to a power law, a recognizable black body component in their soft X-ray energy distribution. An interesting case is PSR B0656+14, a radio-emitting pulsar about 10<sup>5</sup> yr old with a magnetic field of 4.7$`\times `$10<sup>12</sup> G and located at a distance of 760 pc. PSR B0656+14 exhibits a ROSAT PSPC spectrum ($`T_{\mathrm{bb}}`$ = 80-90 eV; Possenti et al. possenti (1996)) strikingly similar to that of RX J1605.3+3249. The additional faint hard component needed to fit the spectrum of PSR B0656+14 would not have been detected in RX J1605.3+3249 because of the lower statistics.
Neutron stars born with magnetic field B $`\gtrsim `$ 10<sup>14</sup> G, the magnetars, are thought to be powerful soft X-ray emitters because magnetic field decay provides an additional source of heat (Thompson & Duncan td96 (1996), Heyl & Kulkarni hk98 (1998)). The young magnetars associated with soft $`\gamma `$-ray repeaters such as SGR 1806-20 or SGR 1900+14 exhibit powerlaw-like quiescent X-ray spectra without evidence for black body components. These non-thermal energy distributions could be the signature of a compact synchrotron nebula (Marsden et al. marsden (1998)). It has been proposed that the class of anomalously braking X-ray pulsars could be related to magnetars (e.g. Thompson & Duncan td96 (1996)) and could constitute a later, less active soft $`\gamma `$-ray repeater phase. In general, anomalous X-ray pulsars again have powerlaw-like spectra, with possible blackbody components in two cases (see Thompson & Duncan td96 (1996) and references therein). Pulsating sources like RX J0720.4-3125 could represent an even later stage of magnetar evolution with a remaining very high magnetic field (B $`\sim `$ 10<sup>14</sup> G, Heyl & Hernquist hh (1998)) and perhaps still the possibility to emit powerful $`\gamma `$-ray bursts on occasion. Because of the additional energy source a magnetar could reach the temperature of 1.1 10<sup>6</sup> K after 10<sup>6</sup> yr (Heyl & Hernquist hh (1998)). It is however unclear whether the absence of a strong non-thermal component in the X-ray spectrum is compatible with the remaining high magnetic field.
### 5.4 Expected brightness of the optical counterpart
Taking into account the unfortunate possibility of a chance alignment between the neutron star and object C implies a B magnitude fainter than $`\sim `$ 26 for RX J1605.3+3249. Because of the relatively high temperature, the extrapolation of the black body seen in soft X-rays to the optical regime would imply extremely faint optical magnitudes, close to V = 30. However, all neutron stars observed so far display optical continuum above the Rayleigh-Jeans tail of the soft X-ray thermal component. This is not unexpected since black body fits tend to overestimate $`T_{\mathrm{eff}}`$. Furthermore, a non-thermal optical component has been detected in at least two cases, Geminga and PSR B0656+14. Scaling the optical flux with the PSPC count rate of RX J0720.4-3125 and RX J1856.5-3754 and neglecting any temperature effects yields V $`\sim `$ 27.2 for RX J1605.3+3249. On the other hand, if RX J1605.3+3249 is similar to Geminga or PSR B0656+14 its B magnitude could be as bright as our limit of 26. Therefore the source may well be bright enough to be optically identified with current means, and optical imaging could allow the detection of proper motion, which is a crucial test for determining the X-ray powering mechanism.
## 6 Conclusions
The only optical object detected in the small HRI error circle is a late M star most probably unrelated to the X-ray source. Altogether, X-ray and optical observations of RX J1605.3+3249 strongly suggest that the soft X-ray source is due to thermal emission from a nearby isolated neutron star. However, based on the presently available data, it is not possible to distinguish between accretion from the interstellar medium or cooling as the main X-ray emitting mechanism.
The constancy of the X-ray flux on various time scales and the difficulties encountered by the accretion model as a result of the small mean ambient densities are arguments, although not fully compelling, for a cooling neutron star.
The undetermined neutron star spin period and the lack of sensitive measurement of a hard X-ray component prevent us from drawing any firm conclusion on the nature of the source. One possibility is that RX J1605.3+3249 is a twin of PSR B0656+14 but that the radio beam does not intercept the earth. Alternatively, RX J1605.3+3249 could be a magnetar, maybe similar to RX J0720.4-3125.
Further sensitive optical and X-ray observations with the XMM and AXAF satellites could help to unveil the real nature of this object.
###### Acknowledgements.
The ROSAT project is supported by the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMBF/DLR) and the Max-Planck-Gesellschaft.
# The Three-Magnon Contribution to the Spin Correlation Function in Integer-Spin Antiferromagnetic Chains
## Abstract
The exact form factor for the O(3) non-linear $`\sigma `$ model is used to predict the three-magnon contribution to the spin correlation function, $`S(q,\omega )`$, near wavevector $`q=\pi `$ in an integer spin, one-dimensional antiferromagnet. The 3-magnon contribution is extremely broad and extremely weak; the integrated intensity is $`<2\%`$ of the single-magnon contribution.
The Hamiltonian for the one dimensional Heisenberg antiferromagnet of spin $`s`$ is
$$H=J\sum _i\stackrel{}{S}_i\cdot \stackrel{}{S}_{i+1}.$$
(1)
Based upon the large-s limit we write the spin operators as
$$\stackrel{}{S}_j\approx s(-1)^j\stackrel{}{\varphi }(j)+\stackrel{}{l}(j),$$
(2)
where $`s\stackrel{}{\varphi }`$ and $`\stackrel{}{l}`$ represent the staggered and uniform magnetization of the spin chain. We set the lattice spacing to 1. The low energy behaviour of this Hamiltonian can be described by the O(3) non-linear $`\sigma `$ model . Recently, the exact form factors for this field theory were calculated . The resulting prediction for $`S(q,\omega )`$ at $`q\approx 0`$ was discussed in . Here we comment on the prediction at $`q\approx \pi `$. In the continuum approximation, the (zero temperature) spin correlation function at $`q=\pi +k\approx \pi `$ is
$$S^{ab}(\pi +k,\omega )=s^2\int dx\,dt\,e^{i(\omega t-kx)}<\mathrm{\Omega }|\varphi ^a(x,t)\varphi ^b(0,0)|\mathrm{\Omega }>\equiv \delta ^{ab}S(\pi +k,\omega ),$$
(3)
where $`|\mathrm{\Omega }>`$ is the ground state.
We insert a complete set of asymptotic states. It is known that the spectrum consists of a triplet of massive magnons. Thus, the asymptotic states will be characterized by a spin index and a momentum for each particle in the state.
$`|n>=|a_1,p_1;a_2,p_2;\dots ;a_n,p_n>`$
It is convenient to label the particles’ momenta by the rapidities, $`\theta _i`$:
$$E_i=\mathrm{\Delta }\mathrm{cosh}\theta _i,p_i=(\mathrm{\Delta }/v)\mathrm{sinh}\theta _i$$
(4)
where $`v`$ is the spin-wave velocity corresponding to the velocity of light in the quantum field theory and $`\mathrm{\Delta }`$ is the gap corresponding to the rest mass energy in the quantum field theory. ($`v\approx 2.49J`$ and $`\mathrm{\Delta }\approx 0.4107J`$ for the s=1 chain.) Thus we may write:
$$S^{ab}(\pi +k,\omega )=s^2\int dx\,dt\,e^{i(\omega t-kx)}\sum _n\frac{1}{n!}\prod _{i=1}^n\int \frac{d\theta _i}{4\pi }<\mathrm{\Omega }|\varphi ^a(x,t)|n><n|\varphi ^b(0,0)|\mathrm{\Omega }>.$$
(5)
We use
$$<\mathrm{\Omega }|\varphi (x,t)|n>=<\mathrm{\Omega }|\varphi (0)|n>\mathrm{exp}[-i(E_nt-P_nx)],$$
(6)
where $`P_n`$ and $`E_n`$ refer to the total momentum and energy of the state $`|n>`$, to obtain
$$S(\pi +k,\omega )=s^2\frac{(2\pi )^2}{3}\sum _a\sum _n\frac{1}{n!}\prod _{i=1}^n\int \frac{d\theta _i}{4\pi }\delta (k-P_n)\delta (\omega -E_n)|<\mathrm{\Omega }|\varphi ^a(0,0)|n>|^2.$$
(7)
The field is renormalized
$$\mathrm{\Phi }^a(x)=\frac{1}{\sqrt{Z}}\varphi ^a(x)$$
(8)
in order that we satisfy the relation $`<\mathrm{\Omega }|\mathrm{\Phi }^a(0)|b,p>=\delta ^{ab}`$. Symmetry arguments guarantee that only asymptotic states with an odd number of magnons give non-zero matrix elements.
For the one particle contribution,
$$S_1(\pi +k,\omega )=s^2vZ\pi \frac{\delta (\omega -\sqrt{v^2k^2+\mathrm{\Delta }^2})}{\sqrt{v^2k^2+\mathrm{\Delta }^2}}.$$
(9)
The integrated intensity is,
$$S_1(q)\equiv \int d\omega \,S_1(q,\omega )=s^2vZ\pi \frac{1}{\sqrt{v^2k^2+\mathrm{\Delta }^2}}.$$
(10)
Numerical simulations on the $`s=1`$ antiferromagnet indicate that $`Z\approx 1.26`$.
The three particle contribution can be written:
$`S_3(\pi +k,\omega )`$ $`=`$ $`s^2vZ{\displaystyle \frac{\pi }{\sqrt{\omega ^2-v^2k^2}}}{\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{d(\theta _1-\theta _2)\,d(\theta _2-\theta _3)}{(4\pi )^2}}\delta (\sqrt{\omega ^2-v^2k^2}-M(\theta _1,\theta _2,\theta _3))`$ (12)
$`\times {\displaystyle \frac{1}{3}}{\displaystyle \sum _{a,a_1,a_2,a_3}}|<\mathrm{\Omega }|\mathrm{\Phi }^a(0,0)|a_1,p_1;a_2,p_2;a_3,p_3>|^2,`$
where:
$$M\equiv \sqrt{\left(\sum _{i=1}^3E_i\right)^2-v^2\left(\sum _{i=1}^3p_i\right)^2}\text{and}\theta _{ij}\equiv \theta _i-\theta _j.$$
(13)
Remarkably, the 3-particle form factor has been calculated exactly using the integrability of the non-linear $`\sigma `$ model:
$`{\displaystyle \frac{1}{3}}{\displaystyle \sum _{a,a_1,a_2,a_3}}|<\mathrm{\Omega }|\mathrm{\Phi }^a(0,0)|a_1,p_1;a_2,p_2;a_3,p_3>|^2`$ $`=`$ $`\pi ^6|\psi (\theta _1,\theta _2,\theta _3)|^2[2(\theta _{21}^2+\theta _{32}^2+\theta _{31}^2)+12\pi ^2],`$ (14)
$`\psi (\theta _1,\theta _2,\theta _3)`$ $`\equiv `$ $`{\displaystyle \prod _{i>j}}\psi (\theta _{ij}),`$ (15)
$`\psi (\theta )`$ $`\equiv `$ $`{\displaystyle \frac{\theta -i\pi }{\theta (2\pi i-\theta )}}\mathrm{tanh}^2{\displaystyle \frac{\theta }{2}}.`$ (16)
The resulting integral in Eq. (12) can be easily performed numerically. The result for $`S(\pi ,\omega )`$ is shown in Fig. 1.
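To make the "easily performed" claim concrete: in units $`\mathrm{\Delta }=v=1`$ and at $`k=0`$, the delta function in Eq. (12) fixes one rapidity difference, which can be solved for with an off-the-shelf root finder. The following is our own minimal sketch of that computation (the grid, cutoffs, and SciPy dependency are our choices, not from the paper):

```python
import numpy as np
from scipy.optimize import brentq

# Units: Delta = v = 1, k = 0 (q = pi), so the invariant mass equals omega.

def psi_sq(t):
    """|psi(theta)|^2 for psi = (theta - i pi)/(theta (2 pi i - theta)) tanh^2(theta/2)."""
    return (t**2 + np.pi**2) / (t**2 * (t**2 + 4.0 * np.pi**2)) * np.tanh(0.5 * t)**4

def form_factor_sq(t12, t23):
    """Spin-summed squared 3-magnon form factor, Eqs. (14)-(16)."""
    t13 = t12 + t23
    return (np.pi**6 * psi_sq(t12) * psi_sq(t23) * psi_sq(t13)
            * (2.0 * (t12**2 + t23**2 + t13**2) + 12.0 * np.pi**2))

def inv_mass(t12, t23):
    """3-magnon invariant mass M; it depends only on rapidity differences."""
    return np.sqrt(3.0 + 2.0 * (np.cosh(t12) + np.cosh(t23) + np.cosh(t12 + t23)))

def S3(omega, n=4000, t_max=30.0):
    """S_3(pi, omega) in units of s^2 v Z, from Eq. (12): for each theta_12
    the delta function is solved for theta_23 and traded for 1/|dM/dtheta_23|."""
    if omega <= 3.0:
        return 0.0
    grid = np.linspace(1e-6, t_max, n)
    dt, total = grid[1] - grid[0], 0.0
    for t12 in grid:
        if inv_mass(t12, 0.0) >= omega:
            break                      # M grows with t12: no more solutions
        t23 = brentq(lambda t: inv_mass(t12, t) - omega, 0.0, 60.0)
        jacobian = (np.sinh(t23) + np.sinh(t12 + t23)) / omega  # |dM/dtheta_23|
        total += form_factor_sq(t12, t23) / jacobian * dt
    return np.pi / omega * total / (4.0 * np.pi)**2

# Expectations from the text: the threshold law ~0.01045 (omega - 3)^3
# just above omega = 3, and a broad peak near omega = 6.33
print(S3(3.2), S3(6.33))
```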
The three-magnon contribution vanishes below $`3\mathrm{\Delta }`$. In the limit $`\omega \to 3\mathrm{\Delta }`$, it behaves as:
$$S_3(\pi ,\omega )\approx s^2vZ\times .01045(\omega -3\mathrm{\Delta })^3/\mathrm{\Delta }^5.$$
(17)
It has a rounded, asymmetric peak at $`\omega \approx 6.33\mathrm{\Delta }`$ and then decays at high energy as $`s^2vZ\times 19.9/\{\omega ^2[\mathrm{ln}(\omega /\mathrm{\Delta })]^2\}`$. The integrated intensity of the three particle contribution at $`q=\pi `$ is
$$S_3(\pi )\approx .0193S_1(\pi ).$$
(18)
This 3-particle contribution to $`S(q,\omega )`$ is very weak and very broad. It is instructive to calculate the average frequency of the 3-particle term:
$$\overline{\omega }_3\equiv \frac{\int d\omega \,\omega S_3(\pi ,\omega )}{\int d\omega \,S_3(\pi ,\omega )}$$
(19)
Using the result of for the integral in the numerator of Eq. (19) we find:
$$\overline{\omega }_3\approx 75.2\mathrm{\Delta }.$$
(20)
This enormous value is explained, to some extent, by the rapid decrease of $`S_3(\pi ,\omega )`$ near $`\omega =3\mathrm{\Delta }`$ and its slow drop-off at large $`\omega `$, making the integral in the numerator of Eq. (19) only logarithmically convergent. The contributions to $`S`$ from still higher numbers of particles have also been considered . Indications are that these are still more negligible than this tiny 3-particle term.
It is clear that this result for $`S_3`$ cannot be applied completely to the s=1 antiferromagnetic chain. In particular, from the numerically determined single magnon dispersion relation, which is quite well fit by:
$$E(p)\approx \sqrt{\mathrm{\Delta }^2+v^2\mathrm{sin}^2p},$$
(21)
it can be seen that the maximum possible energy of a 3 magnon state with total crystal momentum $`\pi `$ is only about $`17\mathrm{\Delta }`$. $`S(\pi ,\overline{\omega }_3)`$ in the field theory gets significant contributions from bosons with momenta considerably larger than $`\pi `$, the maximum possible in the lattice model. In fact the relativistic approximation to the dispersion relation seems to break down significantly for $`p>.2\pi `$. We note that, for $`\omega <9\mathrm{\Delta }`$, only magnons with $`p<.2\pi `$ contribute to $`S_3(\pi ,\omega )`$. Thus we might hope that $`S_3(q,\omega )`$ calculated from the field theory is fairly accurate for $`|\pi -q|<.2\pi `$ and $`\omega <9\mathrm{\Delta }`$. The non-relativistic corrections to the magnon dispersion relation make $`S_3(\pi ,\omega )`$ vanish for $`\omega `$ greater than about $`17\mathrm{\Delta }`$. If we only integrate over $`S_3(\pi ,\omega )`$ up to $`\omega =17\mathrm{\Delta }`$, this reduces $`S_3(\pi )`$ to about $`.012S_1(\pi )`$. However, the non-relativistic corrections may also tend to increase $`S_3(\pi ,\omega )`$ for $`9\mathrm{\Delta }<\omega <17\mathrm{\Delta }`$, since they flatten the dispersion relation and hence increase the density of states. Clearly the value of $`\overline{\omega }_3`$ in the s=1 chain must be less than the maximum possible frequency of about $`17\mathrm{\Delta }`$. If we make a rough estimate that $`\overline{\omega }_3\approx 10\mathrm{\Delta }`$ and $`S_3(\pi )\approx .02S_1(\pi )`$ then we can estimate the overall average frequency (also ignoring 5 and more magnon contributions) as:
$$\overline{\omega }\equiv \frac{\int d\omega \,[S_1(\pi ,\omega )+S_3(\pi ,\omega )]\,\omega }{\int d\omega \,[S_1(\pi ,\omega )+S_3(\pi ,\omega )]}\approx \mathrm{\Delta }+\frac{S_3(\pi )}{S_1(\pi )}\overline{\omega }_3\approx \mathrm{\Delta }[1+.02\times 10]=1.2\mathrm{\Delta }.$$
(22)
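As an aside, the $`17\mathrm{\Delta }`$ cutoff used above can be checked from Eq. (21). Assuming (our rough assumption) that the maximum of the total energy at crystal momentum $`\pi `$ is attained at the symmetric configuration $`p_i=\pi /3`$,
$$\sum _{i=1}^3E(\pi /3)=3\sqrt{\mathrm{\Delta }^2+\frac{3}{4}v^2}\approx 16\mathrm{\Delta },$$
using $`v\approx 2.49J`$ and $`\mathrm{\Delta }\approx 0.4107J`$, consistent with the quoted value of about $`17\mathrm{\Delta }`$.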
This estimate can be compared to the numerically determined value. We may use an exact sum rule for the Heisenberg antiferromagnet:
$$\int d\omega \,\omega S(q,\omega )=(1/2)<[[H,S^z(q)],S^z(q)]>=(2/3)|e_0|(1-\mathrm{cos}q),$$
(23)
where $`e_0`$ is the groundstate energy per site. Using the numerically determined values of $`S(q)`$ and $`e_0`$ gives $`\overline{\omega }(q)`$. This is plotted in Fig. (18) of Ref. where it is referred to as $`\omega _{\text{SMA}}`$. From this figure we see that at $`q=\pi `$, $`\overline{\omega }\approx 1.2\mathrm{\Delta }`$, in agreement with our crude estimate of Eq. (22). This lends some confidence to our prediction that the 3-magnon contribution to $`S(q,\omega )`$ near $`q\approx \pi `$ is extremely broad and extremely weak, with $`\overline{\omega }_3`$ of order the maximum possible 3-magnon energy, $`\sim 10\mathrm{\Delta }`$, and a relative integrated intensity of order 2%. As such, it will be extremely difficult to observe experimentally. We note that the 2-particle contribution to $`S(q,\omega )`$ near $`q=0`$ is also extremely difficult to observe since it appears to occur only for $`q<.2\pi `$ and since $`S(q,\omega )\propto q^2`$ as $`q\to 0`$. Thus experimental applications of the beautiful exact results on the non-linear $`\sigma `$ model remain elusive.
If we assume that the 3-particle form factor in Eq. (16) goes to a non-zero constant at the top of the 3-particle continuum, $`\omega \approx 17\mathrm{\Delta }`$, then it follows from phase space considerations that $`S_3(\pi ,\omega )`$ drops discontinuously to 0 at that energy. This is quite different from the continuous vanishing of $`S_3`$ at the bottom of the 3-particle continuum, which is entirely due to the vanishing of the form factor there. Possibly this discontinuous drop at the top of the continuum might be easier to observe experimentally than other features. However, the tiny size of the drop would clearly make even this extremely difficult.
We would like to thank Bill Buyers for interesting us in this problem. This research was supported in part by NSERC of Canada.
## 1 Introduction
$`CP`$-violation in heavy flavors is becoming an object of experimental studies \[1-3\]. However, many related theoretical problems are not quite understood yet. For example, it was noted only recently (see, e.g. \[4-8\]) that some discrete ambiguities may appear when extracting $`CP`$-violating parameters from experiment and comparing them with theory. Such ambiguities may obscure manifestations of New Physics (see discussion in ). Specific problems arise also in decays of $`D`$-mesons, where doubly Cabibbo-suppressed transitions may imitate flavor mixing effects.
Discussed in this talk is the unique role of neutral kaons produced in decays of heavier flavors. Well-studied strangeness oscillations are sensitive to the relative initial content of $`K^0`$ and $`\overline{K}^0`$. Therefore, they may be used to analyze detailed properties of the decays and of the heavier flavor hadrons themselves, just as, say, asymmetric decays of hyperons are used to analyze the hyperon polarization and the properties of hyperon production. In this way one becomes able to eliminate any ambiguities of $`CP`$-violating parameters. Decays to neutral kaons are also capable of separating "right" and "wrong" strangeness transitions (Cabibbo-allowed and Cabibbo-suppressed amplitudes) for charmed and beauty hadrons, both neutral and charged mesons or baryons, again unambiguously. Also discussed are unfamiliar manifestations of kaon $`CP`$-violation in heavy meson decays.
## 2 Amplitude ambiguities and their nature
Let us consider, in the standard manner, decays of neutral flavored spinless mesons $`M`$ and $`\overline{M}`$. The most popular way to search for $`CP`$-violation is to study decays
$$M(\overline{M})\to X_{CP}$$
(1)
into a state of definite $`CP`$-parity. $`CP`$-violation in the decays may be described by the parameter
$$\lambda _X=\frac{q_M}{p_M}\frac{A_{\overline{M}X}}{A_{MX}}.$$
(2)
It is rephasing invariant and is commonly considered as unambiguous. However, there is an intrinsic sign ambiguity hidden in this parameter. Ambiguities described in the literature look different in different papers (e.g., ), but all of them are really related to just this sign ambiguity.
We can reveal the ambiguity by expressing $`\lambda _X`$ through the decay amplitudes $`A_X^{(1)}`$ and $`A_X^{(2)}`$ of the eigenstates $`M^{(1)}=p_MM+q_M\overline{M},M^{(2)}=p_MM-q_M\overline{M}.`$ In such a way we obtain
$$\lambda _X=\frac{A_X^{(1)}-A_X^{(2)}}{A_X^{(1)}+A_X^{(2)}},\mathrm{Re}\lambda _X=\frac{|A_X^{(1)}|^2-|A_X^{(2)}|^2}{|A_X^{(1)}+A_X^{(2)}|^2},\mathrm{Im}\lambda _X=\frac{2\mathrm{Im}(A_X^{(1)}A_X^{(2)*})}{|A_X^{(1)}+A_X^{(2)}|^2}.$$
(3)
This expression clearly shows that $`\lambda _X`$ changes its sign under interchange of $`M^{(1)}`$ and $`M^{(2)}`$. Thus, to fix the sign of $`\lambda _X`$ we need to identify who is who in the set of the eigenstates. In other words, definition (2) should be supplemented by some physical labeling for the eigenstates.
Consider the situation in more detail. Decays (1) proceed along two branches: $`M(\overline{M})\to M^{(1)}\to X_{CP}`$ and $`M(\overline{M})\to M^{(2)}\to X_{CP}.`$ They produce separate contributions to the decay amplitude, which can interfere. The time distribution of any decay (1) contains two kinds of terms linear in $`\lambda _X`$, both unambiguously measurable. Direct contributions of the two branches combine into the term proportional to
$$\mathrm{Re}\lambda _X\mathrm{sinh}\frac{(\mathrm{\Gamma }^{(1)}-\mathrm{\Gamma }^{(2)})t}{2},$$
while interference of the branches gives the term proportional to
$$\mathrm{Im}\lambda _X\mathrm{sin}(m^{(1)}-m^{(2)})t.$$
Structure of these terms allows to formulate the ambiguity problem more explicitly.
There are three possible ways of labeling the eigenstates:
* Lifetime labeling identifies the states as longer or shorter lived. Then the sign of $`\mathrm{\Delta }\mathrm{\Gamma }`$ is fixed by definition, so Re$`\lambda _X`$ is experimentally unambiguous. But the sign of $`\mathrm{\Delta }m`$ is generally unknown, and the sign of Im$`\lambda _X`$ appears to be ambiguous.
* $`CP`$-parity labeling identifies the states (though may be approximately) as $`CP`$-even or $`CP`$-odd. Here we define (see, e.g., ref.) that $`M^{(1)}`$ has the same (approximate) $`CP`$-parity as the final state $`X_{CP}`$ if it decays to $`X_{CP}`$ more intensely than $`M^{(2)}`$. This means that $`|A^{(1)}|>|A^{(2)}|`$, and so this definition fixes the sign of Re$`\lambda _X`$ through eq.(3). The sign of $`\mathrm{\Delta }\mathrm{\Gamma }`$ becomes measurable, but the signs of $`\mathrm{\Delta }m`$ and Im$`\lambda _X`$ stay unknown.
* Mass labeling identifies the states as heavier or lighter. Here the sign of $`\mathrm{\Delta }m`$ is fixed, and Im$`\lambda _X`$ may be measured unambiguously. But such convention does not fix the sign of $`\mathrm{\Delta }\mathrm{\Gamma }`$, and thus Re$`\lambda _X`$ has the sign ambiguity (see, e.g., ref.).
It is evident now that all experimental sign ambiguities would be eliminated if we could relate those three labelings to each other.
The lifetime and $`CP`$-parity labelings can be related to each other in a straightforward way by comparing time-dependences of decays into final states of different $`CP`$-parities. However, their relation to the mass labeling is not so simple.
We can illustrate the general situation by comparing it to the well-studied case of neutral kaons. Kaon eigenstates are defined at present as $`K_S`$ and $`K_L`$ through their lifetimes. Correspondence of the lifetimes and $`CP`$-parities has been achieved in decays to $`2\pi `$ and/or $`3\pi `$. Note that a similar attempt was recently made also for $`D`$-mesons , but the achieved precision appeared still insufficient to notice any difference of the two lifetimes.
Kaon mass labeling, i.e. identification of $`K_L`$ as the heavier state, became possible only after special complicated experiments on coherent regeneration (for a summary see ) which related to each other the masses and $`CP`$-parities of the kaon eigenstates. Without such mass labeling the standard $`CP`$-violating kaon parameters $`\eta `$ could be measured only up to the sign of $`\mathrm{Im}\eta `$.
For $`B`$- and $`D`$-mesons the coherent regeneration cannot be observed because of their too short lifetimes. Instead one may use some theoretical assumptions (e.g., ). However, there should exist direct experimental ways to relate all three kinds of labeling, independently of any theoretical assumptions. One of them is described in the next section. Interestingly enough, and similarly to the kaon case, the direct experimental interrelation of eigenmasses and eigenwidths appears to be impossible for $`B`$- and $`D`$-mesons as well (for the corresponding discussion see ). Both masses and widths can be directly related only to $`CP`$-parities of the eigenstates, and only after that to each other.
## 3 Neutral kaons as analyzers of heavier flavors
It is well known that weak decays of hyperons, being asymmetric due to parity violation, are good analyzers which may be used to measure hyperon polarization in various processes. In direct analogy, it was suggested that the neutral kaons, with their decay oscillations, may be used to analyze properties of heavy flavor hadrons. Here we only briefly explain the main ideas underlying such an approach. More technical details, with accurate formulas, may be found in .
Instead of decays (1) we consider now decays
$$M(\overline{M})\to X_{CP}K^0(\overline{K}^0)$$
(4)
and assume that $`X_{CP}`$ has definite values of both $`CP`$-parity and spin. Generally, there are two possible kinds of flavor transitions, $`M\to K^0`$ and $`M\to \overline{K}^0`$ (and two charge conjugate ones), with different corresponding amplitudes. The very essential point is that a definite coherent mixture of $`M`$ and $`\overline{M}`$ just before the decay (4) produces some different, but also definite, coherent mixture of $`K^0`$ and $`\overline{K}^0`$ just after the decay. As a result, the time evolution of the neutral kaons after decay (4) coherently continues the evolution of the flavored neutral mesons before the decay .
The neutral kaons may be observed only through their decay, so we really have cascade decays with two stages and two decay times, $`t_M`$ and $`t_K`$ (we mean not average lifetimes, but event-by-event times). Flavor oscillations at the two stages are correlated (the term "cascade mixing" is not adequate, since the mixings at the two stages have the usual standard form and are not related to each other; only the flavor oscillations are related). Such coherent double-flavor oscillations produce a generally non-factorisable dependence on the two decay times. It is rather complicated (see ), and for a better understanding of its main features we may first simplify it by assuming exact $`CP`$-conservation (for both $`M`$-mesons and kaons!). Then the states $`X_{CP}K_S`$ and $`X_{CP}K_L`$ have definite $`CP`$-parities. Eigenstates $`M^{(1)}`$ and $`M^{(2)}`$ also have definite $`CP`$-parities, which may be used as their labels (just as $`K_1`$ and $`K_2`$ for neutral kaons before the discovery of $`CP`$-violation). Of course, only two transitions, say,
$$M^{(1)}\to X_{CP}K_L,M^{(2)}\to X_{CP}K_S$$
(5)
are possible here, instead of the four of the general case (the initial and final $`CP`$-parities should be the same).
If we observe the secondary kaons by their decays to $`2\pi `$ or $`3\pi `$ modes, then only one of the transitions (5) contributes (recall the assumption of exact $`CP`$-conservation!). As a result, the dependences on $`t_M`$ and $`t_K`$ become factorised. Such decay chains allow us to relate the lifetimes and $`CP`$-parities of $`M^{(1,2)}`$. But if we use semileptonic kaon decays, with contributions from both $`K_L`$ and $`K_S`$, then both transitions work; the time distribution contains their interference term proportional to
$$\mathrm{cos}(\mathrm{\Delta }m_Mt_M-\mathrm{\Delta }m_Kt_K).$$
Its measurement evidently determines the sign of $`\mathrm{\Delta }m_M`$ with respect to the known sign of $`\mathrm{\Delta }m_K`$ and, therefore, relates the masses and $`CP`$-parities of the $`M`$-eigenstates.
$`CP`$-violation complicates this picture due to interferences of all four cascade branches . Nevertheless, studies of decays (4) are still capable of relating the mass labeling of heavy meson eigenstates with their $`CP`$-parity labeling and, then, with the lifetime one. Therefore, as explained above, such studies can eliminate all experimental ambiguities in $`CP`$-violating parameters.
Decays of charmed particles suggest one more testing ground for the analyzing power of neutral kaon oscillations. As a rule, $`B`$-meson decays to neutral kaons have only one flavor transition (and its charge conjugate), $`B\to K`$ or $`B\to \overline{K}`$ (compare the decays considered in refs. and ). On the opposite, $`D`$-mesons, neutral and charged (and even charmed baryons), always have both kinds of transitions. One of them is doubly Cabibbo-suppressed, i.e. relatively small (about 3% in the amplitude). Nevertheless, it is of special interest: it exemplifies a new kind of weak transitions and might demonstrate different (larger?) $`CP`$-violation. The same is true, of course, for decays to charged kaons as well, which may produce a kaon of "wrong" sign. However, decays to neutral $`K`$'s have an essential difference. Final states for the charged kaon case are not coherent, and one can compare only absolute values of the decay amplitudes. As was first noticed in (see also ref.), decays to neutral kaons produce coherent states and allow one to measure as well the relative phase of the amplitudes.
The current literature contains suggestions to realize this by measuring probabilities of the transitions $`D\to K_S`$, $`D\to K_L`$ . But in such an approach the relative phase of the transition amplitudes cannot be measured, and the separation of the amplitudes for $`D\to \overline{K}^0`$ and $`D\to K^0`$ appears to be ambiguous. Measurement of strangeness oscillations for the secondary neutral kaons makes the amplitude separation for different flavor transitions quite unambiguous (more details see in ; similar ideas were discussed in ). We emphasize that the oscillations may provide as well a clear separation of the two sources of "wrong" strangeness production, the Cabibbo-suppressed transition and the mixing of initial neutral $`D`$-mesons.
## 4 Secondary kaon $`CP`$-violation
Manifestations of kaon $`CP`$-violation in kaon decays have been studied in all detail, at least phenomenologically. However, production of neutral kaons (in particular, in decays of heavier particles) provides different manifestations, not quite familiar. Some of them look rather formal at present, but may become physically meaningful in future experiments. Moreover, they might be useful for studying $`CP`$-violation related to heavier hadrons. Here we consider two kinds of such manifestations: for amplitudes, and for decay yields.
We begin with the problem of what the amplitudes are for production of $`K_{S,L}`$, say, in decays (4) of the meson $`M`$. Since $`K_{S,L}=p_KK^0\pm q_K\overline{K}^0`$, it seems natural to express those amplitudes through the amplitudes $`A_{MK}^{(X)}`$ and $`A_{M\overline{K}}^{(X)}`$ of the flavor transitions $`M\to K^0`$ and $`M\to \overline{K}^0`$ as
$$p_K^{*}A_{MK}^{(X)}\pm q_K^{*}A_{M\overline{K}}^{(X)}.$$
(6)
Such expressions indeed exist in the literature, but they are incorrect. To understand why and to find correct expressions we first recall the general meaning of amplitudes.
When given an initial state $`|i>`$ and the $`S`$-matrix, the amplitudes $`A_{ik}`$ of transitions $`|i>\to |k>`$ are defined by decomposition of the final state $`S|i>`$ in terms of some set of states $`|k>`$:
$$S|i>=\sum _kA_{ik}|k>.$$
If the set is orthonormalized we arrive at the canonical expression $`A_{ik}=<k|S|i>`$. However, this expression is inapplicable if the states $`|k>`$ are not orthogonal.
Let us apply this consideration to decays (4). Decay of the meson $`M`$ produces the kaon state (up to normalization)
$$A_{MK}^{(X)}K^0+A_{M\overline{K}}^{(X)}\overline{K}^0.$$
To find amplitudes of transitions $`MK_{S,L}`$ we should decompose this final state in terms of $`K_{S,L}`$ and extract the corresponding coefficients. Since
$$K^0=(K_S+K_L)/(2p_K),\overline{K}^0=(K_S-K_L)/(2q_K),$$
we finally obtain
$$A_{MS}^{(X)}=\frac{A_{MK}^{(X)}}{2p_K}+\frac{A_{M\overline{K}}^{(X)}}{2q_K},A_{ML}^{(X)}=\frac{A_{MK}^{(X)}}{2p_K}-\frac{A_{M\overline{K}}^{(X)}}{2q_K}.$$
(7)
Amplitudes for decays of $`\overline{M}`$ may be found in the same way.
Expressions (6) and (7) coincide if $`|p|^2=|q|^2=1/2`$, i.e. when $`CP`$ is conserved and the states $`K_{S,L}`$ are orthogonal. They differ when $`CP`$ is violated and the states $`K_{S,L}`$ are not orthogonal. Using a decomposition over a non-orthogonal set of states is non-standard, but looks natural for kaons with $`CP`$-violation. Expressions (7) just correspond to those used earlier in refs. . Of course, expressions (6) and (7) for the amplitudes lead to different expressions for decay probabilities as well.
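A short numerical check of the decomposition (7) may be helpful (the mixing parameters and flavor amplitudes below are toy values of our own choosing, not fitted to anything):

```python
import numpy as np

# Illustrative kaon mixing parameters with |p|^2 + |q|^2 = 1
eps = 2.0e-3 * np.exp(1j * np.pi / 4)            # toy CP-violation parameter
norm = np.sqrt(2.0 * (1.0 + abs(eps)**2))
p_K, q_K = (1.0 + eps) / norm, (1.0 - eps) / norm

# Arbitrary flavor amplitudes for M -> K0 and M -> K0bar
A_K, A_Kbar = 1.0 + 0.3j, 0.05 - 0.02j

# Eq. (7): coefficients of K_S and K_L in the decomposition of the final state
A_S = A_K / (2.0 * p_K) + A_Kbar / (2.0 * q_K)
A_L = A_K / (2.0 * p_K) - A_Kbar / (2.0 * q_K)

# Reassemble the kaon state using K_S = p K0 + q K0bar, K_L = p K0 - q K0bar
K0_coeff = p_K * (A_S + A_L)
K0bar_coeff = q_K * (A_S - A_L)
print(np.allclose([K0_coeff, K0bar_coeff], [A_K, A_Kbar]))   # True

# The naive projection (6) differs once |p| != |q|, i.e. at O(|eps|)
A_S_naive = np.conj(p_K) * A_K + np.conj(q_K) * A_Kbar
print(abs(A_S - A_S_naive))   # nonzero
```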
There is one more consequence of kaon $`CP`$-violation for processes with neutral kaons produced. Let us compare decays of initially pure states $`K^0`$ and/or $`\overline{K}^0`$ into a particular mode. Their time dependences generally oscillate. The oscillations would be absent for some decay modes if $`CP`$ were conserved; they are present for any mode when $`CP`$ is violated. These oscillations are different for initial $`K^0`$ or $`\overline{K}^0`$. As a result, decay yields at a particular time moment (and even total decay yields) are also different for initially pure $`K^0`$ or $`\overline{K}^0`$. Of course, the relative difference is of the order $`|\eta |`$. Similar difference, generally, exists for any coherent mixture of $`K^0`$ and $`\overline{K}^0`$ having no symmetry under their interchange.
Now, compare decays of, say, $`D`$ (neutral or charged) and $`\overline{D}`$, with neutral kaons produced. Assume that the $`D`$-decay generates a kaon system $`aK^0+b\overline{K}^0`$. The conjugate $`\overline{D}`$-decay without any $`CP`$-violation, direct or in $`D`$-mixing, generates the conjugate system $`a\overline{K}^0+bK^0`$. Using any particular way to detect the secondary neutral kaons (e.g., particular decay modes and/or particular interval(s) of kaon decay times) will lead to different results for $`D`$ and $`\overline{D}`$ due to kaon $`CP`$-violation. The difference is still present after integration over $`t_K`$. Theoretically, such a $`t_K`$-integrated effect was first demonstrated some years ago for the sequence $`D^\pm \to \pi ^\pm K^0(\overline{K}^0),K^0(\overline{K}^0)\to \pi ^+\pi ^{-}`$ . For a recent (but not quite correct) discussion of the small-$`t_K`$ region, see . We emphasize that similar effects of the kaon $`CP`$-violation should appear in all decays of heavier flavor hadrons, both mesons and baryons, to neutral kaons.
The presence of $`D`$-meson $`CP`$-violation does not eliminate the discussed effect. Moreover, kaon $`CP`$-violation in cascade decays appears to be coherent with the $`D`$-meson $`CP`$-violation and may be used to analyze its details. Thus, we may have one more example of the analyzing power of neutral kaons.
## 5 Conclusion
In summary, we see that neutral kaons, being decay products, may provide great analyzing power for very detailed studies of heavier flavor hadrons and their decays.
## Acknowledgments
Correspondence with B.Kayser, H.Lipkin, Y.Nir, J.P.Silva and Z.-Z.Xing was useful for me in preparing this talk.
# Spectroscopic observations of convective patterns in the atmospheres of metal-poor stars
## 1 Introduction
The solar granulation pattern observed by direct imaging in the optical continuum is the result of the convective motions in the solar envelope. The velocity fields and spatial patterns present in the solar photosphere leave a signature on spectra, even when these lack spatial resolution. Line asymmetries reveal similar shapes for most lines (Dravins, Lindegren & Nordlund 1981), while line shifts become gradually bluer when the line formation occurs deeper in the photosphere (Allende Prieto & García López 1998a, hereafter APGL). These features cannot be realistically explained by any other known mechanism such as isotopic shifts, hyperfine structure, pressure shifts, or line blends.
Similar effects are expected to be present in other stars. Over the last two decades, David Gray and collaborators (see, e.g., Gray 1982, Gray & Toner 1986, Toner & Gray 1988, Gray & Nagel 1989, Gray et al. 1992) and Dainis Dravins (see, e.g., Dravins 1982, 1987a, 1987b) have extended the measurement of line asymmetries to many other stars, confirming the expectations: the shapes of the line bisectors of late-type stars with convective envelopes are similar to the solar case. Surprisingly, the opposite curvature is found in the bisectors of hotter stars, which are not expected to develop convective envelopes. The difficulties in obtaining highly accurate radial velocity measurements and the need to separate the radial velocity, the gravitational shift, and convective shifts have precluded the use of absolute line shifts as a tool to probe convection in late-type stars. The use of differential line shifts has already been attempted by Nadeau & Maillard (1988) for M giants.
Gray (1982) and Gray & Toner (1986) identified a sequence in the line asymmetries for late-type dwarfs, giants, and supergiants. Using no information about the line shifts, they averaged line bisectors for lines of different depths at the line center, showing that this method is very useful as a first order approach to understanding the changes of the granular patterns with atmospheric parameters.
In the last two decades, great advances have been made in the modeling of convection in stellar atmospheres. Three-dimensional hydrodynamical numerical simulations of stellar atmospheres are now able to reproduce the observed line asymmetries, opening up the possibility of understanding convective velocity fields in the photospheres of stars other than the Sun (see, e.g., Nordlund & Dravins 1990). Although empirical models of granulation can be constructed and may be very useful to disentangle the interplay between velocity fields and granulation contrast, full hydrodynamical modeling is the most powerful tool for understanding the physical mechanism behind the observed convective patterns.
Of particular significance is the proper understanding of convection and the effects of convective inhomogeneities in metal-poor stars on the main sequence or close to it (subdwarfs, subgiants). Abundance analyses of such stars are fundamental in the study of primordial nucleosynthesis as well as the early evolution of the Galaxy. In these stars, the convection zones and the resulting inhomogeneities reach visible photospheric layers, mainly due to the high transparency of the gas because of the low electron pressure and the lack of metal absorption in the ultraviolet. The latter circumstance also leads to a hot non-local radiation field in the near ultraviolet which may induce severe departures from local thermodynamic equilibrium (LTE). It is thus very important to explore the effects of convection and departures from LTE in stars of this type.
Detailed comprehension of surface convection in metal-poor stars is of high importance for fine analysis of spectral line shapes, such as the retrieval of isotopic ratios of lithium (Smith, Lambert & Nissen 1998), boron (Rebull et al. 1998), or barium (Magain & Zhao 1993), which have to rely on one-dimensional models that may introduce an important uncertainty.
We have acquired spectra of adequate quality of two metal-poor stars to address the fundamental question of whether or not the three-dimensional inhomogeneities and the convective velocity motions in the photospheres of metal-poor stars lead to severe errors in the results obtained from the use of one-dimensional model atmospheres in their spectroscopic analysis. In this paper, we select clean profiles, measure, and average line bisectors of different lines in order to compare convection in the photospheres of solar composition and metal-poor stars. After giving the details of the observing and reduction procedure in §2, we shall discuss the solar case and carefully check the quality of the spectroscopic observations in §3. The analysis of line asymmetries in the photospheres of the metal-poor compared to the solar composition stars is the subject of §4 and §5.
## 2 Observations
We have selected two well-known field stars belonging to Population II: the moderate-metallicity dwarf Gmb1830 (HD103095, HR4550; \[Fe/H\] $`\sim `$ –1.3; G8 V) and the more metal-poor subgiant HD140283 (\[Fe/H\] $`\sim `$ –2.7; G0 IV). The Sun and the solar-metallicity star $`\theta `$ UMa (HD82328, HR3775; F6 IV) were included in the program to be used as references.
Observations were carried out during three campaigns from 1995 to 1997 using the 2dcoudé echelle spectrograph (Tull et al. 1995) coupled to the Harlan J. Smith 2.7m Telescope at McDonald Observatory (Mt. Locke, Texas). The cross-disperser and the availability of a 2048×2048-pixel CCD detector made it possible to gather up to 300 Å in a single exposure, in a series of non-overlapping segments. The set-up provided resolving powers ($`\lambda /\mathrm{\Delta }\lambda `$) in the range 170,000-220,000. As many 1/2-hour exposures were acquired as were needed to reach a final signal-to-noise ratio (SNR) of $`\sim `$ 300–600. Table 1 describes the three observational campaigns devoted to the program.
A very careful data reduction was applied using the IRAF software package (IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation), and consisted of: overscan (bias) and scattered-light subtraction, flatfielding, extraction of one-dimensional spectra, wavelength calibration, and continuum normalization. Wavelength calibration was performed for each individual image on the basis of $`\sim `$ 300 Th-Ar lines spread over the detector. The possibility of acquiring daylight spectra with the same spectrograph allows us to perform a few interesting tests. Comparison of the wavelengths of 60 lines in a single daylight spectrum (SNR $`\sim `$ 400-600, depending on the spectral order) with the highly accurate wavelengths measured in the solar flux spectrum by Allende Prieto & García López (1998b) showed that the rms differences were at the level of 58 m s⁻¹ ($`\frac{1}{11}`$ pixel).
Before coadding the individual one-dimensional spectra, they were first cross-correlated to correct for the change in Doppler shifts and instrumental displacements, such as those produced by the variation of weight as the CCD’s liquid-nitrogen dewar empties. Fig. 1 shows the measured shifts between different spectra of HD140283, relative to the first of them, on the night of May 20, 1995. The observed shifts (joined by the solid line) do not correspond to those expected from the difference of velocities in the line of sight between the Earth and the Sun, indicated by circles in Fig. 1. Our procedure introduced an uncertainty in the wavelength scale, whose magnitude depends on the SNR, the presence of telluric lines, and the time separation among the individual spectra. When the standard deviation of the velocity shifts from the cross-correlation of the different available orders was below $`\sim `$ 150 m s⁻¹, the frames were co-added to increase the SNR. This strategy ensures that the errors introduced in the line shifts when co-adding the spectra are of the order of $`\frac{150}{\sqrt{N}}`$ m s⁻¹ or less, where N, the number of useful orders, is in the range 8–17. In this way no significant extra asymmetry is artificially produced.
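As an illustration of this step, the following Python/NumPy sketch (the function and array names are our own; the shifts are assumed to come from the cross-correlation described above, with an illustrative sign convention) places the exposures on a common velocity zero and co-adds them with $`\frac{1}{\sigma ^2}`$ weights:

```python
import numpy as np

c_kms = 2.99792458e5  # speed of light in km/s

def coadd(wave, fluxes, sigmas, shifts_kms):
    """Shift individual exposures to a common zero and co-add them.

    wave       : common wavelength grid (angstroms), shape (npix,)
    fluxes     : individual exposures, shape (nexp, npix)
    sigmas     : per-exposure noise estimates, shape (nexp,)
    shifts_kms : velocity shifts from cross-correlation, shape (nexp,)
    """
    shifted = []
    for flux, dv in zip(fluxes, shifts_kms):
        # Doppler-shift the wavelength scale and resample onto `wave`
        # (dv is the shift to be removed from this exposure)
        shifted.append(np.interp(wave, wave * (1.0 - dv / c_kms), flux))
    shifted = np.array(shifted)
    w = 1.0 / np.asarray(sigmas) ** 2          # 1/sigma^2 weights
    return (w[:, None] * shifted).sum(0) / w.sum()
```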
Fig. 2 demonstrates how well the shifts are determined for the individual spectra of HD140283 taken on May 20 1995. The left panel shows the Fe I line at 5393 Å in the different spectra, after correcting the shifts displayed in Fig. 1, while the right panel shows the resulting pattern, ten times magnified, after subtracting the mean spectrum. It is apparent in this figure that no significant residuals remain.
A final test to check that our procedure leads to consistent results is to compare the bisectors measured in the individual exposures with the averaged bisector, and with the bisector measured in the averaged spectrum. This is shown in Fig. 3, for the series in Fig. 1. The average bisector (thick solid line) is in agreement with the bisector of the average line profile (dashed line). The bisectors were averaged using normalized weights $`\frac{1}{\sigma ^2}`$, which corresponds to the weighting performed when co-adding the spectra. While the direct average of the bisectors relative to the line center avoids the shifting of the spectra to the same zero, it requires the location of the centers of the line profiles in the noisy individual exposures, increasing the errors in the final bisector. Alternatively, the final bisector can be measured directly on the averaged spectrum, computed after safely shifting the individual spectra to a common zero through simultaneous cross-correlation of all the spectral orders recorded in a given image. The latter procedure has been adopted in this work.
## 3 The Sun: accuracy of the measurements
The highest quality high resolution stellar spectra currently available are those of the Sun. The study of line asymmetries and shifts in the optical solar spectrum has been the subject of much work since the early studies of St. John (1928) and Burns and collaborators at Allegheny Observatory (Burns 1929, Burns & Kiess 1929, Burns & Meggers 1929). There are recent measurements in the center of the solar disk, at different positions across it, in total flux, and with high spatial and time resolution (see, e.g., Pierce & Breckenridge 1973, Dravins, Lindegren & Nordlund 1981, Livingston 1982, Balthasar 1984, Brandt & Solanki 1990, Stathopoulou & Alissandrakis 1993, APGL), but a comprehensive study and classification of the line asymmetries in the flux spectrum of the Sun is still missing. Such a study, providing an average solar bisector, could serve as a template for comparison with other stars.
Making use of the solar flux spectrum in the atlas of Kurucz et al. (1984), we have measured the line asymmetries of 39 Fe I clean lines selected by Meylan et al. (1993) from the fit of Voigt profiles in the same atlas. The line asymmetries were quantified by means of the bisectors at a given set of flux levels from the continuum. Irregular parts in the line profiles were removed and excluded from the average. All the lines were averaged together to produce a mean flux bisector, as is commonly done with stellar bisectors (Fig. 4; solid line with error bars indicating the mean error). A new average bisector (Fig. 4; solid line with shaded area indicating the mean error) was constructed taking into account the line shifts for Fe I lines measured by APGL from the same atlas, to place the individual bisectors on an absolute scale. In Fig. 4, the absolute mean flux bisector has been shifted from its true (bluer) position to overlap with the mean flux bisector computed without taking into account the blueshifts of the individual lines. The differences between the profiles with and without correction for the velocity shifts are comparable to the precision with which bisectors are measured but, more remarkably, the velocity difference or span between the bluest and the reddest parts of the mean bisectors does not change.
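For reference, a minimal sketch of the bisector measurement itself is given below (Python/NumPy); the choice of flux levels and the simple linear interpolation of the two wings are our own simplifications, and clean, monotonic wings are assumed:

```python
import numpy as np

def line_bisector(wave, flux, levels):
    """Bisector of an absorption line at the given residual-flux levels.

    Returns the mid-point wavelength of the two wings at each level;
    velocities then follow from v = c * (lambda_bis - lambda_0) / lambda_0.
    Assumes a single minimum and wings that rise monotonically to the
    continuum on both sides.
    """
    ic = int(np.argmin(flux))                 # index of the line core
    out = []
    for f in levels:
        if f <= flux[ic]:
            out.append(np.nan)                # level lies below the core
            continue
        # blue wing, ordered core -> continuum so flux is increasing
        lam_b = np.interp(f, flux[:ic + 1][::-1], wave[:ic + 1][::-1])
        # red wing, already ordered core -> continuum
        lam_r = np.interp(f, flux[ic:], wave[ic:])
        out.append(0.5 * (lam_b + lam_r))
    return np.array(out)
```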
As listed in Table 1, we have acquired day-light spectra in different spectral ranges. Figure 5 compares the bisector measured in these spectra (curve with error bars) to that measured in the solar FTS atlas of Kurucz et al. (1984) from three Fe I lines ($`\lambda \lambda `$ 5217, 5296, and 5933 Å). The agreement is very good, indicating, as expected, that the errors in our measurements are below $`\sim `$ 60 m s⁻¹. This comparison warrants the adequacy of the procedure employed to remove the scattered light, as this problem is much more important for the day-light spectra than for the stellar spectra. The average of 14 lines in the McDonald day-light spectrum provides a solar flux mean bisector quite similar to that previously calculated from the FTS atlas (see below). Here and hereafter, error bars for the bisectors of individual lines are computed following the considerations in Gray (1983).
The convective line asymmetries depend on the velocity fields, the granulation contrast, and the area occupied by the rising granules and the falling intergranular lanes in the line formation layers. As a result of this, convective line asymmetries have been observed to vary with the line depth and the atomic parameters (chemical species, excitation potential, transition probability, etc.), i.e., all those parameters defining the height of the line formation region in the atmosphere. Some spectral lines can be affected by other sources of asymmetry, such as isotopic shifts or hyperfine structure splitting. These are difficult to estimate as accurate laboratory data are missing (see, e.g., Kurucz 1993). Lacking laboratory studies and accurate calculations on collisional broadening, we cannot definitely exclude the possibility that some lines are affected by pressure shifts, inducing line asymmetries (Allende Prieto, García López & Trujillo Bueno 1997). These facts make the use of average stellar bisectors questionable. Ideally, detailed modeling of the convective line asymmetries should be carried out by comparison of predicted and observed profiles of individual lines. However, there is a common behavior of the line asymmetries for most of the lines in the solar and stellar spectra and, at least for the solar case, the velocity span of the mean bisector does not change much whether or not the absolute positions of the individual lines are taken into account. Furthermore, there is a remarkably smooth variation of the average stellar bisectors with spectral type and luminosity class (Gray 1982, Gray & Toner 1986). This justifies the use of mean bisectors as a first approach to the classification of the line asymmetries across the HR diagram.
## 4 Metal-poor stars. Photospheric line asymmetries.
For the most metal-poor star in the sample (HD140283) the task of selecting lines free of blends and anomalous shapes is straightforward, because the lack of metals greatly reduces the overlapping of different lines. For the less metal-poor stars, blending becomes more common, and the selection more difficult. An initial average and its standard deviation were calculated, and then a new average was computed excluding the points deviating significantly from the first one. Finally, it was verified that a more critical selection, keeping only the bisectors which exhibited the dominant shape, provided almost identical results in all cases.
### 4.1 The dwarf Gmb1830
We have identified a total of 25 clean lines in the spectral range available for Gmb1830 (G8 V). Their wavelength, suggested identification, equivalent width, and excitation potential are listed in Table 2. We proceeded as described in the preceding section for the Fe I lines in the solar atlas, and obtained the mean flux bisectors for Gmb1830 and the sky-light spectra acquired at McDonald. They are listed in Table 3. Figure 6a shows all the bisectors measured in Gmb1830, and the mean flux bisector for this star, surrounded by error bars describing the mean error. Fig. 6b is similar to Fig. 6a, but for the Sun (from 14 clean lines observed at McDonald). The line bisector shapes are not highly homogeneous, but the typical C shape, which can be attributed to the effects of convection, is apparent for most of the lines.
Line bisectors for F-K dwarfs behave in such a way that, although the solar-like C-shape is common to all, the velocity span shows the smallest values around the spectral class G8 (Gray 1982). Direct comparison of the mean bisectors in Figure 7 shows that the mean bisector of Gmb1830 (mean errors represented with error bars) has a smaller velocity span than that of a solar-metallicity G2 star (the Sun; mean errors in gray). This implies that, for this moderately metal-poor dwarf, we do not detect any significant signature of the lower metallicity in its mean flux bisector.
Rotation strongly affects the shape of the integrated-light line asymmetries (see Gray 1986; Smith, Livingston & Huang 1987; Dravins & Nordlund 1990). However, the rotational velocities of the Sun (1.9 km s⁻¹; Gray 1992) and Gmb1830 (2.2 km s⁻¹; Fekel 1997) are quite close, and their asymmetries can therefore be directly compared. Several other factors may be playing a significant role in this comparison. The presence of one or more unresolved companions could well induce systematic line asymmetries in the spectral lines that might be wrongly interpreted as convective patterns. Beardsley, Gatewood, & Kamper (1974) claimed a detection of radial velocity variations in the spectra of Gmb1830, but this claim was not confirmed (Griffin 1984; Heintz 1984). The Hipparcos catalogue (ESA 1997) has found the star to show photometric variations spanning 0.14 mag., and although the same catalogue does not register a visual companion for Gmb1830 within 10 arcseconds, it has been claimed in the past (see Beardsley et al. 1974) that the star has a fainter (5-5.5 mag.) companion.
Cyclical activity and magnetic variations are known to be linked to changes in oscillation and granulation properties (Gray & Baliunas 1995; Jiménez-Reyes et al. 1998). Radick et al. (1998) have detected solar-like periodic variability in the Ca II H and K emission of Gmb1830 with a period close to seven years. The possible differences between the mean bisector of Gmb1830 and the Sun associated with their different metallicities may indeed have been diluted by some of these effects.
### 4.2 The subgiant HD140283
As a result of the very low metal content of the star, the high resolution spectrum of HD140283 shows only a few lines. The practical advantage is that the lines present are quite clean. A total of 24 lines were selected in the three spectral ranges available for HD140283 (G0 IV), while only 16 were considered clean in the case of $`\theta `$ UMa (F6 IV), a comparison solar-metallicity star. The data on the lines selected are also included in Table 2. All the measured bisectors in these stars are plotted in Figure 8; the mean flux bisector is overplotted with the mean error marked by the error bars. Table 4 lists the flux mean bisectors. Almost every line detected in the spectrum of HD140283 was considered clean. The homogeneity of the bisector shape is higher than for the cooler dwarfs studied in the preceding sub-section.
Fig. 9 directly compares the flux mean bisectors for the two stars and the Sun (solid line with the mean error marked in grey). The expectation based on the smooth trends with the spectral class found by Gray (1982) is that the hotter star ($`\theta `$ UMa; F6 IV; dashed line with error bars) should have photospheric bisectors with the larger velocity span. That is in clear contradiction with Fig. 9, where the bisector corresponding to HD140283 (solid line with error bars) shows a red asymmetry as high as $`\sim `$ 300 m s⁻¹. The obvious suggestion is that the abnormal behavior of the line bisectors measured in the spectrum of HD140283 is the result of its very low metallicity (a factor of $`\sim `$ 500 less than the Sun or $`\theta `$ UMa).
In this case there is a significant difference in the rotational velocities between HD140283 ($`\sim `$ 3.5 km s⁻¹; Magain & Zhao 1993) and $`\theta `$ UMa (6.4 km s⁻¹; Fekel 1987). This difference is large enough to produce a noticeable effect. However, the comparison with the more slowly rotating Sun (G2 V; differences between luminosity classes V and IV are likely to be negligible, see Gray 1982) is not affected by this parameter, and leads to the same result.
The time span of our observations (three years) suggests that the line asymmetries observed in HD140283 are stable. The star has been monitored for radial velocity variations with a negative result (Carney & Latham 1987, Mazeh et al. 1996). $`\theta `$ UMa has been claimed to show periodic radial velocity variations by Abt & Levy (1976), but this has been called into question by the analysis of Morbey & Griffin (1987).
## 5 Mg I b₁ and b₂ lines in the spectrum of HD140283
Mean flux bisectors studied in §4 correspond to photospheric lines, and characterize the velocity fields only in this region of the atmosphere. However, velocity fields in upper layers, such as the chromosphere (Samain 1991, García López et al. 1992), the transition region, or the corona (Brekke et al. 1998) of late-type stars have been shown to be much stronger. While the observed photospheric line shifts are directed bluewards and amount to a few hundred meters per second, transition-region and coronal lines are shifted to the red by kilometers per second (Wood et al. 1996, Wood, Linsky & Ayres 1997).
We have measured the asymmetries of the strong Mg I b₁ and b₂ lines at 5183 and 5172 Å, respectively, whose cores are known to form higher up than the photospheric layers in the solar atmosphere, in the spectrum of HD140283. The line bisectors are displayed in Fig. 10, and compared with the mean flux bisector. In this Figure, the zero velocity is again arbitrarily set to the bottom of the lines. They exhibit a C-shape, although they are quite different from the bisectors of the photospheric lines. This could indicate a larger dissimilarity, compared with the photospheric lines, between the velocity fields and the inhomogeneities in the layers where the core and the wings are formed. Assuming the upper parts of the photospheric and the strong-line bisectors overlap (as suggested by the shape and the velocity span), the excursion of the line bottom to the red would be tracing the disappearance of the photospheric correlation between temperature and velocity, and therefore the convective blueshift, towards higher atmospheric layers.
Alternatively, the peculiar shape of these bisectors might be the result of the presence of significant contributions of different isotopes of magnesium. There are three magnesium isotopes in the solar-system mixture, ²⁴Mg, ²⁵Mg, and ²⁶Mg, whose abundance fractions are 0.7899, 0.1000, and 0.1101 (Anders & Grevesse 1989), but it is established that the fractions of ²⁵Mg and ²⁶Mg relative to ²⁴Mg decline with metallicity (Barbuy, Spite & Spite 1987, McWilliam & Lambert 1988), and the resulting asymmetry is expected to be very small for a star like HD140283.
Unlike the solar metallicity stars, the lack of line crowding makes it possible to measure the asymmetries of lines which form between photospheric and chromospheric layers in the extreme metal-poor stars. This could be an important tool to understand how the dynamics of the atmosphere changes from producing blueshifts in the photosphere to redshifts in higher layers. The observations obviously must constrain future three-dimensional simulations of photospheric dynamics further out from the center of the star.
## 6 Summary and conclusions
We have searched for differences between the convective velocity fields and granular motions of metal-poor stars and solar metallicity stars by observing line asymmetries in the optical spectra at very high resolution.
Clear differences have been found for the most metal-poor star in our sample, probably reflecting the low opacity of the metal deficient atmospheres and the changes in visible convective flow patterns due to this. The line asymmetries found in this case show a significantly different shape as compared with its solar-metallicity counterpart, perfectly distinguishable from the observed line-to-line differences.
The lack of metal line blends and the relatively narrow line wings in metal-poor stars make it possible to measure the line asymmetries in strong lines such as the Mg I b₁ and b₂, whose cores are formed higher in the atmosphere, possibly revealing a convective pattern rapidly changing with depth which shows up as a markedly redder asymmetry in the line core, as compared with photospheric lines.
Detailed comparison between observed line profiles and three-dimensional numerical simulations of the photospheres of late-type stars, as affected by the underlying convective dynamics, should be a powerful tool to improve our understanding of the atmospheric structure and dynamics of these objects (Allende Prieto et al. 1999, Asplund et al. 1999). Such comparisons should give rise to more reliable abundance analyses for these stars.
We thank the staff at McDonald Observatory, in particular David Doss, for their kind and professional help. The comments of the referee were particularly helpful to improve some aspects of both the contents and the presentation. This work has been partially funded by the Spanish DGES under projects PB92-0434-C02-01 and PB95-1132-C02-01, the National Science Foundation (grant AST961814), and the Robert A. Welch Foundation of Houston, Texas.
# Environmental Dependence of the Fundamental Plane of Galaxy Clusters
## 1 Introduction
Finding regularities in the characteristics of objects is one of the bridges from data collection to the development of theoretical understanding. A now widely cited regularity concerns the Fundamental Plane (FP) of galaxies (Djorgovski and Davis 1987; Dressler et al. 1987). When three parameters describing galaxies (various formulations are possible) are plotted, the points approximate a plane in the parameter space, telling us that there are really only two independent variables.
More recently, the same approach has been applied to galaxy clusters and a Fundamental Plane has been found here, too (Schaffer et al. 1993; Adami et al. 1998). Fritsch and Buchert (1999; hereafter FB) examine not only the cluster FP but also the scatter about it. They define (non-uniquely) the FP in terms of total optical luminosity $`L_o`$, half-light radius $`R_o`$, and X-ray luminosity $`L_x`$. The FP can be used to predict and/or qualify physical characteristics. For instance, FB examine substructure in galaxy clusters, defined as a lack of symmetry and misalignment of concentric isophotes. They find that clusters with strong substructure lie far from the FP. They define the FP as the plane clusters take in the absence of substructure, and the “empirical plane” (EP) as the plane best fit to all clusters in their sample. The EP and FP are very close together in their study.
Fujita and Takahara (1999, hereafter FT) have also defined a different fundamental plane using other parameters: gas density, core radius, and temperature as determined for a set of clusters published in Mohr et al. (1999). The gas densities published in Mohr et al. were determined differently for clusters with and without cooling flows, $`\rho _2`$ and $`\rho _1`$ respectively. When a cooling flow is present, FT convert $`\rho _2`$ to $`\rho _1`$ which is more representative of a cluster’s global structure. Thus, for cooling flow clusters, $`\rho _2`$ represents the central gas density as determined in the region of excess emission (the innermost region of a cooling flow cluster) and $`\rho _1`$ represents the converted gas density over a larger (but still central) region of the cluster (see FT and Mohr et al. for further details). FT use $`\rho _1`$ to create their fundamental plane which has much less scatter than the plane of FB.
Recently, it has proven possible to find regularities in the internal properties of clusters using a highly uniform, complete catalog of galaxy cluster redshifts (Miller et al. 1999a; Slinglend et al. 1998). Novikov et al. (1999) described an alignment between the wind direction distorting radio jets inside clusters and the long axis of the supercluster in which the cluster is embedded. Loken, Melott, and Miller (1999) present evidence that the existence of massive cooling flows is correlated with close proximity to other clusters. The results presented here may present a partial explanation for that result.
## 2 Data Analysis
We use a subset of the Abell (1958) and Abell, Corwin, and Olowin (1989) cluster catalogs with conservative cuts that enhance completeness. Clusters are retained if they have richness $`R\ge 1`$ and redshift $`z\le 0.10`$, are not close to the galactic plane, and have redshifts measured (not estimated) from multiple galaxies. With these cuts, the remaining clusters constitute a 98% complete volume-limited sample. Briefly, this sample has only minimal projection effects and few line-of-sight anisotropies (similar in degree to the APM cluster sample; Dalton et al. 1994). In addition, most (80%) of the clusters in our sample have three or more measured galaxy redshifts. Miller et al. (1999a) have shown that cluster redshifts determined from only one galaxy are in error by over 2500 km s⁻¹ at least 14% of the time. Also, magnitude-redshift relations typically have at least a 25% scatter. A nearest-neighbor study, such as the one presented here, requires accurate cluster redshifts which can only come through multiple-galaxy observations. Additional evidence for the quality of this dataset comes from the Voges et al. (1999) finding that 90% of $`R\ge 1`$ Abell clusters (to $`z=0.09`$) are X-ray bright. Finally, we point out that the cluster sample has a nearly constant number density to $`z=0.10`$. Therefore, even as additional dimmer clusters are eventually observed, very few will fall into the volume surveyed here.
### 2.1 Environmental Correlations and the FB Plane
FB kindly provided the data used in their study. We examined the scatter about their FP. The FB study used the logarithm of optical luminosity $`L_o`$, of X-ray luminosity $`L_x`$, and of half-light radius $`R_o`$ for the clusters; the plane actually lies in the space spanned by these axes. (We refer to the axes which lie in the plane and orthogonal to it as the principal axes, and to the measurables as the physical axes.) Distances from the plane are dimensionless and based on the $`log_{10}`$ of these measurables. This enabled us to associate a distance from the FP with each cluster. We found 23 clusters present in FB for which we were able to reliably define a nearest-neighbor distance. Clusters originally in FB but not in our study were excluded for not having a neighbor closer than the survey boundary or for being an $`R=0`$ cluster.
We then looked for a correlation between displacement from the fundamental plane ($`d_p`$) in parameter space and distance from the nearest neighbor ($`d_n`$). Displacement from the fundamental plane was given a sign because the parameters are not symmetric about it. There are no published errors on the parameters used to define the fundamental plane in FB.
Distances in redshift space to all of the clusters in our parent sample were determined for a Friedmann-Robertson-Walker Universe with $`q_0`$ = 0 and $`H_0=100`$ km s⁻¹ Mpc⁻¹. (The choice of $`H_0`$ does not affect our correlations, and the effect of a reasonable $`q_0`$ is much smaller than other uncertainties which exist.) To account for biasing effects caused by the survey geometry, we have excluded any cases where the distance to the edge of the volume was smaller than $`d_n`$. This left 248 clusters with a nearest-neighbor distance. When calculating $`d_n`$, we have allowed for errors in each cluster’s spatial coordinates according to:
$$\mathrm{\ell }_i=\frac{7h^{-1}\mathrm{Mpc}}{\sqrt{N_{cl}}}$$
(1)
where $`i=x,y,z`$ and $`N_{cl}`$ is the number of galaxies used for the mean cluster redshift. We chose $`7h^{-1}`$ Mpc since it is very near the typical velocity dispersion ($`\sim 700`$ km s⁻¹) of rich clusters (e.g., Zabludoff et al. 1993). We obtain $`N_{cl}`$ from a variety of sources including Struble and Rood (1991), Postman, Huchra and Geller (1992), Zabludoff et al. (1993), Katgert et al. (1996) and Slinglend et al. (1998). The error on each spatial coordinate is typically around $`1.5h^{-1}`$ Mpc, which is rather conservative considering that an entire Abell radius is defined as $`1.5h^{-1}`$ Mpc. We then propagate the errors on $`x,y,z`$ through to calculate $`\sigma _i`$ for each nearest-neighbor distance, $`d_n`$.
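A sketch of the nearest-neighbor computation with this error budget (Python/NumPy; the array names are hypothetical) is:

```python
import numpy as np

def nearest_neighbors(xyz, n_gal):
    """Nearest-neighbor distance d_n and its error for each cluster.

    xyz   : comoving positions, shape (ncl, 3), in h^-1 Mpc
    n_gal : number of galaxies per mean cluster redshift, shape (ncl,)
    """
    # Per-coordinate error, Eq. (1): 7 h^-1 Mpc / sqrt(N_cl)
    ell = 7.0 / np.sqrt(np.asarray(n_gal, float))

    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-pairs
    j = d.argmin(axis=1)                      # index of nearest neighbor
    d_n = d[np.arange(len(xyz)), j]

    # Propagating the isotropic coordinate errors of both clusters
    # through d_n gives sigma^2 = ell_a^2 + ell_b^2, since the squared
    # direction cosines of the separation vector sum to unity.
    sigma = np.sqrt(ell**2 + ell[j]**2)
    return d_n, sigma
```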
We define a weighted correlation coefficient (Bevington, 1969)
$$K\equiv \frac{s_{d_n,d_p}^2}{s_{d_n}s_{d_p}}$$
(2)
where
$$s_{d_n,d_p}^2\equiv \frac{\frac{1}{N-1}\left[\sum _i\frac{1}{\sigma _i^2}(d_n^i-\overline{d}_n)(d_p^i-\overline{d}_p)\right]}{\frac{1}{N}\sum _i\frac{1}{\sigma _i^2}}$$
(3)
and
$$s_{d_n}^2\equiv \frac{\frac{1}{N-1}\left[\sum _i\frac{1}{\sigma _i^2}(d_n^i-\overline{d}_n)^2\right]}{\frac{1}{N}\sum _i\frac{1}{\sigma _i^2}}$$
(4)
and
$$s_{d_p}^2\equiv \frac{1}{N-1}\sum _i(d_p^i-\overline{d}_p)^2.$$
(5)
We find $`K=-0.41`$, a fairly strong correlation. This is a 2.0$`\sigma `$ result, indicating 97.6% confidence that it did not arise as a chance fluctuation. Ignoring the sign of $`d_p`$ considerably weakens the result. If we apply equal weights, the correlation increases to $`K=-0.53`$, a 2.5$`\sigma `$ result.
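For definiteness, a direct transcription of Eqs. (2)-(5) into Python (the variable names are ours; plain means are used for $`\overline{d}_n`$ and $`\overline{d}_p`$, which is one possible reading of the definitions):

```python
import numpy as np

def weighted_K(d_n, d_p, sigma):
    """Weighted correlation coefficient K of Eqs. (2)-(5).

    d_n   : nearest-neighbor distances
    d_p   : signed displacements from the fundamental plane
    sigma : errors on d_n; the weights are 1/sigma^2
    """
    d_n, d_p, sigma = (np.asarray(v, float) for v in (d_n, d_p, sigma))
    N = len(d_n)
    w = 1.0 / sigma**2
    norm = w.sum() / N                        # (1/N) * sum_i 1/sigma_i^2
    dn = d_n - d_n.mean()
    dp = d_p - d_p.mean()

    s_np = (w * dn * dp).sum() / (N - 1) / norm    # Eq. (3)
    s_nn = (w * dn**2).sum() / (N - 1) / norm      # Eq. (4)
    s_pp = (dp**2).sum() / (N - 1)                 # Eq. (5)
    return s_np / np.sqrt(s_nn * s_pp)             # Eq. (2)
```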
We also looked for correlations between $`d_n`$ and the three physical axes as well as the other two principal axes of the FP. To calculate $`K`$ in these cases, we replaced $`d_p`$ in equations 3, 4, and 5 with the parameters under examination. We found no significant correlations between $`d_n`$ and any of the other quantities. Our main finding – that the distance a cluster lies from the fundamental plane (defined by FB) in parameter space correlates with its nearest-neighbor distance – cannot be attributed to any single parameter alone. Nor do we find evidence that the position projected onto the FP is dependent upon $`d_n`$.
### 2.2 Environmental Correlations and the FT Plane
FT examined and systematized data gathered by Mohr et al. (1999). We also examined the FT data, using $`\rho _1`$, core radius, and gas temperature; there were 14 clusters with nearest neighbors in our data set. In this case, $`d_p`$ has a rather small range, since the FT plane is thin, and we found the reasonable result that there were no significant correlations between it and $`d_n`$, nor are there between $`d_n`$ and the other two principal axes of this ribbon-like FP. Neither was $`d_n`$ significantly correlated with core radius, temperature, or converted gas densities (all $`\rho _1`$ as in FT) considered alone. However, when we examine $`d_n`$ vs. central gas density – using $`\rho _2`$ when a cooling flow is present and otherwise $`\rho _1`$ – we find a 2.5$`\sigma `$ (99.4% confidence) result: a correlation coefficient of $`K=-0.67`$. In this case, we used errors on $`d_n`$ as described earlier and errors on $`\rho `$ as published in Mohr et al. If we apply equal weights to each value of $`d_n`$ and $`\rho `$, the correlation falls to $`K=-0.55`$. This property of X-ray clusters – high central gas density in clusters close to other clusters – may explain a key result of Loken et al. (1999): the tendency of cooling flows to occur in clusters with near neighbors. It appears likely that these crowded clusters have gas densities high enough to give the prerequisite short cooling times.
### 2.3 Selection Effects
Although the Abell/ACO sample used to define nearest neighbors is the most complete of its kind available, there remains the possibility that there are undetermined optical selection effects. For instance, clusters in regions of low galactic neutral hydrogen density ($`n_\mathrm{H}`$) might appear brighter. In addition, in regions of low dust and $`n_\mathrm{H}`$, Abell (1958) may have been more likely to find a cluster neighbor in close proximity. However, neither $`L_o`$ nor $`R_o`$ alone shows a strong correlation with $`d_n`$. If it did, we might suspect it were a result of superposition or a similar artifact.
With the recent construction of the Schlegel et al. (1998) reddening maps, it is possible to determine whether our highest $`L_o`$ clusters happen to lie in abnormally low regions of galactic extinction. A strong anti-correlation between regions of low HI column density and richer Abell clusters has been found by Nichol & Connolly (1996). Such an effect could result in a greater number of close cluster pairs, which are also brighter, to contaminate our small sample. Therefore, we have compared the magnitude of reddening, E(B-V), as determined from the Schlegel et al. map for $`4000`$ random locations on the sky to those centered on our 23 clusters. A Kolmogorov-Smirnov (K-S) test shows that the HI distributions are nearly identical and we are not sampling regions of abnormally low HI column densities. In addition, we find no correlations between E(B-V) and $`L_o`$, $`d_n`$ or $`d_p`$.
Another possible selection effect that cannot explain our results is richness. Our sample of 23 clusters contains mostly (18/23) $`R=1`$ clusters and the remaining are $`R=2`$. The mean nearest-neighbor distances as a function of richness for the entire complete catalog are R=1: 19.2, R=2: 16.4, and R=3: 22.3 (in $`h^{-1}`$ Mpc). Furthermore, clusters show no evidence of richness dependence in terms of their displacement from the FP.
## 3 Conclusions
Cluster properties depend on their environment as parameterized by the distance to their nearest external cluster. After ruling out a substantial number of possible selection effects, we find here that there are strong correlations between cluster properties and the proximity of other clusters.
From the FB study, we find that clusters far from and “below” the FP tend to be isolated. As we move “up” toward their plane in the direction of $`L_o^{0.81}R_o^{-0.84}L_x^{0.21}`$, the clusters tend to be much closer in physical space to other clusters. This suggests that optically brighter, more compact clusters are in more crowded environments.
From FT, we did not find any significant correlations along their principal axes. However, there was a strong tendency for central gas density to be higher in X-ray clusters which are close to other clusters. This is consistent with the previous paragraph, and may explain the propensity of such crowded clusters to initiate cooling flows (Loken et al. 1999).
We know from the FB analysis also that clusters close to their FP have much less substructure. Putting this together with our results we can summarize as follows: Clusters in crowded environments tend to have less substructure and higher central gas densities. Together this provides an explanation for the Loken et al. (1999) result: a relaxed cluster with little substructure provides the symmetry and high central density needed to set up a massive cooling flow, and these conditions are found in those clusters located in close proximity with other clusters.
This is reasonable on theoretical grounds: perturbations of a given mass scale (in this case clusters) which lie in a larger region of high amplitude are more likely themselves to be of high amplitude. A higher amplitude implies earlier collapse and more time to relax. Such relaxed clusters are more likely to take a more “universal” locus in parameter space, with less substructure (an artifact of initial conditions, including merger history), a higher central gas density, and the ability to initiate a cooling flow. This picture fits together the cooling flow results of Loken et al. (1999), the substructure correlations found by FB, and the correlations we found by environmental study of the FB and FT data groups.
Although our sample sizes (23 and 14) are rather small, results of fairly high confidence exist. We take this as evidence of the strength of the effect combined with the superior characteristics of the redshift catalog after the cuts were taken.
Acknowledgments
CM was funded in part by the National Aeronautics and Space Administration and the Maine Science and Technology Foundation. ALM acknowledges the support of the NSF-EPSCoR program, the hospitality of Carnegie Mellon University during part of this work, and the authors of FB for sharing their data. Jim Fry and David Batuski made helpful suggestions.
# Ground-State Properties of a Rotating Bose-Einstein Condensate with Attractive Interaction
## Abstract
The ground state of a rotating Bose-Einstein condensate with attractive interaction in a quasi-one-dimensional torus is studied in terms of the ratio $`\gamma `$ of the mean-field interaction energy per particle to the single-particle energy-level spacing. The plateaus of quantized circulation are found to appear if and only if $`\gamma <1`$ with the lengths of the plateaus reduced due to hybridization of the condensate over different angular-momentum states.
The Hess-Fairbank effect —disappearance of the angular momentum (AM) of liquid helium 4 as it is cooled down to absolute zero with its container kept rotating slowly—is an analogue of the Meissner effect in superconductivity, and it may therefore be regarded as a hallmark of superfluidity. The essential requisites for the appearance of this effect are the single-valuedness of the wave function and the presence of a single Bose-Einstein condensate (BEC). Recent realization of BEC of lithium 7 has opened up new possibilities associated with the attractive interaction between atoms; here the Fock exchange interaction could energetically favor the formation of hybrid BECs, which might modify the quantization of circulation and the Hess-Fairbank effect. In this Letter we investigate these possibilities in terms of the conceptually simple geometry of a quasi-one-dimensional torus.
We consider a system of $`N`$ weakly interacting bosons with mass $`M`$, confined in a torus of radius $`R`$ and cross-sectional area $`S=\pi r^2`$, where for simplicity we assume $`r\ll R`$. This condition justifies our assumption that the radial wave function is fixed and independent of $`\omega `$ —the angular frequency of rotation of the torus. At sufficiently low temperature, the interaction between dilute hard-core bosons is well approximated by Fermi’s contact interaction, which is characterized by the s-wave scattering length $`a`$. The associated mean-field interaction energy per particle is given by $`gN`$, where $`g=2a\mathrm{\hbar }^2/MRS`$. The positive (negative) sign of $`g`$ implies that the effective interaction between bosons is repulsive (attractive). The Hamiltonian of our system in the rotating frame is given, up to terms which are constant in our approximation, by
$$\widehat{H}(\omega )=\sum _l\mathrm{\hbar }\omega _\mathrm{c}\left(l-\frac{\omega }{2\omega _\mathrm{c}}\right)^2\widehat{c}_l^{\dagger }\widehat{c}_l+\frac{g}{2}\sum _{l,m,n}\widehat{c}_l^{\dagger }\widehat{c}_m^{\dagger }\widehat{c}_{l+n}\widehat{c}_{m-n},$$ (1)
where $`\omega _\mathrm{c}=\mathrm{\hbar }/2MR^2`$ is the critical angular frequency, $`l,m,`$ and $`n`$ denote the projected angular momenta in units of $`\mathrm{\hbar }`$, and $`\widehat{c}_l^{\dagger }`$ and $`\widehat{c}_l`$ are the creation and annihilation operators of bosons with AM $`l`$. In Eq. (1), we have added the term $`\sum _l\mathrm{\hbar }\omega _\mathrm{c}(\omega /2\omega _\mathrm{c})^2\widehat{c}_l^{\dagger }\widehat{c}_l=N\mathrm{\hbar }\omega ^2/(4\omega _\mathrm{c})`$ which is compensated for by the Lagrange multiplier $`\alpha `$ in Eq. (4) and therefore does not modify any result below.
We determine the minimum-energy state of the Hamiltonian (1) within a Hilbert subspace given by $`|\mathrm{\Psi }_{\mathrm{HF}}\rangle =|\mathrm{\dots },n_{-l},\mathrm{\dots },n_{-1},n_0,n_1,\mathrm{\dots },n_l,\mathrm{\dots }\rangle `$, where $`n_l`$ denotes the number of bosons that occupy the state with AM $`l`$. This is nothing but the Hartree-Fock (HF) approximation; other possibilities will be discussed later. Because the total number of bosons is $`N`$, the $`n_l`$’s should satisfy
$$\sum _{l=-\mathrm{\infty }}^{\mathrm{\infty }}n_l=N.$$ (2)
The expectation value of the Hamiltonian with respect to the state $`|\mathrm{\Psi }_{\mathrm{HF}}\rangle `$ is given by
$$E(\{n_l\})=\sum _lK_l(\omega )n_l-\frac{g}{2}\sum _ln_l^2+g\left(N^2-\frac{N}{2}\right),$$ (3)
where $`K_l(\omega )\equiv \mathrm{\hbar }\omega _\mathrm{c}(l-\omega /2\omega _\mathrm{c})^2`$. The distribution of $`\{n_l\}`$ is determined so as to minimize $`E(\{n_l\})`$ subject to condition (2).
Case of repulsive interaction.— We first show that our ansatz wave function $`|\mathrm{\Psi }_{\mathrm{HF}}\rangle `$ reproduces some well-known results. When $`g>0`$, it is possible to simultaneously minimize the kinetic energy and the interaction energy in Eq. (3) with $`n_l=N`$ if $`l=[(\omega +\omega _\mathrm{c})/2\omega _\mathrm{c}]`$ and $`n_l=0`$ otherwise, where the symbol $`[x]`$ denotes the maximum integer that does not exceed $`x`$. This result implies that a single BEC is energetically favorable whether or not it is rotated; Bogoliubov’s virtual-pair excitations only cause a depletion of the condensate and do not alter this conclusion. The single-valuedness of the wave function dictates that the projected AM be quantized in units of $`\mathrm{\hbar }`$, but one needs something more to show that it is quantized in units of $`N\mathrm{\hbar }`$. The Onsager-Feynman condition for the quantization of circulation, in fact, requires the more stringent latter condition. For the case of repulsive interaction, the Fock exchange interaction favors a single BEC , thereby enforcing sharp transitions between different AM states and requiring that the circulation be quantized in a uniform system as considered in this Letter. (In a related context, Castin and Dum have recently considered the stability of vortices for the nonuniform case of parabolic potentials . See also Refs. .)
Case of attractive interaction.— When $`g<0`$, it is impossible to simultaneously minimize the kinetic energy and the interaction energy; the kinetic energy becomes minimal when the distribution $`\{n_l\}`$ peaks sharply at $`l=[(\omega +\omega _\mathrm{c})/2\omega _\mathrm{c}]`$, whereas the interaction energy becomes maximal for this case. Were it not for the kinetic term, the lowest-energy state would be the one in which the distribution of $`n_l`$ is maximally spread; no single state $`l`$ could then be macroscopically occupied, and there would be no BEC. When the system is spatially confined, however, the kinetic term competes with the attractive interaction, allowing a metastable condensate to be formed.
The minimal-energy distribution $`\{n_l\}`$ is determined so as to minimize $`E(\{n_l\})`$ in Eq. (3), subject to condition (2), giving
$$n_l=\frac{N}{\gamma }\left[\alpha -\left(l-\omega /2\omega _\mathrm{c}\right)^2\right],$$ (4)
where $`\alpha `$ is a Lagrange multiplier, and $`\gamma \equiv |g|N/(\mathrm{\hbar }\omega _\mathrm{c})=4N|a|R/S`$ is the ratio of the mean-field interaction energy per particle to the single-particle energy-level spacing. To find an estimate of $`\gamma `$, we rewrite it as $`\gamma \simeq 4\times 10^{-4}N|a|[\AA ]R[\mu m]/S[\mu m^2]`$. For the case of lithium 7 with $`|a|\simeq 14.6`$Å, $`R=1\mu m`$, and $`r=0.2\mu m`$, we have $`\gamma \simeq 0.046N`$. With suitable choice of these parameters, it is possible to prepare the system both with $`\gamma <1`$ and with $`\gamma >1`$.
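As a quick numerical check of this estimate (Python, using the lithium-7 numbers just quoted):

```python
import math

# gamma / N = 4 |a| R / S, with |a| converted to micrometers
a = 14.6e-4              # |a| = 14.6 angstrom = 14.6e-4 micrometer
R = 1.0                  # ring radius in micrometers
S = math.pi * 0.2**2     # cross-sectional area, r = 0.2 micrometer
print(4 * a * R / S)     # ~ 0.046, i.e. gamma ~ 0.046 N
```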
For $`n_l`$ to be positive, there must be minimum and maximum values of $`l`$, i.e., $`-l_1`$ and $`l_2`$. Equation (2) then becomes $`\sum _{l=-l_1}^{l_2}n_l=N`$, which upon substitution of Eq. (4) for $`n_l`$ gives
$$\alpha =\frac{\gamma }{l_1+l_2+1}+\stackrel{~}{\omega }(\stackrel{~}{\omega }+l_1-l_2)+\frac{2(l_1^2+l_2^2-l_1l_2)+l_1+l_2}{6},$$ (5)
where $`\stackrel{~}{\omega }\equiv \omega /2\omega _\mathrm{c}`$. With the definitions of $`l_1`$ and $`l_2`$, we have $`(l_1+\stackrel{~}{\omega })^2<\alpha \le (l_1+1+\stackrel{~}{\omega })^2`$ and $`(l_2-\stackrel{~}{\omega })^2<\alpha \le (l_2+1-\stackrel{~}{\omega })^2`$, which lead to
$$(l_1+l_2)\mathrm{max}\left\{\frac{4l_1-2l_2-1}{6}+\stackrel{~}{\omega },\frac{4l_2-2l_1-1}{6}-\stackrel{~}{\omega }\right\}<\frac{\gamma }{l_1+l_2+1}$$ (6)
$$\le (l_1+l_2+2)\mathrm{min}\left\{\frac{4l_1-2l_2+3}{6}+\stackrel{~}{\omega },\frac{4l_2-2l_1+3}{6}-\stackrel{~}{\omega }\right\}.$$ (7)
These inequalities uniquely determine the pair of integers $`(l_1,l_2)`$ for a given set of $`\gamma `$ and $`\omega `$.
When the torus is at rest (i.e., $`\omega =0`$), Eq. (4) becomes $`n_l=(N/\gamma )(\alpha -l^2)`$, where $`\alpha `$ is given from Eq. (5) with $`l_1=l_2`$ as $`\alpha =\gamma /(2l_1+1)+l_1(l_1+1)/3`$, and Eq. (7) reduces to $`l_1(4l_1^2-1)/3<\gamma \le (l_1+1)[4(l_1+1)^2-1]/3`$. These inequalities uniquely determine the number $`2l_1+1`$ of macroscopically occupied AM states for a given $`\gamma `$. For example, $`\gamma \le 1`$, $`1<\gamma \le 10`$ and $`10<\gamma \le 35`$ correspond to $`2l_1+1=1`$, 3, and 5, respectively. Thus, there is a single BEC when $`\gamma \le 1`$. This condition agrees with the usual criterion for a metastable BEC to exist that is obtained for a parabolically confining potential using the Gross-Pitaevskii (GP) equation . A new finding in our analysis is that for $`\gamma >1`$ the BEC becomes hybridized over different AM states.
At the continuum limit $`\omega _\mathrm{c}\to 0`$ (i.e., $`R\to \mathrm{\infty }`$) with $`|g|N`$ held constant, $`\gamma `$ and $`l_1`$ become infinite with $`\alpha /\gamma \sim O(1/l_1)`$. It follows from the relation $`n_l=(N/\gamma )(\alpha -l^2)`$ that all the $`n_l`$’s become vanishingly small, of the order of $`N/l_1`$. Thus, no BEC exists for an infinite system, in accordance with the standard wisdom .
The analysis for the case of $`\omega \ne 0`$ is straightforward, and we describe here only the results that are relevant to later discussions.
1. The region in which a single BEC with $`n_l=N`$ exists is given from Eq. (7) with $`l_1=-l,l_2=l`$ by
$$0<\gamma \le -\left|\frac{\omega }{\omega _\mathrm{c}}-2l\right|+1.$$ (8)
When $`\omega /\omega _\mathrm{c}`$ is an odd integer, condition (8) can never be met, so no unique BEC can exist no matter how weak the attractive interaction.
2. The region in which two states with AM $`l`$ and $`l+1`$ are macroscopically occupied is given from Eq. (7) with $`l_1=-l,l_2=l+1`$ by
$$\left|\frac{\omega }{\omega _\mathrm{c}}-2l-1\right|<\gamma \le \mathrm{min}\left\{3\frac{\omega }{\omega _\mathrm{c}}-6l+1,-3\frac{\omega }{\omega _\mathrm{c}}+6l+7\right\},$$ (9)
and the corresponding distribution of bosons is given by
$$n_l=\frac{N}{2}\left[1-\frac{\omega -(2l+1)\omega _\mathrm{c}}{\gamma \omega _\mathrm{c}}\right],n_{l+1}=N-n_l.$$ (10)
3. In general, the region in which $`k`$ states with AM $`l,l+1,\mathrm{\dots },l+k-1`$ are macroscopically occupied is given from Eq. (7) with $`l_1=-l,l_2=l+k-1`$ by
$$k(k-1)\mathrm{max}\left\{\frac{-6l-2k+1}{6}+\frac{\omega }{2\omega _\mathrm{c}},\frac{6l+4k-5}{6}-\frac{\omega }{2\omega _\mathrm{c}}\right\}<\gamma $$ (11)
$$\le k(k+1)\mathrm{min}\left\{\frac{-6l-2k+5}{6}+\frac{\omega }{2\omega _\mathrm{c}},\frac{6l+4k-1}{6}-\frac{\omega }{2\omega _\mathrm{c}}\right\}.$$ (12)
The phase diagram is shown in Fig. 2. We have thus shown that there are regions of $`\gamma `$ and $`\omega `$ in which more than one AM state is macroscopically occupied. This prediction can be tested most directly by switching off the trap potential and letting the system expand. Due to Heisenberg’s uncertainty relation, the tight radial confinement of the trap causes the gas to expand more rapidly in that direction than in other ones, and the superposition of BECs having different AM should result in an interference pattern with broken axisymmetry .
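The same phase boundaries can be generated numerically. The following Python sketch is our own brute-force search, equivalent to conditions (6)-(7): it returns the occupied AM states and the fractions $`n_l/N`$ from Eqs. (4)-(5) for given $`\gamma `$ and $`\omega /\omega _\mathrm{c}`$.

```python
import numpy as np

def occupations(gamma, w, lmax=50):
    """HF occupations n_l/N for gamma = |g|N/(hbar*omega_c), w = omega/omega_c.

    Scans candidate ranges of occupied states -l1..l2 and keeps the one
    for which Eq. (4) is positive inside the range and non-positive just
    outside it; this is equivalent to conditions (6)-(7).
    """
    wt = w / 2.0                                   # omega / (2 omega_c)
    for L in range(1, 2 * lmax):                   # number of occupied states
        for l1 in range(-lmax, lmax):              # occupied range: -l1 .. l2
            l2 = L - 1 - l1
            ls = np.arange(-l1, l2 + 1)
            # Lagrange multiplier from sum_l n_l = N (equivalent to Eq. (5))
            alpha = (gamma + ((ls - wt) ** 2).sum()) / L
            n = (alpha - (ls - wt) ** 2) / gamma   # n_l / N, Eq. (4)
            if (n > 0).all() and alpha <= min((l1 + 1 + wt) ** 2,
                                              (l2 + 1 - wt) ** 2):
                return dict(zip(ls.tolist(), n.tolist()))
    raise RuntimeError("no solution found within lmax")
```

For instance, `occupations(0.5, 0.0)` returns `{0: 1.0}` (a single condensate), while `occupations(5.0, 0.0)` spreads the bosons over `l = -1, 0, 1`, consistent with the thresholds $`\gamma \le 1`$ and $`1<\gamma \le 10`$ quoted above.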
Partial quantization of circulation.— When we fix $`\gamma \equiv |g|N/\mathrm{\hbar }\omega _\mathrm{c}<1`$ and increase $`\omega `$ from $`0`$, we alternately pass through regions in which one or two AM states are macroscopically occupied (see Fig. 2), and in the latter regions the distribution of bosons between the two AM states changes continuously with $`\omega `$, as can be seen from Eq. (10). What happens then to the circulation $`\kappa `$ of the system? When a single BEC with AM $`l`$ exists, $`\kappa `$ is given by $`hl/M`$. When two AM states are macroscopically occupied, $`\kappa `$ should be given by $`h\langle l\rangle /M`$, where $`\langle l\rangle `$ is the ensemble-averaged value of the AM. To find this value, let us restrict ourselves to the region
$$|(\omega -\omega _\mathrm{c})/\omega _\mathrm{c}|<\gamma \le -3|(\omega -\omega _\mathrm{c})/\omega _\mathrm{c}|+4,$$ (13)
where two states with AM $`l=0`$ and $`l=1`$ are macroscopically occupied, and the number of bosons in each condensate is given from Eq. (10) by $`n_0=N/2-N(\omega -\omega _\mathrm{c})/(2\omega _\mathrm{c}\gamma )`$ and $`n_1=N-n_0`$. Hence the ensemble-averaged AM $`\langle l\rangle `$ is given by
$$\langle l\rangle =\frac{1}{2}+\frac{\omega -\omega _\mathrm{c}}{\omega _\mathrm{c}}\frac{\mathrm{\hbar }\omega _\mathrm{c}}{2|g|N}=\frac{1}{2}+\frac{\omega -\omega _\mathrm{c}}{\omega _\mathrm{c}}\frac{S}{8N|a|R},$$ (14)
which does not show any sharp transition (see Fig. 2), in sharp contrast with the case of repulsive interaction. Suppose now that we perform the Hess-Fairbank experiment for a frequency of rotation and a strength of interaction that satisfy condition (13). Then the AM will not be completely “expelled” even at absolute zero, but will have the nonzero value given by Eq. (14). Only when those parameters are in the region (8) with $`l=0`$ should the AM vanish at absolute zero.
Hybrid BECs vs. a phase-coherent single BEC.— We have shown within the HF approximation that hybrid BECs exist for some ranges of the parameters $`\gamma `$ and $`\omega /\omega _\mathrm{c}`$. Recently, Rokhsar has argued that hybrid or “fragmented” BECs are inherently unstable against the formation of a single BEC whose macroscopically occupied state is a linear combination of the “fragments” with definite relative phases . In our situation, the “fragments” refer to macroscopically occupied AM states. We first discuss the stability of binary BECs against forming such a phase-coherent single BEC. Because the properties of the system are periodic functions of $`\omega `$ with periodicity $`2\omega _\mathrm{c}`$, we may consider, without loss of generality, the region (13) in which two BECs with $`l=0`$ and $`l=1`$ coexist. The state vector of these binary BECs is given by
$`|\Psi_{\mathrm{HF}}\rangle=|n_0,n_1\rangle=\frac{1}{\sqrt{n_0!\,n_1!}}(\hat{c}_0^{\dagger})^{n_0}(\hat{c}_1^{\dagger})^{n_1}|\mathrm{vac}\rangle,`$ (15)
where $`n_0`$ and $`n_1`$ are the numbers of bosons in the $`l=0`$ and $`l=1`$ states, which are given by Eq. (10). To be compared with this state is a single macroscopically occupied state whose creation operator $`\hat{b}^{\dagger}`$ is given by $`\hat{b}^{\dagger}=\alpha\hat{c}_0^{\dagger}+\beta\hat{c}_1^{\dagger}`$, where $`\alpha`$ and $`\beta`$ are determined so as to minimize the total energy, subject to $`|\alpha|^2+|\beta|^2=1`$. The corresponding single BEC is given by
$`|\Psi_{\mathrm{single}}\rangle=\frac{(\hat{b}^{\dagger})^N}{\sqrt{N!}}|\mathrm{vac}\rangle=\frac{1}{\sqrt{N!}}(\alpha\hat{c}_0^{\dagger}+\beta\hat{c}_1^{\dagger})^N|\mathrm{vac}\rangle.`$ (16)
The crucial observation here is that when only two states are macroscopically occupied, the expectation value $`\langle\hat{H}\rangle_{\mathrm{single}}`$ of the Hamiltonian (1) over the state (16) does not contain any non-HF terms that are of the same order of magnitude as the HF terms, because of the conservation of AM. Therefore, the system cannot lower its energy by establishing a relative phase coherence between the different AM states. The minimum value of $`\langle\hat{H}\rangle_{\mathrm{single}}`$ is reached when
$`|\alpha|^2=\frac{1}{2}\left[1-\frac{\omega-\omega_\mathrm{c}}{\gamma\omega_\mathrm{c}}\,\frac{N}{N-1}\right],\qquad|\beta|^2=1-|\alpha|^2,`$ (17)
and by a straightforward calculation, we find that
$`\langle\hat{H}\rangle_{\mathrm{HF}}-\langle\hat{H}\rangle_{\mathrm{single}}\simeq\frac{|g|N}{4}\left[\left(\frac{\omega-\omega_\mathrm{c}}{\gamma\omega_\mathrm{c}}\right)^2-1\right]<0.`$ (18)
However, because the energy difference is of the order of $`1/N`$, the two states (15) and (16) are virtually degenerate. In real life there are inhomogeneities in the container "walls" etc., which break the exact axisymmetry. Such a perturbation, however weak, could stabilize the single coherent BEC relative to a "fragmented" one. To show this, consider a symmetry-breaking perturbation that mixes the $`l=0`$ state and the $`l=1`$ state: $`\hat{V}=t\hat{c}_0^{\dagger}\hat{c}_1+t^{*}\hat{c}_1^{\dagger}\hat{c}_0`$. It is easy to see that while $`\hat{V}`$ does not lower the energy of the system for the HF state ($`\langle\hat{V}\rangle_{\mathrm{HF}}=0`$), it does for the single coherent BEC; $`\langle\hat{V}\rangle_{\mathrm{single}}=2N\,\mathrm{Re}(\alpha^{*}\beta t)=-2N|\alpha^{*}\beta t|`$, provided that $`\arg\alpha-\arg\beta-\arg t=\pm\pi`$. Because both the $`l=0`$ and $`l=1`$ states are macroscopically occupied, i.e., $`\alpha\sim O(1)`$ and $`\beta\sim O(1)`$, $`\langle\hat{V}\rangle_{\mathrm{single}}`$ is extensive. The single coherent BEC can therefore become energetically favorable due to a (possibly infinitesimal) symmetry-breaking perturbation. It should be noted, however, that the plot of $`\langle l\rangle`$ versus $`\omega`$ in Fig. 2 remains basically unaltered, because it does not depend on whether or not a phase coherence is established between the two macroscopically occupied AM states.
The situation is different when more than two AM states are macroscopically occupied. Now the expectation value of the Hamiltonian contains non-HF terms that are of the same order of magnitude as the HF terms, so that the system can lower its energy by establishing a relative phase coherence without the need for a symmetry-breaking perturbation. To show this, let us consider the case $`1+3|\omega/\omega_\mathrm{c}|<\gamma<10-6|\omega/\omega_\mathrm{c}|`$, where three AM states $`l=-1,0,1`$ are macroscopically occupied. The hybrid BEC state is described by $`|\Psi_{\mathrm{HF}}\rangle=|n_{-1},n_0,n_1\rangle`$ and the corresponding single coherent BEC is described by $`|\Psi_{\mathrm{single}}\rangle=(\alpha\hat{c}_{-1}^{\dagger}+\beta\hat{c}_0^{\dagger}+\gamma\hat{c}_1^{\dagger})^N/\sqrt{N!}\,|\mathrm{vac}\rangle`$ with $`|\alpha|^2+|\beta|^2+|\gamma|^2=1`$. The expectation value of the Hamiltonian with respect to $`|\Psi_{\mathrm{HF}}\rangle`$ is given by
$`\langle\hat{H}\rangle_{\mathrm{HF}}=K_{-1}n_{-1}+K_0n_0+K_1n_1`$ (19)
$`-|g|(n_{-1}n_0+n_0n_1+n_1n_{-1})-|g|N(N-1)/2,`$ (20)
which is minimized when $`n_{\pm1}=N[1-(3\omega\pm\omega_\mathrm{c})/(\gamma\omega_\mathrm{c})]/3`$ and $`n_0=N[1+2/\gamma]/3`$. The expectation value of the Hamiltonian with respect to $`|\Psi_{\mathrm{single}}\rangle`$ is given by
$`\langle\hat{H}\rangle_{\mathrm{single}}=N(K_{-1}|\alpha|^2+K_0|\beta|^2+K_1|\gamma|^2)`$ (21)
$`-|g|N(N-1)(|\alpha|^2|\beta|^2+|\beta|^2|\gamma|^2+|\gamma|^2|\alpha|^2)`$ (22)
$`-|g|N(N-1)/2-|g|N(N-1)(\alpha\beta^{*2}\gamma+\alpha^{*}\beta^2\gamma^{*}).`$ (23)
Because the last two terms are phase-dependent and of the same order of magnitude as the remaining terms, it is clear that the single coherent BEC can have a lower energy than the fragmented BEC state by, e.g., the following choice of amplitudes: $`\alpha=\sqrt{n_{-1}/N}\,e^{i\theta_\alpha}`$, $`\beta=\sqrt{n_0/N}\,e^{i\theta_\beta}`$, $`\gamma=\sqrt{n_1/N}\,e^{i\theta_\gamma}`$ with the relative phase relation $`\theta_\alpha-2\theta_\beta+\theta_\gamma=0`$. Thus the ternary BEC state is unstable against the formation of a single coherent BEC. Similar mechanisms should work when more than three AM states are macroscopically occupied.
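To see explicitly why this phase relation is the optimal one, one can tabulate the phase-dependent piece of Eq. (23). The following sketch uses illustrative amplitudes (chosen only to satisfy the normalization), not values taken from the text.

```python
import numpy as np

# Phase-dependent term of Eq. (23), per |g| N (N-1):
#   -(alpha beta*^2 gamma + c.c.) = -2 |alpha||beta|^2|gamma| cos(chi),
# with chi = theta_alpha - 2*theta_beta + theta_gamma.
a, b, c = 0.5, np.sqrt(0.5), 0.5        # illustrative |alpha|, |beta|, |gamma|
chi = np.linspace(-np.pi, np.pi, 9)
term = -2.0 * a * b**2 * c * np.cos(chi)
print(np.round(term, 3))                # most negative at chi = 0
```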
To summarize, we have studied the ground-state properties of a rotating BEC with attractive interaction confined in a quasi-one-dimensional torus. When the condition (8) is met, only one AM state is macroscopically occupied. When the condition (9) is met, two BECs with different AM can, in principle, coexist. However, any deviation from the exact axisymmetry is shown to stabilize a single coherent BEC relative to a "fragmented" one. The plateaus of quantized circulation appear if $`\gamma<1`$, but the lengths of the plateaus are reduced. In other regions of the parameters $`\gamma`$ and $`\omega`$, more than two AM states are macroscopically occupied, and there non-HF terms stabilize a single coherent BEC even in the presence of exact axisymmetry.
This work was supported in part by the National Science Foundation under grant no. DMR-96-14133 and by a Grant-in-Aid for Scientific Research (Grant No. 08247105) by the Ministry of Education, Science, Sports, and Culture of Japan.
# Plateaux Transitions in the Pairing Model: Topology and Selection Rule
## Abstract
Based on the two-dimensional lattice fermion model, we discuss transitions between different pairing states. Each phase is labeled by an integer which is a topological invariant and characterized by vortices of the Bloch wavefunction. The transitions between phases with different integers obey a selection rule. Basic properties of the edge states are revealed. They reflect the topological character of the bulk. Transitions driven by randomness are also discussed numerically.
Quantum phase transitions between different superconducting states have attracted much interest recently. In refs. , for example, a possible realization in a high-$`T_c`$ superconductor was proposed, which is accompanied by breaking of the time-reversal symmetry $`𝒯`$. Further, there is a recent observation that it has some similarity to the plateaux transition in the integer quantum Hall effect (IQHE) . One of the claims is that each phase is labeled by an integer (an analogue of the Hall conductance in the IQHE) and there can be transitions between phases with different integers.
In this paper, based on the lattice fermion model, we investigate the problem. The integer for each phase is defined by a topological invariant of the U(1) fiber bundle (the Chern number) . The U(1) fiber bundle is a geometrical object which is composed of the Brillouin zone (torus) and the Bloch wavefunctions (fiber). Due to its topological stability, a singularity in the U(1) fiber bundle necessarily occurs with the change of the Chern number. The singularity is identified with the energy-gap closing . The Chern number is closely related to the zero points (vortices) of the Bloch wavefunction. Focusing on the motion of the vortices near the singularity, we give a general proof of a selection rule of the transitions. Due to the intrinsic symmetry of the system, the selection rule differs from that of the IQHE. We also investigate the properties of the edge states and how they reflect the topological character of the bulk. The transition due to the change of randomness strength is a typical example of the problem. As emphasized in ref. , the symmetry effect leads to a new universality class and it is interesting as the Anderson localization problem. We also discuss the disorder-driven transition numerically.
The Hamiltonian is
$`\mathcal{H}=\sum_{l,m}\mathbf{c}_l^{\dagger}\mathbf{H}_{lm}\mathbf{c}_m=\sum_{l,m}\mathbf{c}_l^{\dagger}\left(\begin{array}{cc}t_{lm}&\Delta_{lm}\\ \Delta_{ml}^{*}&-t_{ml}\end{array}\right)\mathbf{c}_m`$ (1)
where $`\mathbf{c}_n^{\dagger}=(c_{n\uparrow}^{\dagger},\,c_{n\downarrow})`$, $`\mathbf{c}_n={}^{t}(c_{n\uparrow},\,c_{n\downarrow}^{\dagger})`$ and $`n=(n_x,n_y)\in\mathbf{Z}^2`$. This is an extension of the lattice fermion model discussed in connection with the plateaux transition in the IQHE . Here we comment on the relation between this Hamiltonian and superconductivity. Under the unitary transformation $`c_{n\uparrow}\to d_{n\uparrow}`$, $`c_{n\downarrow}^{\dagger}\to d_{n\downarrow}`$ (for all $`n`$), the Hamiltonian (1) is equivalent to $`\mathcal{H}=\sum_{l,m}[d_{l\uparrow}^{\dagger}t_{lm}d_{m\uparrow}+d_{l\downarrow}^{\dagger}t_{lm}d_{m\downarrow}+d_{l\uparrow}^{\dagger}\Delta_{lm}d_{m\downarrow}^{\dagger}+d_{m\downarrow}\Delta_{lm}^{*}d_{l\uparrow}].`$ This is the pairing model for singlet superconductivity. In the context of superconductivity, the pair potential $`\Delta_{lm}`$ should be determined by a self-consistent equation. Although this effect is interesting in itself, it is beyond the scope of this paper. Further, the conditions $`t_{lm}^{*}=t_{ml}`$ and $`\Delta_{lm}=\Delta_{ml}`$ are imposed; they correspond to hermiticity and SU(2) symmetry, respectively. The SU(2) symmetry leads to the condition
$`(\sigma_y\mathbf{H}_{lm}\sigma_y)^{*}=-\mathbf{H}_{lm}.`$ (2)
Due to the SU(2) symmetry, we can restrict ourselves to the sector $`\sum_n\mathbf{d}_n^{\dagger}\sigma^z\mathbf{d}_n=0`$ without loss of generality. This is equivalent to the half-filled condition for the Hamiltonian (1), which we impose in the following arguments.
Now let us define a topological invariant (the Chern number) for our model. It is a key concept in the following arguments. Put the system on a torus, which is $`L_x\times L_y`$ and periodic in both the $`x`$ and $`y`$ directions. Define the Fourier transformation by $`\mathbf{c}_n=(1/\sqrt{L_xL_y})\sum_k e^{ikn}\mathbf{c}(k)`$, where $`k=(k_x,k_y)`$ lies in the Brillouin zone $`(-\pi,\pi]\times(-\pi,\pi]`$. Assuming $`\mathbf{H}_{lm}`$ to be invariant under translations, we obtain $`\mathcal{H}=\sum_k\mathbf{c}(k)^{\dagger}\mathbf{H}(k)\mathbf{c}(k)`$, where $`\mathbf{H}(k)=\sum_{(l-m)}e^{-ik(l-m)}\mathbf{H}_{lm}`$. The $`\mathbf{H}(k)`$ has two eigenvectors and eigenvalues. They correspond to the Bloch wavefunctions and the energy bands, respectively. To satisfy the half-filled condition, the lower band is occupied in the ground state. We denote the Bloch wavefunction of the lower band by $`{}^{t}(a(k),\,b(k))`$. Then the topological invariant (the Chern number of the U(1) fibre bundle) is defined as:
$`C=\frac{1}{2\pi i}\int dk\;\hat{z}\cdot(\mathbf{\nabla}_k\times\boldsymbol{A})`$ (3)
where $`\boldsymbol{A}=(a^{*}(k),\,b^{*}(k))\,\mathbf{\nabla}_k\,{}^{t}(a(k),\,b(k))`$ and $`\hat{z}=(0,0,1)`$ . The integration $`\int dk`$ is over the Brillouin zone, which can be identified with a torus. For simplicity, we assumed $`\mathbf{H}_{lm}`$ to be invariant under translations. However, a generalization to a multi-band system (including a random system) is possible. It is crucial for the following arguments to rewrite the above formula in terms of the zero points of the Bloch wavefunction (vortices) and their winding numbers (charges). To be explicit, let us perform the gauge fixing of the Bloch wavefunction for the lower energy band. We note that the Chern number itself does not depend on the gauge fixing. To define the gauge, we use the rule $`a(k)=1`$ and introduce the notation $`b(k)=b^{\prime}(k)e^{i\zeta(k)}`$ ($`b^{\prime}(k)\in\mathbf{R}`$). An ambiguity in the gauge fixing occurs when $`a(k)=0`$. Around such a zero point (vortex) in the Brillouin zone, it is necessary to change the way of the gauge fixing, for example, to $`b(k)=1`$. Then the Chern number (3) is rewritten as
$`C=\sum_l C_l,\qquad C_l=\frac{1}{2\pi}\oint_{\partial R_l}dk\cdot\mathbf{\nabla}_k\zeta(k)`$ (4)
where the summation is taken over all vortices of $`a(k)`$ and $`R_l`$ is a region surrounding the $`l`$-th vortex which does not contain other zeros of either $`a(k)`$ or $`b(k)`$. Here $`C_l`$ is an integer and we call it the charge of the $`l`$-th vortex.
Let us discuss the $`d_{x^2-y^2}+id_{xy}`$ model on a torus as an example . The model is defined by $`t_{n+e_x,n}=t_{n+e_y,n}=t`$, $`\Delta_{n+e_x,n}=-\Delta_{n+e_y,n}=\Delta_{x^2-y^2}`$, $`\Delta_{n+e_x+e_y,n}=-\Delta_{n-e_x+e_y,n}=i\Delta_{xy}`$ for all $`n`$ ($`e_x=(1,0)`$, $`e_y=(0,1)`$, $`t>0`$, $`\Delta_{x^2-y^2},\Delta_{xy}\in\mathbf{R}`$), and the other matrix elements are zero. Then $`\mathbf{H}(k)=\left(\begin{array}{cc}A(k)&B(k)\\ B^{*}(k)&-A(k)\end{array}\right)`$, where $`A(k)=2t(\cos k_x+\cos k_y)`$ and $`B(k)=2\Delta_{x^2-y^2}(\cos k_x-\cos k_y)+2i\Delta_{xy}[\cos(k_x+k_y)-\cos(k_x-k_y)]`$. The energy spectrum is given by $`E=\pm\sqrt{A(k)^2+|B(k)|^2}`$. When $`\Delta_{xy}=0`$, the upper band and the lower band touch at four points $`(\pm\pi/2,\pm\pi/2)`$ in the Brillouin zone. The low-lying excitations around the gap-closing points are described by massless Dirac fermions. Turning on a finite $`\Delta_{xy}`$ generates masses for the Dirac fermions. The vortex position is given by $`a(k)=0`$; it is $`(k_x,k_y)=(0,0)`$. Using $`B(k)=B^{\prime}(k)e^{i\zeta(k)}`$ ($`B^{\prime}(k)\in\mathbf{R}`$), the charge of the vortex is $`\frac{1}{2\pi}\oint_{(0,0)}dk\cdot\mathbf{\nabla}_k\zeta(k)=2\,\mathrm{sgn}(\Delta_{xy}/\Delta_{x^2-y^2})`$. Since there is no other vortex, the Chern number $`C`$ is given by
$`C=2\,\mathrm{sgn}(\Delta_{xy}/\Delta_{x^2-y^2}).`$ (5)
Here we note that, as in the case of the IQHE on the lattice, the Chern number can take various integer values in general cases e.g. a multi-band system and a model with a different $`𝒯`$-broken pairing symmetry.
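For readers who wish to reproduce such values, the sketch below evaluates $`C`$ numerically for the Bloch Hamiltonian of the $`d_{x^2-y^2}+id_{xy}`$ model, using a standard plaquette discretization of Eq. (3) on a finite $`k`$-grid rather than the vortex counting of Eq. (4); the parameter values are illustrative.

```python
import numpy as np

def hamiltonian(kx, ky, t=1.0, d1=1.0, d2=0.5):
    # Bloch Hamiltonian of the d_{x^2-y^2} + i d_{xy} model quoted above
    A = 2*t*(np.cos(kx) + np.cos(ky))
    B = (2*d1*(np.cos(kx) - np.cos(ky))
         + 2j*d2*(np.cos(kx + ky) - np.cos(kx - ky)))
    return np.array([[A, B], [np.conj(B), -A]])

def chern_number(nk=40, **pars):
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    u = np.empty((nk, nk, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(hamiltonian(kx, ky, **pars))
            u[i, j] = v[:, 0]              # lower (occupied) band
    c = 0.0
    for i in range(nk):                    # lattice field strength per plaquette
        for j in range(nk):
            u1, u2 = u[i, j], u[(i+1) % nk, j]
            u3, u4 = u[(i+1) % nk, (j+1) % nk], u[i, (j+1) % nk]
            c += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                          * np.vdot(u3, u4) * np.vdot(u4, u1))
    return c / (2*np.pi)

print(round(chern_number(), 3))   # close to +/-2 while d1*d2 != 0
```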
As in the QHE , the edge states play a crucial role in the problem . The edge states reflect the bulk properties, and it is possible to detect the topological character of the bulk through the edge states. In order to discuss the edge states, put the system on a cylinder which is $`L_x\times L_y`$ and periodic only in the $`y`$ direction. Further we impose an open boundary condition in the $`x`$ direction. Define the Fourier transformation by $`\mathbf{c}_n=(1/\sqrt{L_y})\sum_{k_y}e^{ik_yn_y}\mathbf{c}_{n_x}(k_y)`$, where $`k_y`$ lies in $`(-\pi,\pi]`$. Then $`\mathcal{H}=\sum_{k_y}\sum_{l_x,m_x}\mathbf{c}_{l_x}(k_y)^{\dagger}\mathbf{H}_{l_xm_x}(k_y)\mathbf{c}_{m_x}(k_y)`$, where $`\mathbf{H}_{lm}`$ is assumed to be invariant under translations in the $`y`$ direction and $`\mathbf{H}_{l_xm_x}(k_y)=\sum_{(l_y-m_y)}e^{-ik_y(l_y-m_y)}\mathbf{H}_{lm}`$. The relation (2) can be rewritten as $`(\sigma_y\mathbf{H}_{l_xm_x}(k_y)\sigma_y)^{*}=-\mathbf{H}_{l_xm_x}(-k_y)`$. Define an eigenvector $`\mathbf{u}`$ by $`\sum_{m_x}\mathbf{H}_{l_xm_x}(k_y)\mathbf{u}_{m_x}=E\,\mathbf{u}_{l_x}`$. Now we show that there are two basic operations $`\mathcal{P}`$ and $`\mathcal{Q}`$ on the vector. They are defined by $`(\mathcal{P}\mathbf{u})_{n_x}=(\sigma_y\mathbf{u}_{n_x})^{*}`$ and $`(\mathcal{Q}\mathbf{u})_{n_x}=\mathbf{u}_{L_x-n_x+1}`$. Then $`\sum_{m_x}\mathbf{H}_{l_xm_x}(-k_y)(\mathcal{P}\mathbf{u})_{m_x}=(-E)(\mathcal{P}\mathbf{u})_{l_x}`$. Further, when $`t_{lm}`$ is real and uniform, we can obtain another relation $`\sum_{m_x}\mathbf{H}_{l_xm_x}(k_y)(\mathcal{Q}\mathbf{u})_{m_x}=E\,(\mathcal{Q}\mathbf{u})_{l_x}`$. Based on this symmetry, we shall discuss basic properties of the edge states. Consider the case when $`\mathbf{u}`$ is an eigenstate which is localized spatially on the left (or right) boundary, i.e., a left (or right)-hand edge state. Then, from the above argument, $`\mathbf{u}`$, $`\mathcal{P}\mathbf{u}`$, $`\mathcal{Q}\mathbf{u}`$, $`\mathcal{P}\mathcal{Q}\mathbf{u}`$ are classified into two left-hand edge states and two right-hand edge states. Now, as in the argument of the IQHE, let us introduce a fictitious flux through the cylinder and change it from $`0`$ to one flux quantum $`hc/e`$. Due to the symmetry, the number of edge states which move from one boundary to the other is necessarily even. We shall consider the $`d_{x^2-y^2}+id_{xy}`$ model on a cylinder as an example. In Fig. 1, the energy spectrum is shown. It can be confirmed that the energy spectra of the edge states appear in pairs, and that the number of edge states which move from one boundary to the other as the fictitious flux is added is even and coincides with the Chern number. In other words, the edge states directly reflect the topological character of the bulk.
As discussed above, each phase in our model is labeled by the Chern number. The vortices move as the Hamiltonian is perturbed. The Chern number, however, does not change in general. Due to the topological stability, a change of the Chern number is necessarily accompanied by a singularity in the U(1) fiber bundle. The singularity is identified with the energy-gap closing. Focusing on the singularity, we can prove a selection rule from a general point of view (see also ). The selection rule is closely related to the SU(2) symmetry of our model. Let us introduce a parameter $`g`$ in the Hamiltonian. Assume that, when $`g=g_0`$, the energy gap closes at several zero-energy points in the Brillouin zone. Next focus on the region near one of the gap-closing points $`(k_x^0,k_y^0)`$, i.e., $`\mathbf{p}={}^{t}(k_x-k_x^0,\,k_y-k_y^0,\,g-g_0)\to\mathbf{0}`$. Then the leading part of the Hamiltonian is, generally, given by
$`\mathbf{H}_0(\mathbf{p})=\mathbf{1}\,\mathbf{v}_0\mathbf{p}+(\sigma_x,\sigma_y,\sigma_z)\,\mathbf{v}\,\mathbf{p}`$ (6)
where $`\mathbf{v}_0`$ is a $`1\times3`$ vector, $`\sigma_{x(y,z)}`$ is a $`2\times2`$ Pauli matrix and $`\mathbf{v}`$ is a $`3\times3`$ matrix. Now let us introduce the standard form, which is convenient for the following arguments. Choosing a unitary transformation $`\mathcal{U}`$ appropriately, one can obtain $`\mathcal{U}\mathbf{H}_0(\mathbf{p})\mathcal{U}^{-1}=\mathbf{1}\,\mathbf{v}_0\mathbf{p}+(\sigma_x,\sigma_y,\sigma_z)\,\mathbf{D}\mathbf{T}\mathbf{p}`$, where $`\mathbf{D}`$ is $`\mathrm{diag}(1,1,\mathrm{sgn}(\det\mathbf{v}))`$ and $`\mathbf{T}`$ is an upper triangular matrix with positive diagonal elements. Let us perform $`\mathbf{T}\mathbf{p}\to\mathbf{p}`$ (a parity-conserving affine transformation on $`(k_x,k_y)`$ and a rescaling of $`g`$) and the redefinition $`\mathbf{v}_0\mathbf{T}^{-1}\to\mathbf{v}_0`$. Finally the standard form is obtained as
$`\mathbf{H}_1(\mathbf{p})=\mathbf{1}\,\mathbf{v}_0\mathbf{p}+\sigma_xp_x+\sigma_yp_y+\sigma_zp_z\,\mathrm{sgn}(\det\mathbf{v}).`$ (7)
This is equivalent to the Hamiltonian $`\mathbf{H}(k)`$ with $`A(k)=p_z\,\mathrm{sgn}(\det\mathbf{v})`$ and $`B(k)=p_x-ip_y`$. Performing the same analysis as before, one finds that a vortex moves from one band to the other at the gap-closing, and the conclusion is that the change of the Chern number is practically determined by $`\mathrm{sgn}(\det\mathbf{v})`$: the change is $`+1`$ or $`-1`$ . Next let us consider the dual gap-closing point $`(-k_x^0,-k_y^0)`$, which exists due to the symmetry. Here the relation derived from (2) plays a crucial role; it is given by $`\mathbf{H}(-k)=-(\sigma_y\mathbf{H}(k)\sigma_y)^{*}`$. Therefore $`\mathbf{H}(-k_x^0-p_x,-k_y^0-p_y,g)=-(\sigma_y\mathbf{H}(k_x^0+p_x,k_y^0+p_y,g)\sigma_y)^{*}\simeq-\mathbf{1}\,\mathbf{v}_0\mathbf{p}+(\sigma_x,\sigma_y,\sigma_z)\,\mathbf{v}\,\mathbf{p}.`$ To summarize, the linearized Hamiltonian near the dual gap-closing point $`(-k_x^0,-k_y^0)`$ is given by
$`\mathbf{H}(-k_x^0+p_x,-k_y^0+p_y,g)\simeq\mathbf{1}\,\mathbf{v}_0\mathbf{p}+(\sigma_x,\sigma_y,\sigma_z)\,\mathbf{w}\,\mathbf{p}`$ (8)
where $`\mathbf{w}=\mathbf{v}\,\mathrm{diag}(-1,-1,1)`$. Since $`\det\,\mathrm{diag}(-1,-1,1)=+1`$, this gives $`\mathrm{sgn}(\det\mathbf{v})=\mathrm{sgn}(\det\mathbf{w})`$. Therefore the changes of the Chern number due to gap-closings always occur in pairs with the same sign, and the total change is generally $`\Delta C=\pm2`$. This is the selection rule. On the other hand, in the absence of the relation (2) (or the SU(2) symmetry), the above argument does not hold, and one obtains the rule $`\Delta C=\pm1`$. Now we note the results of ref. , where a network model with the same symmetry as our model was investigated. In spite of the fact that their model is different from the lattice fermion model considered here, our selection rule still applies: this suggests universality. Assuming that the system can be brought into a phase with vanishing Chern number by tuning parameters in the Hamiltonian, the selection rule implies that the Chern number is always even, which supports the result based on the edge states.
Finally, we comment on the disorder-driven transition based on the random $`d_{x^2-y^2}+id_{xy}`$ model. It is defined by $`t_{ij}=t_{ij}^0+\delta t_{ij}`$ and $`\Delta_{ij}=\Delta_{ij}^0+\delta\Delta_{ij}`$, where $`\delta t_{ij}`$ and $`\delta\Delta_{ij}`$ denote the randomness. Here hermiticity and the SU(2) symmetry are imposed on $`t_{ij}`$ and $`\Delta_{ij}`$, respectively. The model has an intimate connection with the random Dirac fermion problem . It is to be noted that the SU(2) symmetry is kept even in the presence of randomness, and the model is interesting as an Anderson localization problem. As discussed above, the Chern number is $`\pm2`$ in the absence of randomness. On the other hand, in the presence of sufficiently strong randomness, it is expected that all the vortices disappear through pair annihilation of vortices with opposite charges and the Chern number vanishes . By numerical diagonalization, we treated the disorder-driven transition for $`\delta t_{ij}=\delta_{ij}f_i`$, $`\delta\Delta_{ij}=\delta_{ij}g_i`$, where the $`f`$'s and $`g`$'s are uniform random numbers chosen from $`[-W/2,W/2]`$. The model was also studied extensively in ref. . In Fig. 2, the density of states is shown in the case $`\Delta_{xy}^0\neq0`$. It can be seen that the two energy bands become closer and finally touch as the randomness strength is increased. The transition $`C=\pm2\to0`$ with the gap-closing is a natural consequence of the selection rule. The exploration of the global phase diagram and the field-theoretical description are left as future problems.
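A minimal real-space sketch of this random model is given below. The on-site form of the randomness follows the text; the lattice size, parameter values, periodic boundary conditions, and the printed diagnostic are illustrative choices.

```python
import numpy as np

def random_did_bdg(L=10, t=1.0, d1=1.0, d2=0.5, W=1.0, seed=0):
    # Sketch of Eq. (1) for the random d_{x^2-y^2}+id_{xy} model on an LxL
    # periodic lattice, with delta t_ij = delta_ij f_i and
    # delta Delta_ij = delta_ij g_i, f_i and g_i uniform in [-W/2, W/2].
    rng = np.random.default_rng(seed)
    n = L*L
    ix = lambda x, y: (x % L)*L + (y % L)
    T = np.zeros((n, n)); D = np.zeros((n, n), dtype=complex)
    np.fill_diagonal(T, rng.uniform(-W/2, W/2, n))   # random on-site t
    np.fill_diagonal(D, rng.uniform(-W/2, W/2, n))   # random on-site Delta
    for x in range(L):
        for y in range(L):
            m = ix(x, y)
            for mm, amp in [(ix(x+1, y), t), (ix(x, y+1), t)]:
                T[m, mm] = T[mm, m] = amp            # hopping
            for mm, amp in [(ix(x+1, y), d1), (ix(x, y+1), -d1),
                            (ix(x+1, y+1), 1j*d2), (ix(x-1, y+1), -1j*d2)]:
                D[m, mm] = D[mm, m] = amp            # symmetric pairing
    return np.block([[T, D], [D.conj().T, -T]])

E = np.linalg.eigvalsh(random_did_bdg())
print("smallest |E| (gap indicator):", round(float(np.min(np.abs(E))), 4))
```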
The authors thank T. Fukui for valuable discussions and P.-A. Bares for careful reading of the manuscript. This work was supported in part by Grant-in-Aid from the Ministry of Education, Science and Culture of Japan and also Kawakami Memorial Foundation. The computation has been partly done using the facilities of the Supercomputer Center, ISSP, University of Tokyo.
## 1 Introduction
Opposite charges attract in vacuum. Two ionizable objects in an electrolyte such as water form a more complex system, but nevertheless, in many situations a simple rule of thumb applies: As two planar surfaces approach from infinity they initially attract if oppositely charged, with a screened Coulomb potential .
The analysis of this paper was motivated by two sets of experimental observations that defy the familiar rule above . Bilayer vesicles were prepared from a mixture of cationic (positively charged) and neutral surfactants. In one case, vesicles were allowed to adhere to a negatively charged substrate , while in the other, negatively charged colloidal particles were introduced into suspensions of vesicles and the resulting self-assembled structures monitored . The puzzling observation was that despite the high charge on the vesicles, they were not uniformly attractive to the particles or surfaces, but instead separated macroscopically into adhesive and non-adhesive zones (Fig. 1). The vesicle diameter was typically 20$`\mu `$m; the Debye screening length $`1/\kappa `$ was much smaller, between 1 and 10 nm . Because membranes in living cells also include bilayers made from mixtures of negatively charged and neutral lipids , phenomena like the ones reported here might occur generally.
The observed behavior is disturbingly at odds with established intuition and raises several questions: Why should adhesion to one zone of the membrane affect adhesion hundreds of screening lengths away? More urgently, why should electrostatic adhesion saturate in this way? How can oppositely charged objects repel?
The key to the puzzle is a subtle interplay between the entropic and electrostatic effects of the mobile counterions and laterally mobile lipids, which leads to a thermodynamic instability: The equilibrium state involves the coexistence of adhesive and repulsive zones in the membrane. The latter repel incoming negative objects by recruiting negative counterions on the interior face. The effects of demixing on membrane adhesion have been studied recently by other groups (for example, see ). Our mechanism differs from earlier ones by including cooperative effects between counterions on both sides of an impermeable membrane. We will show how this effect can lead to adhesion saturation. A full discussion will appear elsewhere .
## 2 Physical picture
At least three possible equilibrium states could result when a mixed bilayer vesicle encounters charged surfaces: (i) The vesicle composition could remain uniform, and thus be uniformly attractive to the approaching surfaces. In this case, the vesicle should end up completely covered by particles (or tense and tightly adhering to the substrate). (ii) Alternately, binding could cause total lateral demixing of the charged and neutral surfactants in the membrane, and lead to a charge-depleted zone with no attraction to negative objects. One can easily show, however, that totally eliminating charged surfactants from the latter zone comes at a high cost in lateral distribution entropy; instead, enough residual charge will always remain to make the depleted zone quite attractive. Thus one would expect at most (iii) a coexistence between high charge density (tight-adhesion) and low charge density (weak-adhesion) zones.
In the experiments mentioned, however, often none of the above three expectations was realized: instead, adhesion saturated at some optimal coverage. Once this point was reached, the colloidal particles in are not seen to leave or join the vesicle. Indeed, particles in suspension are seen to approach, then bounce off, the vesicle. Similarly, the experiment in found “blistering” in the contact region instead of uniform tight contact.
To confront this paradox, we begin with Parsegian and Gingell's classic analysis of the attraction of oppositely charged, planar surfaces . The authors studied the interaction of two infinite parallel planes with fixed bound surface charge densities $`\sigma_+>0`$ and $`\sigma_-<0`$. Between the planes, a gap of width $`\ell`$ contains water, a dielectric medium with dielectric constant $`\epsilon=80\epsilon_0`$, with mobile point charges (ions) supplied by an external reservoir. We will consider all ions to be univalent, as in the experiment of . The reservoir has a fixed density $`\widehat{n}`$ of positive ions and an equal number of negative ions. To either side of the gap lie infinite dielectrics with no free charges.
In this situation Parsegian and Gingell found that oppositely charged surfaces initially attract as they are brought in from infinite separation. The physical mechanism for the attraction is revealing: as the two surfaces' counterion clouds begin to overlap, a positive counterion from the negative surface can join a negative counterion from the positive surface; the pair then escapes to the infinite reservoir, gaining entropy, without any net separation of charge. The process continues as the surfaces approach, until one counterion cloud is completely exhausted. If $`|\sigma_-|>\sigma_+`$, then at this point only positive counterions remain in the gap. These residual ions cannot escape, because that would leave nonzero net charge in the gap (in the assumed infinite planar geometry, net charge carries an infinite cost in electric field energy).
At some separation $`\ell_*`$ the osmotic pressure of the trapped residual counterions balances the electrostatic attraction of the plates. Nevertheless, the total free energy change for bringing the plates together is always negative: oppositely charged surfaces always adhere . The adhesion energy per area is given by $`W\equiv f(\ell_*)-f(\infty)=-(\sigma_+)^2/\epsilon\kappa`$, where $`\kappa`$ is the inverse screening length. Remarkably, $`W`$ is completely independent of the majority charge density $`\sigma_-`$ . In light of the above physical picture, we can readily interpret that fact: the total counterion release is limited by the smaller of the two counterion populations.
Thus the physical situation studied by Parsegian and Gingell does not exhibit adhesion saturation. Fortunately, the general situation we wish to study differs in three key ways from theirs (see Fig. 2, A and B).
1. One of the surfaces is an infinite dielectric of fixed charge density $`\sigma_-`$, as above, but the other contains a fluid mixture of charged and neutral elements. Thus the latter's charge density $`\sigma_+`$ may vary, with its surface average fixed to some value $`\sigma_{+,\mathrm{av}}`$.
2. The positive surface will be assumed to be a membrane bounding a closed vesicle of surface area $`A`$, not the boundary of a solid dielectric body. The membrane separates two regions with the same salt concentration $`\widehat{n}`$ far from the membrane.
3. We will study coexistence of two zones on the membrane: an attachment zone “a” similar to the one studied by Parsegian and Gingell, and a second zone “b”, which will eventually prove to be unattached (we do not assume this).
Because the sizes of the colloidal spheres and the vesicle are much bigger than the screening length, our geometry is essentially planar (Fig. 2B). (This idealization is self-consistent, as the equilibrium spacing $`\ell_*`$ found below will prove to be of order the screening length.) For the same reason, we can neglect fringe fields at the boundaries of the zones "a" and "b".
Before proceeding with any calculations, we now sketch the new physics which can arise in the general situation described by points 1 to 3 above. We will for concreteness suppose that half the vesicle’s counterions, with charge $`A\sigma _{+,\mathrm{av}}/2`$, are confined to the interior of the vesicle and half to the exterior.
One may be tempted to ignore the interior counterions altogether, in light of the fact that bilayer membranes are highly impermeable to ions . Indeed, counterions trapped inside the vesicle cannot participate directly in the mechanism described above for electrostatic adhesion, since they cannot pair with exterior counterions and escape together to infinity. Accordingly, let us momentarily suppose that the density of interior counterions is fixed.
In this situation (fixed interior counterions), Nardi et al. noted that zone “a” can recruit additional charged surfactants from zone “b”, in order to liberate their counterions and improve the adhesion (Fig. 2B and C). The entropic tendency of the charged and uncharged surfactants to remain mixed opposes this redistribution, however, and the resulting adhesion is a compromise between the two effects. Zone “b” will still have nonnegative charge, and will still remain quite attractive to further colloidal particles; there is no adhesion saturation.
The argument in the previous paragraph, however, neglects the ability of interior counterions to move laterally. As shown in Fig. 2C, the approaching exterior negative object will push negative interior counterions out of zone “a” and into zone “b”, where they can overwhelm the residual positive membrane charge and effectively reverse its sign.
This rearrangement liberates exterior counterions from both zones as shown in Fig. 2D, enhancing the adhesion. The capacitive energy cost of separating charge across a membrane in this way is significant, because of the low dielectric constant $`\epsilon_\mathrm{m}\approx2\epsilon_0`$ of the hydrocarbon tails of lipids and other surfactants. Nevertheless, the cost is initially zero, being proportional to the square of the charge separated, and hence there will always be some lateral rearrangement inside the vesicle, as indicated by the dashed arrows in Fig. 2C; Fig. 2D is the result. If this rearrangement reverses the effective membrane charge, it will lead to active repulsion in the nonadhesion "b" zone.
We will now show that the scenario just sketched can actually occur under a broad range of experimentally-realizable conditions.
## 3 Calculations
Consider the coexistence of two homogeneous zones "a" and "b" with areas $`\gamma A`$ and $`(1-\gamma)A`$ respectively. We will for simplicity assume that all surfactants have the same fixed area per headgroup $`a_0`$ and charge either $`+e`$ or 0. We must compute the equilibrium value $`\gamma_*`$ of the fractional area coverage in terms of the ambient salt concentration $`\widehat{n}`$, the dielectric charge density $`\sigma_-`$, the average membrane composition $`\sigma_{+,\mathrm{av}}`$, and the headgroup area $`a_0`$. We will show that the effective charge in zone "b" is negative.
Examining Fig. 2, we see that each zone freely exchanges two independent conserved quantities with the other. We may take these to be the net counterion charge $`Q_1`$ below the membrane and the total surfactant charge $`Q_+`$ of the membrane itself, with corresponding areal charge densities $`\sigma_1`$ and $`\sigma_+`$ respectively . We express all densities in dimensionless form, letting $`\sigma_{\mathrm{max}}=2e/a_0`$ and $`\overline{\sigma}_1=\sigma_1/\sigma_{\mathrm{max}}`$, etc. Thus $`\overline{\sigma}_+`$ must obey the important condition $`0<\overline{\sigma}_+<1`$, while $`\overline{\sigma}_1`$ is in principle unbounded.
### 3.1 Thin membrane limit
To make the formulæ as transparent as possible, we first study the hypothetical case of a very thin membrane. We must compute the free energy density of a homogeneous region at fixed charge density, and then apply the usual phase coexistence rules. To get $`f`$, we simply add three terms, letting $`f=f_1+f_0+f_\mathrm{m}`$, where
1. $`f_1`$ is the free energy of the half-infinite space inside the vesicle. This space sees a plane of charge density $`\sigma_1`$, so in Debye–Hückel theory its free energy cost is $`f_1=(\sigma_{\mathrm{max}}^2/2\kappa\epsilon)\,\overline{\sigma}_1^{\,2}`$.
2. $`f_0`$ is the free energy of the gap region. This space sees a plane of charge $`\sigma_-`$, a gap of width $`\ell`$, and another plane of total charge $`\sigma_\mathrm{t}\equiv\sigma_++\sigma_1`$. Minimizing the free energy over $`\ell`$ gives in Debye–Hückel theory $`f_0=\frac{\sigma_{\mathrm{max}}^2}{2\kappa\epsilon}\left[\overline{\sigma}_-^{\,2}+\overline{\sigma}_1^{\,2}+\overline{\sigma}_\mathrm{t}^{\,2}+\overline{W}\right]`$. As discussed above, the nondimensional adhesion energy $`\overline{W}`$ equals $`-2\overline{\sigma}_-^{\,2}`$ if $`\overline{\sigma}_\mathrm{t}>|\overline{\sigma}_-|`$, $`-2\overline{\sigma}_\mathrm{t}^{\,2}`$ if $`0<\overline{\sigma}_\mathrm{t}<|\overline{\sigma}_-|`$, or zero if $`\overline{\sigma}_\mathrm{t}<0`$. The third case corresponds to the possibility of a charge-reversed state with equilibrium spacing $`\ell_*=\infty`$ (zone "b" of Fig. 2D).
3. $`f_\mathrm{m}`$ is the free energy density of the membrane itself. We retain only the entropy of mixing of charged and neutral surfactants, and neglect any other entropic or enthalpic packing effects in the membrane's free energy. Thus, we have the simple form $`f_\mathrm{m}=\frac{2}{a_0}k_\mathrm{B}T\left[\overline{\sigma}_+\log\overline{\sigma}_++(1-\overline{\sigma}_+)\log(1-\overline{\sigma}_+)\right]`$.
The mixing entropy term $`f_\mathrm{m}`$ opposes phase decomposition, whereas the electrostatic terms $`f_1+f_0`$ promote it. The dimensionless ratio $`\beta\equiv2\kappa\epsilon k_\mathrm{B}T/e\sigma_{\mathrm{max}}`$ describes the relative importance of these effects. Because typical surfactants have $`\sigma_{\mathrm{max}}=e/0.6\,`$nm<sup>2</sup>, a 1 mM NaCl solution with $`\kappa^{-1}\approx10\,`$nm gives $`\beta\approx0.006`$. We may thus expect to find two-phase coexistence, and indeed inspection of the free energy density reveals such an instability (Fig. 3). (In the figures we have plotted the exact Poisson–Boltzmann theory result ; these results are qualitatively similar to those derived from the simple, linearized Debye–Hückel formulæ given above .)
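The instability can also be checked directly from the three terms above. The sketch below assembles the dimensionless free energy in units of $`\sigma_{\mathrm{max}}^2/2\kappa\epsilon`$ (in which the mixing term carries the prefactor $`\beta`$) and probes its curvature; the signs of $`\overline{W}`$ and all parameter values follow the formulas as written above and should be treated as assumptions of the sketch.

```python
import numpy as np

def fbar(sp, s1, sm=-1.5, beta=0.006):
    # Dimensionless f = f_1 + f_0 + f_m in units of sigma_max^2/(2 kappa eps);
    # sp = sigma-bar_+, s1 = sigma-bar_1, sm = sigma-bar_minus.
    st = sp + s1
    if st > abs(sm):   Wbar = -2.0*sm**2
    elif st > 0.0:     Wbar = -2.0*st**2
    else:              Wbar = 0.0
    mix = beta*(sp*np.log(sp) + (1.0 - sp)*np.log(1.0 - sp))
    return s1**2 + (sm**2 + s1**2 + st**2 + Wbar) + mix

# Negative curvature of f along sigma-bar_+ signals two-phase coexistence:
sp, s1, h = 0.5, -0.25, 1e-3
curv = (fbar(sp + h, s1) - 2*fbar(sp, s1) + fbar(sp - h, s1)) / h**2
print("d^2 f / d sp^2:", round(float(curv), 3))   # negative => instability
```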
Using for illustration the values $`\overline{\sigma}_{+,\mathrm{av}}=0.5`$ and $`\overline{\sigma}_-=-1.5`$ then gives coexistence between an adhesion zone with $`\overline{\sigma}_+^{(a)}=0.95`$, covering a fraction $`\gamma_*=36`$% of the vesicle, and a charge-reversed zone with $`\overline{\sigma}_+^{(b)}=0.25`$. The latter zone presents total charge density $`\overline{\sigma}_\mathrm{t}=-0.12`$ to the outside of the vesicle (thus reversed in sign), or about $`-45`$% of the value $`\sigma_{+,\mathrm{av}}/2`$ presented to the outside when the adhering dielectric is far away.
### 3.2 Realistic membrane
To treat a realistic (finite-thickness) membrane, we must distinguish the two halves of the bilayer. Intuitively, one may expect that for a very thick membrane the energy cost of putting electric field lines in the dielectric interior of the membrane would become prohibitive, so that the system stops at Fig. 2C instead of proceeding to Fig. 2D. We now show that realistic membranes are not so thick, and do exhibit the same charge reversal (to a reduced degree) as the thin case just discussed.
Let the inner monolayer have charge fraction $`u\overline{\sigma}_+`$ and the outer $`(1-u)\overline{\sigma}_+`$. Bilayer membranes have a capacitance per area of around $`c=0.01\,`$pF/$`\mu`$m<sup>2</sup> , so we modify our free energy density $`f`$ by adding a capacitive term $`f_\mathrm{c}=\frac{\sigma_{\mathrm{max}}^2}{2\kappa\epsilon}\,\tau\left(\overline{\sigma}_1+u\overline{\sigma}_+\right)^2`$, where the dimensionless ratio $`\tau\equiv\kappa\epsilon/c`$ measures the importance of membrane thickness. Using for illustration a 1 mM NaCl electrolyte then gives $`\tau\approx7`$. We must also replace $`f_\mathrm{m}`$ by the corresponding formula for two layers.
For given $`(\overline{\sigma}_+,\overline{\sigma}_\mathrm{t})`$, we first minimize $`f(\overline{\sigma}_+,\overline{\sigma}_\mathrm{t},u,\ell)`$ over $`\ell`$ and $`u`$, then repeat the phase-coexistence analysis. The free energy surface is then qualitatively similar to Fig. 3, though the extent of coverage in equilibrium, $`\gamma_*`$, is larger, around 63% . The degree of charge reversal is now smaller, about $`-1.2`$% of the value $`\sigma_{+,\mathrm{av}}/2`$. Even this small effect causes vigorous rejection of additional adhering objects: increasing the adhesion area beyond its preferred value $`A\gamma_*`$ by 1 $`\mu`$m<sup>2</sup> on a vesicle of radius 10 $`\mu`$m comes at a net free energy cost of more than $`3000\,k_\mathrm{B}T`$. Decreasing the area below $`A\gamma_*`$ by removing a ball comes at a similar cost. Thus, realistic membranes can partition into an adhesion zone and a charge-reversed, repulsive, zone.
Our result is relatively insensitive to the values of the charge densities $`\sigma_-`$ and $`\sigma_{+,\mathrm{av}}`$, though we must have $`|\sigma_-|>\sigma_{+,\mathrm{av}}/2`$ in order to obtain the instability. Increasing the salt concentration beyond $`\widehat{n}=20\,`$mM, however, eliminates charge reversal by increasing $`\tau`$, a prediction in qualitative agreement with the experiments of . If $`\widehat{n}`$ lies between 20 mM and about 150 mM, we still find an instability, this time to partitioning into strong- and weak-adhesion zones .
In retrospect our mechanism is reminiscent of the chemiosmotic principle in bioenergetics : In this context it is well known that electrostatic effects can be transmitted over many screening lengths with the help of a semipermeable membrane. Besides entering into an explanation of the experiments in , our mechanism predicts that flaccid charged vesicles can adhere to oppositely charged substrates while remaining flaccid. Our analysis also makes testable predictions about the dependence of the equilibrium area fraction $`\gamma _{}`$ on the system parameters, notably the bilayer composition and salt concentration. Perhaps most strikingly, the charge-reversed zone found here should prove attractive to same-charge objects — a phenomenon not yet seen.
#### Acknowledgments
We thank R. Bruinsma and S. Safran for discussions, J. Crocker, K. Krishana, and E. Weeks for experimental assistance, and J. Nardi for communicating his results to us before publication. ND was supported in part by NSF grant CTS-9814398; TCL, LR, and DAW were supported in part by NSF Materials Research and Engineering Center Program under award number DMR96-32598 and equipment grants DMR97–04300 and DMR97-24486; PN was supported in part by NSF grant DMR98-07156; LR was supported in part by a Bourse Lavoisier from the Ministère des affaires Etrangères de France.
# Chaos and order in a finite universe
## Abstract
All inhabitants of this universe, from galaxies to people, are finite. Yet the universe itself is often assumed to be infinite. If instead the universe is topologically finite, then light and matter can take chaotic paths around the compact geometry. Chaos may lead to ordered features in the distribution of matter throughout space.
Contribution to the conference proceedings for “The Chaotic Universe”, ICRA, Rome.
In cosmology as well as string theory, compact spaces have received renewed attention. Most discussions evade the chaos inherent in many of these spaces. Here we pursue the consequences of chaos on a compact hyperbolic space by isolating the fractal set of closed loop orbits. We also discuss the implications this may have for the distribution of large-scale structure in our own cosmos.
Compact hyperbolic spaces are known to induce chaotic mixing of trajectories as they wrap around the space. The closed loop orbits, though seemingly special, define the entire structure of the chaotic dynamics. The dense and abundant periodic orbits pack themselves into the finite space by collectively forming a fractal.
For simplicity we view the closed loop null-geodesics on a $`2D`$ finite space. Consider the double donut built by cutting a regular octagon out of a hyperbolic $`2D`$ space and identifying the opposite sides in pairs. The fundamental domain in fig.1 is drawn on the Poincaré unit sphere with the metric
$$ds^2=-d\eta^2+\frac{4}{(1-r^2)^2}\left(dr^2+r^2\,d\varphi^2\right).$$
(1)
Geodesics are semi-circles which are orthogonal to the boundary at $`r=1`$. The shortest closed loop orbits are also drawn in fig. 1. The null geodesics are completely specified by the angular momentum $`L=4(1-r^2)^{-2}r^2\dot{\varphi}`$ and the angular coordinate $`\theta`$ on the boundary at which the geodesic originated. As geodesics exit and re-enter the fundamental domain, they are chaotically mixed. A re-entry map can be found given the rules for identifying the faces of the octagon . The closed loop orbits can be found systematically, order by order in the number of windings around the space.
In fig. 2, all of the periodic orbits are shown which execute $`5`$ windings or less around the octagon. There are $`19{,}624`$ such orbits. We find the box-counting dimension of the set by covering it with boxes of size $`\epsilon`$ on a side and counting the growth in the number of boxes needed to cover the set as $`\epsilon`$ gets smaller. The dimension is found to be $`D_0=\lim_{\epsilon\to0}\ln N(\epsilon)/\ln(1/\epsilon)=2`$. The fact that the dimension is $`2`$ reflects the complete filling of the allowed area. The geodesics of the octagon form a self-affine fractal . We find the topological entropy, the number and location of fixed points, and the spectrum of dimensions in Ref. .
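The box-counting estimate is straightforward to reproduce numerically. The sketch below measures $`D_0`$ for a generic area-filling point set in the unit disk; it stands in for the actual orbit set of fig. 2, which we do not reconstruct here.

```python
import numpy as np

def box_dimension(points, epsilons):
    # Box-counting: count occupied boxes of side eps, fit log N vs log(1/eps)
    counts = []
    for eps in epsilons:
        boxes = {tuple(b) for b in np.floor(points / eps).astype(int)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0/np.array(epsilons)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
# Area-filling test set (uniform in the unit disk): expect D_0 close to 2
phi, r = rng.uniform(0, 2*np.pi, 40000), np.sqrt(rng.uniform(0, 1, 40000))
pts = np.column_stack([r*np.cos(phi), r*np.sin(phi)])
print(round(float(box_dimension(pts, [0.2, 0.1, 0.05, 0.025])), 2))
```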
We suggest the underlying tangle of geodesics could be reflected in the distribution of large-scale structure . The largest structures in the universe have their origin in quantum fluctuations. The phenomenon of scarring in quantum chaos along the periodic orbits could lead to an enhanced filamentary structure along the shortest loops through the finite space. The web of galaxies and clusters of galaxies and the vibration modes excited by gravitational waves on the largest scales could reflect these scars. The presence of negative spatial curvature provides a natural scale with which to associate the finite topology. In flat universes, which have been extensively studied but where chaotic geodesics do not occur, there is no natural length scale on which to produce topological identifications and it is entirely ad hoc to create a fundamental topological identification scale so close to the present Hubble length. However, if a non-zero cosmological constant exists, as recent observations of distant supernova may be indicating , then the cosmological constant provides another fundamental length scale close to the current Hubble scale with which to associate topological identifications even in a zero curvature universe.
In the absence of inflation, there is no dynamical mechanism to generate large-scale fluctuations. They are simply an initial condition. A universe created finite and hyperbolic can be thought of as a realization from an ensemble of finite spaces with a spectrum of fluctuations atop a nearly constant negative curvature manifold. The spectrum of fluctuations will then be shaped according to the predictions of quantum chaos.
The tenets of quantum chaos imply that the ordered remnants of classical chaos are washed out in the transition to quantum mechanics . This expectation is based on two conjectures. As suggested by Berry , the quantum eigenmodes are well described as concentrated on the region of phase space traced out by a typical orbit over infinite times. For a completely chaotic system the orbits cover the entire space, which seems to argue for a featureless distribution of the quantum modes. The amplitudes of quantum fluctuations are also conjectured to be drawn from a Gaussian random ensemble with a flat spectrum, consistent with the predictions of Random Matrix Theory. While these assumptions seem to argue for uniformity in the quantum fluctuations, they are not inconsistent with striking geometric features. Typical eigenstates in a chaotic quantum system have shown scars of enhanced probability along short period orbits . The scars are consistent with Berry's conjecture, as typical orbits will spend the most time tracing short period loops. The scars can be related to the classical fractal of closed loops. For a completely chaotic system the fractal will fill the space with a box-counting dimension equal to the dimension of the space, as we found to be the case for the compact octagon. However, if regions of the fractal are visited more frequently than others, as the shortest closed loops are in a compact space, then the scars might result .
Scars can be regions of underdensity as well as overdensity. The consequence for the build-up of structure on the largest scales could be a tendency to align with some short period orbits. In a $`2D`$ universe, we might see a web-like distribution of clusters aligned along the orbits of fig. 1, while structure on smaller scales would look featureless.
The scars would have little effect on the cosmic microwave background (CMB) although evidence of topology will be conspicuous through patterns or correlated circles . The surface of last scattering is also not likely to cut right through a scar. As a result, it is reasonable that the CMB will appear smooth when the distribution of galaxies does not. The galaxies might be marking the path of the short period orbits, providing a map of the shortest route around a finite cosmos.
JDB is supported by a PPARC Senior Fellowship. JL is supported by PPARC.
# 2D Numerical Simulation of the Resistive Reconnection Layer.
## Abstract
In this paper we present a two-dimensional numerical simulation of a reconnection current layer in incompressible resistive magnetohydrodynamics with uniform resistivity in the limit of very large Lundquist numbers. We use realistic boundary conditions derived consistently from the outside magnetic field, and we also take into account the effect of the backpressure from flow into the the separatrix region. We find that within a few Alfvén times the system reaches a steady state consistent with the Sweet–Parker model, even if the initial state is Petschek-like.
PACS Numbers: 52.30.Jb, 96.60.Rd, 47.15.Cb.
Magnetic reconnection is of great interest in many space and laboratory plasmas , and has been studied extensively for more than four decades. The most important question is that of the reconnection rate. The process of magnetic reconnection is so complex, however, that this question is still not completely resolved, even within the simplest possible canonical model: two-dimensional (2D) incompressible resistive magnetohydrodynamics (MHD) with uniform resistivity $`\eta`$ in the limit of $`S\to\infty`$ (where $`S=V_AL/\eta`$ is the global Lundquist number, $`L`$ being the half-length of the reconnection layer). Historically, there were two drastically different estimates for the reconnection rate: the Sweet–Parker model gave a rather slow reconnection rate ($`E_{\mathrm{SP}}\sim S^{-1/2}`$), while the Petschek model gave any reconnection rate in the range from $`E_{\mathrm{SP}}`$ up to the fast maximum Petschek rate $`E_{\mathrm{Petschek}}\sim1/\log S`$. Up until the present it was still unclear whether Petschek-like reconnection faster than Sweet–Parker reconnection is possible. Biskamp's simulations are very persuasive that, in resistive MHD, the rate is generally that of Sweet–Parker. Still, his simulations are for $`S`$ in the range of a few thousand, and his boundary conditions are somewhat tailored to the reconnection rate he desires, the strength of the field and the length of the layer adjusting to yield the Sweet–Parker rate. Thus, a more systematic boundary layer analysis is desirable to really settle the question.
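To appreciate the gulf between the two estimates, consider an illustrative large Lundquist number (the value below is just an example):

```python
import math

S = 1.0e8                      # illustrative Lundquist number
E_SP = S**-0.5                 # Sweet-Parker rate ~ S^(-1/2)
E_P = 1.0/math.log(S)          # maximum Petschek rate ~ 1/log S
print(f"E_SP = {E_SP:.1e}, E_Petschek = {E_P:.3f}, ratio = {E_P/E_SP:.0f}")
```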
We believe that the methods developed in the present paper are rather universal and can be applied to a very broad class of reconnecting systems. However, for definiteness and clarity we keep in mind a particular global geometry presented in Fig. 1 (although we do not use it explicitly in our present analysis). This Figure shows the situation somewhere in the middle of the process of merging of two plasma cylinders. Regions I and II are ideal MHD regions: regions I represent unreconnected flux, and region II represents reconnected flux. The two regions I are separated by the very narrow reconnection current layer. Plasma from regions I enters the reconnection layer and gets accelerated along the layer, finally entering the separatrix region between regions I and II. In general, both the reconnection layer and the separatrix region require resistive treatment.
In the limit $`S\to\infty`$ the reconnection rate is slow compared with the Alfvén time $`\tau_A=L/V_A`$. Then one can break the whole problem into the global problem and the local problem . The solution of the global problem is represented by a sequence of magnetostatic equilibria, while the solution of the local problem (concerning the narrow resistive reconnection layer and the separatrix region) determines the reconnection rate. The role of the global problem is to give the general geometry of the reconnecting system, the position and the length of the reconnection layer and of the separatrix, and the boundary conditions for the local problem. These boundary conditions are expressed in terms of the outside magnetic field $`B_{y,0}(y)`$, where $`y`$ is the direction along the layer. In particular, $`B_{y,0}(y)`$ provides the characteristic global scales: the half-length of the layer $`L`$, defined as the point where $`B_{y,0}(y)`$ has a minimum, and the global Alfvén speed, defined as $`V_A=B_{y,0}(0)/\sqrt{4\pi\rho}`$.
In the present paper we study the local problem using the boundary conditions provided by our previous analysis of the global problem . Our main goal is to determine the internal structure of a steady state reconnection current layer (i.e., to find the 2D profiles of plasma velocity and magnetic field), and the reconnection rate represented by the (uniform) electric field $`E`$. We assume incompressible resistive MHD with uniform resistivity. Perfect mirror symmetry is assumed with respect to both the $`x`$ and $`y`$ axes (see Fig. 2).
This physical model is described by the following three steady-state fluid equations: the incompressibility condition, $`\nabla\cdot\mathbf{v}=0`$; the $`z`$ component of Ohm's law, $`\eta j_z=E+[\mathbf{v}\times\mathbf{B}]_z`$; and the equation of motion, $`\mathbf{v}\cdot\nabla\mathbf{v}=-\nabla p+[j_z\hat{z}\times\mathbf{B}]`$ (with the density set to one).
Now we take the crucial step in our analysis. We note that the reconnection problem is fundamentally a boundary layer problem, with $`S^{-1}`$ being the small parameter. This allows us to perform a rescaling procedure inside the reconnection layer, to make the rescaled resistivity equal to unity. We rescale distances and fields in the $`y`$-direction by the corresponding global values ($`L`$, $`B_{y,0}(0)`$, and $`V_A`$), while rescaling distances and fields in the $`x`$-direction by the corresponding local values: $`x\to x\,\delta_0`$, $`v_x\to v_x\,V_A\delta_0/L`$, $`B_x\to B_x\,B_{y,0}(0)\delta_0/L`$, $`E\to E\,B_{y,0}(0)V_A\delta_0/L`$. Here, $`\delta_0\equiv LS^{-1/2}`$ is the Sweet–Parker thickness of the current layer. Thus, one can see that the small scale $`\delta_0`$ emerges naturally. Then, using the small parameter $`\delta_0/L=S^{-1/2}\ll1`$, one obtains a simplified set of fluid equations for the rescaled dimensionless quantities:
$$\nabla\cdot\mathbf{v}=0,$$
$`(1)`$
$$E=\frac{\partial B_y}{\partial x}-v_xB_y+v_yB_x,$$
$`(2)`$
(where the first term on the right hand side (RHS) is the resistive term) and
$$\mathbf{v}\cdot\nabla v_y=-\frac{\partial p}{\partial y}+B_x\frac{\partial B_y}{\partial x}.$$
$`(3)`$
In the last equation (representing the equation of motion in the $`y`$-direction, along the current layer) the pressure term can be expressed in terms of $`B_y(x,y)`$ and the outside field $`B_{0,y}(y)`$ by using the vertical pressure balance (representing the $`x`$-component of the equation of motion, across the current layer):
$$p(x,y)=\frac{B_{y,0}^2(y)}{2}-\frac{B_y^2(x,y)}{2}.$$
$`(4)`$
We believe that this rescaling procedure captures all the important dynamical features of the reconnection process.
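As a reminder of where the Sweet–Parker thickness used in the rescaling comes from, the standard steady-state balance (consistent with the definitions in this section) reads

$$E\sim\frac{\eta B_{y,0}(0)}{\delta_0}\sim v_{\mathrm{in}}B_{y,0}(0),\qquad v_{\mathrm{in}}L\sim V_A\delta_0\quad\Longrightarrow\quad\frac{\delta_0}{L}\sim\left(\frac{\eta}{V_AL}\right)^{1/2}=S^{-1/2}.$$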
The problem is essentially two-dimensional, and requires a numerical approach. Therefore, we developed a numerical code for the main reconnection layer, supplemented by another code for the separatrix region. The solution in the separatrix region is needed to provide the downstream boundary conditions for the main layer (see below).
The steady state was achieved by following the true time evolution of the system starting with initial conditions discussed below. The time evolution was governed by two dynamical equations:
$$\dot{\Psi}=-(\mathbf{v}\cdot\nabla\Psi)+\frac{\partial^2\Psi}{\partial x^2}+\left(\eta_y\frac{\partial^2\Psi}{\partial y^2}\right),$$
$`(5)`$
$$\dot{v}_y=-(\mathbf{v}\cdot\nabla v_y)-\frac{d}{dy}\left[\frac{B_{y,0}^2(y)}{2}\right]+(\mathbf{B}\cdot\nabla B_y)+\left(\nu_y\frac{\partial^2v_y}{\partial y^2}\right).$$
$`(6)`$
(Small artificial resistivity $`\eta_y`$ and viscosity $`\nu_y`$ were added for numerical stability.) The natural unit of time is the Alfvén time $`\tau_A=L/V_A`$. The magnetic flux function $`\Psi`$ is related to $`\mathbf{B}`$ via $`B_x=\partial\Psi/\partial y`$ and $`B_y=-\partial\Psi/\partial x`$. At each time step, $`v_x`$ was obtained by integrating the incompressibility condition: $`v_x(x,y)=-\int_0^x(\partial v_y/\partial y)\,dx^{\prime}`$. Note that this means that we do not prescribe the incoming velocity, and hence the reconnection rate: the system itself determines how fast it wants to reconnect.
We used the finite difference method with centered derivatives in $`x`$ and $`y`$ (second order accuracy). The time derivatives were one-sided. The numerical scheme was explicit in the $`y`$ direction. In the $`x`$ direction the resistive term $`\partial^2\Psi/\partial x^2`$ was treated implicitly, while all other terms were treated explicitly. Calculations were carried out on a rectangular uniform grid. We considered only one quadrant because of symmetry (see Fig. 2). More details can be found in Ref. .
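The semi-implicit update just described can be sketched as follows; the boundary handling (periodic rolls), the time step, and the helper names are illustrative simplifications, not the actual code.

```python
import numpy as np

def vx_from_vy(vy, dx, dy):
    # Eq. (1): v_x(x, y) = -int_0^x (dv_y/dy) dx'
    dvydy = (np.roll(vy, -1, axis=1) - np.roll(vy, 1, axis=1)) / (2*dy)
    return -np.cumsum(dvydy, axis=0) * dx

def step_psi(psi, vx, vy, dx, dy, dt):
    # One update of Eq. (5): advection explicit (centered differences),
    # resistive d^2/dx^2 term implicit; the small artificial eta_y term
    # is omitted here for brevity.
    dpx = (np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / (2*dx)
    dpy = (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / (2*dy)
    rhs = psi - dt * (vx*dpx + vy*dpy)
    nx = psi.shape[0]
    r = dt / dx**2
    A = (1 + 2*r)*np.eye(nx) - r*np.eye(nx, k=1) - r*np.eye(nx, k=-1)
    return np.linalg.solve(A, rhs)     # solves column-by-column in x
```

A full run would alternate `step_psi` with an explicit update of $`v_y`$ per Eq. (6) and then recompute $`v_x`$ from the incompressibility condition.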
The boundary conditions on the lower and left boundaries were those of symmetry (see Fig. 2). On the upper (inflow) boundary $`x=x_{\mathrm{lim}}`$ the boundary conditions were $`\partial v_y/\partial x=0`$ (which worked better than $`v_y=0`$) and $`B_y(x_{\mathrm{lim}},y)=B_{0,y}(y)`$ — the prescribed outside magnetic field. In our simulations we chose $`B_{0,y}(y)=B_0+(1-B_0)\sqrt{1-y^2}`$ with $`B_0=0.3`$, consistent with the global analysis of our previous paper .
The boundary conditions on the right (downstream) boundary cannot be given in a simple closed form. Instead, they require matching with the solution in the separatrix region, which itself is just as complicated as the main layer. Therefore, we have developed a supplemental numerical procedure for the separatrix region. Noticing that in the separatrix region the resistive term should not qualitatively change the solution, we adopt a simplified ideal-MHD model for the separatrix. This model is expected to give a qualitatively correct picture of the dynamical influence of the separatrix region on the main layer, and thus sufficiently reasonable downstream boundary conditions for the main layer. In particular, our model includes the effects of the backpressure that the separatrix exerts on the main layer.
The advantages of our approach are: (i) use of the rescaled equations takes us directly into the realm of $`S\to \mathrm{\infty }`$; (ii) we do not prescribe the incoming velocity $`v_x(x_{\mathrm{lim}},y)`$ as a boundary condition: $`v_x`$ is determined not by the $`x`$-component of the equation of motion, but rather by $`v_y`$ via the incompressibility condition. As a result, we do not prescribe the reconnection rate; (iii) the use of true time evolution guarantees that the achieved steady state is two-dimensionally stable; (iv) we have a realistic variation of the outside magnetic field along the layer, with the endpoint $`L`$ of the layer clearly defined as the point where $`B_{0,y}(y)`$ has a minimum (see Ref. ).
Let us now discuss the results of our simulations. We find that, after a period of a few Alfvén times, the system reaches a Sweet–Parker-like steady state, independent of the initial configuration. In particular, when we start with Petschek-like initial conditions (see Fig. 3a), the high velocity flow rapidly sweeps away the transverse magnetic field $`B_x`$ (see Fig. 4). This is important, because, for a Petschek-like configuration to exist, the transverse component of the magnetic field on the midplane, $`B_x(0,y)`$, must be large enough to be able to sustain the Petschek shocks in the field reversal region. For this to happen, $`B_x(0,y)`$ has to rise rapidly with $`y`$ inside a very short diffusion region, $`y<y_*\ll L`$ (in the case $`E_{\mathrm{init}}=2E_{\mathrm{SP}}`$, presented in Fig. 3a, $`y_*=L/4`$), to reach a certain large value ($`B_x=2`$ for $`E_{\mathrm{init}}=2E_{\mathrm{SP}}`$) for $`y_*<y<L`$. While the transverse magnetic flux is being swept away by the plasma flow, it is being regenerated by the merging of the $`B_y`$ field, but only at a certain rate and only on a global scale in the $`y`$-direction, related to the nonuniformity of the outside magnetic field $`B_{y,0}(y)`$, as discussed by Kulsrud . As a result, the initial Petschek-like structure is destroyed, and the inflow of the magnetic flux through the upper boundary drops in a fraction of one Alfvén time. Then, after a transient period, the system reaches a steady state consistent with the Sweet–Parker model.
The fact that we rescaled $`x`$ using the Sweet–Parker scaling does not mean that we prescribed the Sweet–Parker reconnection rate. Indeed, if the reconnecting system wanted to evolve towards Petschek's fast reconnection, it would try to develop the corresponding new characteristic structures, e.g. Petschek-like shocks, which we would be able to see. Note that, if Petschek were correct, there should be a whole range of stable reconnection rates, including rates exceeding the Sweet–Parker rate $`E_{\mathrm{SP}}`$ by any finite factor. However, in our simulations we have demonstrated that there is only one stable solution and that it corresponds to $`E=E_{\mathrm{SP}}`$. In this sense we have demonstrated that Petschek must be wrong, since reconnection cannot go even a factor of two faster than Sweet–Parker, let alone almost the entire factor of $`\sqrt{S}`$. There seems to be no alternative to the conclusion that fast reconnection is impossible.
It is interesting that in Petschek’s original paper the length of the central diffusion region $`y_*`$ is an undetermined parameter, and the reconnection velocity $`v_{\mathrm{rec}}`$ depends on this parameter as $`V_A(L/y_*)^{1/2}/\sqrt{S}`$. If $`y_*`$ is taken as small as possible, then Petschek finds that $`v_{\mathrm{rec}}\sim V_A/\mathrm{log}(S)`$. However, $`y_*`$ should be determined instead by balancing the generation of the transverse field $`B_x`$ against its loss by the Alfvénic flow (it should be remarked that Petschek did not discuss the origin of this transverse field in his paper). As we discussed above, this balance yields $`y_*\sim L`$, with the resulting unique rate equal to that of Sweet–Parker. These results are borne out by our time dependent numerical simulations.
The final steady state solution is represented in Fig. 3b. It corresponds to $`x_{\mathrm{lim}}=5.0`$, $`y_{\mathrm{lim}}=1.0`$, $`\eta _y=\nu _y=0.01`$. We see that the solution is consistent with the Sweet–Parker picture of the reconnection layer: the plasma parameters change on the scale of order $`\delta _0`$ in the $`x`$-direction and on a global scale $`L`$ in the $`y`$-direction. The reconnection rate in the steady state is surprisingly close to the typical Sweet–Parker reconnection rate $`E_{\mathrm{SP}}=\eta ^{1/2}V_AB_{y,0}(0)`$. The solution is numerically robust: it does not depend on $`x_{\mathrm{lim}}`$, $`y_{\mathrm{lim}}`$ or on the small artificial resistivity $`\eta _y`$ and viscosity $`\nu _y`$.
Several things should be noted about this solution. First, $`j(x,y)\to 0`$ (and $`B_y(x,y)\to B_{0,y}(y)`$) monotonically as $`x\to \mathrm{\infty }`$, meaning that there is no flux pile-up. Second, as can be seen from Fig. 4, $`B_x(x=0,y)\propto y`$ near $`y=0`$, contrary to the cubic behavior predicted by Priest–Cowley . This is due to the viscous boundary layer near the midplane $`x=0`$ and the resulting nonanalytic behavior in the limit of zero viscosity, as explained in Ref. . Third, there is a sharp change in $`B_x`$ and $`j`$ near the downstream boundary $`y=y_{\mathrm{lim}}=1`$, due to the fact that in the separatrix region we neglect the resistive term (which is in fact finite).
It appears that the destruction of the initially-set-up Petschek-like configuration and its conversion into a Sweet-Parker layer happens so fast that it is determined by the dynamics in the main layer itself and by its interaction with the upstream boundary conditions (the scale of nonuniformity of $`B_{0,y}`$), as outlined above. Therefore, the fact that our model for the separatrix does not describe the flow in the separatrix with complete accuracy seems to be unimportant. However, for the solution of the problem to be really complete, a better job has to be done in describing the separatrix dynamics, and, particularly, the dynamics in the very near vicinity of the endpoint of the reconnection layer. A proper consideration of the endpoint cannot be done in our rescaled variables; a further rescaling of variables and matching is needed.
To summarize, in this paper we present a definite solution to a particular clear-cut, mathematically consistent problem concerning the internal structure of the reconnection layer within the canonical framework (incompressible 2D MHD with uniform resistivity) with the outside field $`B_{0,y}(y)`$ varying on the global scale along the layer. Petschek-like solutions are found to be unstable, and the system quickly evolves from them to the unique stable solution corresponding to the Sweet–Parker layer. The reconnection rate is equal to the (rather slow) Sweet–Parker reconnection rate, $`E_{\mathrm{SP}}1/\sqrt{S}`$. This main result is consistent with the results of simulations by Biskamp and also with the experimental results in the MRX experiment .
Finally, because the Sweet–Parker model with classical (Spitzer) resistivity is too slow to explain solar flares, one has to add new physics to the model, e.g., locally enhanced anomalous resistivity. This should change the situation dramatically, and may even create a situation where a Petschek-like structure with fast reconnection is possible (see, for example, Refs. ).
We are grateful to D. Biskamp, S. Cowley, T. Forbes, M. Meneguzzi, S. Jardin, M. Yamada, H. Ji, S. Boldyrev, and A. Schekochihin for several fruitful discussions. This work was supported by a Charlotte Elizabeth Procter Fellowship, by the Department of Energy Contract No. DE-AC02-76-CHO-3073, and by NASA’s Astrophysical Program under Grant NAGW2419.
# Spatially Resolved Hopkins Ultraviolet Telescope Spectra of NGC 1068
## 1. Introduction
The proximity, brightness, and rich phenomenology of NGC 1068 have made it a key source for our current understanding of the structure and physics of active galactic nuclei (AGN). The bright, narrow, high-excitation emission lines of NGC 1068 define it as the prototype of the Seyfert 2 class. In polarized light, however, it shows the blue continuum and broad permitted emission lines typical of Seyfert 1s (Antonucci & Miller (1985); Miller et al. (1991); Code et al. (1993); Antonucci, Hurt, & Miller (1994)). These characteristics inspired the “unified model” of AGN in which the different types of Seyfert galaxy result from a combination of orientation, obscuration, and reflection of light from the continuum source and the broad-line region (BLR). (See the review by Antonucci (1993).) For Seyfert 2 galaxies, the observer’s line of sight lies near the plane of an opaque torus that blocks a direct view of the continuum source and the BLR. Electrons in clouds of hot gas and dust in cooler clouds above and below the plane of the torus reflect radiation from central regions into the observer’s line of sight. For Seyfert 1 galaxies, the observer’s line of sight lies well above the plane of the torus, resulting in an unobstructed view of the interior.
The obscuring torus not only blocks radiation from reaching an observer, but it also shadows gas in the surrounding regions of the galaxy. This anisotropic illumination can produce conical emission-line regions frequently referred to as “ionization cones” (Pogge (1989); Tsvetanov (1989); Evans et al. (1991), 1993, 1994). This standard interpretation presumes that photoionization by radiation from the central source is the primary energy input into the narrow-line region (NLR). The observed line ratios corroborate this interpretation when compared to photoionization models (e.g., Ferland & Osterbrock (1986); Veilleux & Osterbrock (1987); Binette, Courvoisier, & Robinson (1988)). However, in addition to radiation, kinetic energy in the form of outflowing winds and radio jets may also play a significant role in transferring energy from the nuclear region into the surrounding galaxy along the axis of the torus. The principal reflecting region for Seyfert 2s is most likely a wind of hot ($`10^5`$ K) electrons driven off the torus by X-rays from the central source (Krolik & Begelman (1986); Krolik & Lepp (1989)). Radio jets in Seyferts are also preferentially aligned with the axis of the ionization cones (Wilson & Tsvetanov (1994)). Kinetic energy from these sources may be a significant input to the energy budget of the NLR. A number of authors have suggested that shocks from such interactions may power a large fraction of the line emission. Morse, Raymond, & Wilson (1996) review the status of shocks for ionizing gas in the NLR. In cases like the bow shock models of Wilson & Ulvestad (1987) and Taylor, Dyson, & Axon (1992), shocks compress the gas and enhance its radiative output, but nuclear radiation drives the ionization. In the “autoionizing shock” models of Sutherland, Bicknell, & Dopita (1993) and Dopita & Sutherland (1995), ionizing photons generated in the primary shocks themselves photoionize the surrounding gas.
A key observational feature of the autoionizing shock models is the strength of collisionally excited far-UV emission lines. Lines such as O vi $`\lambda \lambda 1032,1037`$, C iii $`\lambda 977`$, and N iii $`\lambda 991`$ have high excitation temperatures and are thus prime coolants in the high temperature regions of fast shocks. These lines are particularly strong in NGC 1068 as seen in HUT observations during the Astro-1 mission (Kriss et al. 1992), and the temperature-sensitive ratios I(C iii\] $`\lambda 1909`$)/I(C iii $`\lambda 977`$) and I(N iii\] $`\lambda 1750`$)/I(N iii $`\lambda 991`$) implied temperatures exceeding 50,000 K— temperatures far higher than those characteristic of thermally stable photoionized gas. However, Ferguson, Ferland, & Pradhan (1995) argued that strong C iii $`\lambda 977`$ and N iii $`\lambda 991`$ could arise from fluorescence in photoionized gas if turbulent velocities exceeded $`\sim `$1000 $`\mathrm{km}\mathrm{s}^{-1}`$.
Radio structures in NGC 1068 show a strong spatial correlation with the emission line gas at visible wavelengths, e.g. \[O iii\] $`\lambda 5007`$ (Wilson & Ulvestad (1987); Evans et al. (1991); Gallimore et al. (1996); Capetti, Axon, & Macchetto (1997)), and in the near-infrared, e.g., \[Fe ii\] 1.6435 $`\mu `$m (Blietz et al. (1994)). The Astro-1 HUT spectra of NGC 1068 lacked spatial resolution on scales smaller than the $`18^{\prime \prime }`$ and $`30^{\prime \prime }`$ circular apertures used for the observations. Neff et al. (1994) presented far-UV images with $`\sim 2^{\prime \prime }`$ resolution obtained with the Ultraviolet Imaging Telescope (UIT) on the Astro-1 mission, but this broad-band (1250–2000 Å) image did not separate line and continuum emission. To obtain information on the spatial distribution of the far-UV emission lines and continuum flux, we carried out the observations described in this paper during the Astro-2 mission. We compare our spatially resolved spectra to the far-UV UIT images and to emission line and continuum images at longer wavelengths obtained with HST (Dressel et al. (1997)). We find that the far-UV emission lines observed with HUT are more extended than the \[O iii\] $`\lambda 5007`$ emission observed with HST, and that they are offset to the northeast along the direction of the radio jet. At wavelengths greater than 1200 Å, the UV continuum has a greater spatial extent than the emission lines. At shorter wavelengths, it becomes more spatially concentrated.
In sections 2 and 3 we describe the HUT Astro-2 observations and our data reduction process. We then discuss in §4 the spatial information that can be gleaned from the HUT observations. In §5 we compare the HUT observations to the UIT and HST images. We discuss the implications of our observations for the excitation of the line emission in NGC 1068 and for the origin of the ultraviolet continuum in §6. We summarize our conclusions in §7.
## 2. HUT Observations
During the course of the 16-day Astro-2 space shuttle mission in 1995 March we used HUT to obtain one-dimensional spectra through a 12″ aperture at three distinct spatial locations in the nuclear region of NGC 1068. HUT uses a 0.9-m primary mirror in conjunction with a prime-focus, Rowland-circle spectrograph to obtain spectra with a resolution of $`\sim `$3 Å spanning the 820–1840 Å band. The primary mirror and the concave grating are both coated with SiC to provide high UV reflectivity at wavelengths shortward of 1200 Å. Light dispersed by the prime-focus grating is focused onto a photon-counting detector consisting of a micro-channel-plate intensifier with a CsI photocathode and a phosphor-screen anode. A 1024-diode linear Reticon array is used to detect the intensified pulses on the anode. Events are centroided to a half-diode precision, producing a 2048-pixel, one-dimensional spectrogram. Davidsen et al. (1992) provide a detailed description of HUT. Improvements made to HUT for the Astro-2 mission and HUT’s in-flight performance are described by Kruk et al. (1995).
The HUT guidance system relies on a slit-viewing video camera and guide stars. Using this information, HUT is pointed manually by a payload specialist aboard the space shuttle. Video frames taken during each observation enable later reconstruction of HUT’s pointing. Positions of guide stars in the video frames can be centroided to an accuracy of $`0.5^{\prime \prime }`$ (about 0.5 pixels). The aperture position in the field of view also changes slightly between observations as the slit wheel rotates. Its position can be measured to an accuracy of $`0.5^{\prime \prime }`$ by measuring the apparent hole it leaves in images with a bright background, such as those obtained near the earth limb in orbital daylight. In Figure 1 we show the pointing errors derived from the video images during our observations of NGC 1068. These errors give the location of the optical nucleus relative to the center of the HUT $`12^{\prime \prime }`$ aperture.
Three dominant groupings of pointing errors are apparent in Figure 1, and we use these as the basis for the three separate spectra we discuss here. During the course of the first observation (A), the payload specialist moved the aperture $`3^{\prime \prime }`$ southwest to place the optical nucleus definitely within the slit. As this was a fairly substantial pointing correction, we have split this observation into two separate pieces. During A1 the optical nucleus is on the very edge of the slit and in A2 the optical nucleus is well within the slit. The second observation (B) occurred one day later. The ionization cone was centered within the aperture while the optical nucleus was near the southwest edge of the aperture.
## 3. HUT Data
Due to the differences between night and day airglow, we separated the night and day portions of observations A1 and B for independent processing. All of observation A2 occurred during orbital night. These data groupings are summarized in Table 1. We reduced the data using the standard procedures described by Kruk et al. (1995; 1999). We determined background from regions free of airglow at wavelengths shortward of the 912 Å Lyman limit. With the exception of the extremely strong geocoronal Ly$`\alpha `$ line at 1216 Å, we fitted the airglow lines with symmetric Gaussians and then subtracted the fitted profiles. Fitting the geocoronal Ly$`\alpha `$ line is more difficult due to its broad scattering wings. Thus we constructed Ly$`\alpha `$ templates from blank field observations taken during the Astro-2 mission. We then subtracted the appropriately scaled Ly$`\alpha `$ profile, including its broad scattering wings, from each spectrum. We flux calibrated the background-subtracted spectra by applying a time-dependent inverse-sensitivity curve derived from HUT observations and atmospheric models of white dwarfs (Kruk et al. (1995); Kruk et al. (1999)). A minor correction for second-order light in the 1824–1840 Å range was made based on the measured intensity of the 912–920 Å region. The errors for the raw count spectra were calculated by assuming a Poisson distribution. These errors were propagated through the reduction process.
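The airglow step can be sketched as follows; this is our schematic with scipy standing in for the actual reduction software, and the commented line list is an illustrative assumption rather than the mission's airglow catalog.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wave, amp, center, sigma):
    return amp * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def subtract_airglow(wave, flux, line_centers, window=8.0):
    """Fit a symmetric Gaussian plus a constant pedestal to each airglow
    line within a small wavelength window, then subtract the Gaussian
    (keeping the underlying pedestal, which belongs to the source)."""
    clean = flux.copy()
    for c in line_centers:
        sel = np.abs(wave - c) < window
        model = lambda w, a, cen, sig, base: gaussian(w, a, cen, sig) + base
        base0 = np.median(flux[sel])
        p0 = [flux[sel].max() - base0, c, 1.5, base0]
        popt, _ = curve_fit(model, wave[sel], flux[sel], p0=p0)
        clean -= gaussian(wave, *popt[:3])
    return clean

# hypothetical usage with a few common O I / H I airglow wavelengths (A):
# flux_clean = subtract_airglow(wave, flux, [989.8, 1027.0, 1304.9, 1356.0])
```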
We use the IRAF task specfit (Kriss (1994)) to fit the continuum and emission lines in our spectra. (The Image Reduction and Analysis Facility, IRAF, is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA), under cooperative agreement with the National Science Foundation.) For the continuum we assume a power law shape of the form $`\mathrm{F}_\lambda =\mathrm{F}_0(\lambda /1000)^{-\alpha }`$ modified by the extinction curve of Cardelli, Clayton, & Mathis (1989). Keeping $`\mathrm{R}_\mathrm{V}`$ fixed at 3.1, we found that $`\mathrm{E}(\mathrm{B}-\mathrm{V})`$ = 0.02 provides the best fit for all three spectra. The best-fit power law normalization and spectral indices are listed in Table 2. Our best-fit extinction is in rough agreement with those used in previous ultraviolet studies of NGC 1068 (Kriss et al. (1992), Snijders et al. (1986), and Ferguson et al. (1995)). Fortunately, due to the comparative nature of our study, our results are insensitive to uncertainties in the extinction.
Table 2
Continuum Parameters for the HUT Spectra of NGC 1068<sup>a</sup>

Observation    $`\mathrm{F}_0`$    $`\alpha `$
A1    4.939    0.240
A2    5.834    0.294
B    8.070    0.415

<sup>a</sup>The parameters describe a power law $`\mathrm{F}_\lambda =\mathrm{F}_0(\lambda /1000)^{-\alpha }`$ with $`\mathrm{F}_0`$ in units of $`10^{-14}\mathrm{ergs}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{\AA }^{-1}`$, assuming an extinction correction of $`\mathrm{E}(\mathrm{B}-\mathrm{V})`$ = 0.02.
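A minimal version of such a continuum fit might look like the sketch below; for simplicity it omits the small $`\mathrm{E}(\mathrm{B}-\mathrm{V})`$ = 0.02 extinction correction that the full specfit model applies, and the line-free continuum windows named here are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def powerlaw(wave, f0, alpha):
    """F_lambda = F_0 * (lambda / 1000 A)^(-alpha)."""
    return f0 * (wave / 1000.0) ** (-alpha)

def fit_continuum(wave, flux, err, windows):
    """Fit the power law using only line-free windows, given as a list
    of (lo, hi) wavelength pairs in Angstroms."""
    sel = np.zeros(wave.size, dtype=bool)
    for lo, hi in windows:
        sel |= (wave >= lo) & (wave <= hi)
    popt, pcov = curve_fit(powerlaw, wave[sel], flux[sel],
                           sigma=err[sel], p0=[flux[sel].mean(), 0.3])
    return popt, np.sqrt(np.diag(pcov))

# hypothetical line-free windows:
# (f0, alpha), (f0_err, alpha_err) = fit_continuum(
#     wave, flux, err, windows=[(1140, 1180), (1440, 1480), (1700, 1740)])
```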
To measure the emission lines, we fit symmetric Gaussians to the spectral features of NGC 1068. In our fits we have not included the wavelength region from 1200–1217 Å due to contamination from the Ly$`\alpha `$ geocoronal line. However, NGC 1068’s intrinsic Ly$`\alpha `$ emission line was sufficiently bright and redshifted to be disentangled from geocoronal emission. We detected and fit broad components to the Ly$`\alpha `$ and C IV $`\lambda \lambda 1548,1551`$ emission lines as described by Kriss et al. (1992). In our fits we linked the widths of the Ly$`\alpha `$ and C IV broad line components so that they were identical. We also linked the widths and redshifts of the blended Ly$`\beta \lambda 1026`$ and O VI $`\lambda \lambda 1032,1038`$ lines.
Tables 3–5 give the best-fit fluxes, velocities, and full-width at half-maximum (FWHM) for the emission lines in observations A1, A2, and B. The fluxes are as observed, with no correction for extinction. The velocities are relative to a systemic redshift of $`z=0.0038`$ (Huchra et al. (1992)). The quoted error bars are the formal $`1\sigma `$ uncertainties derived from the error matrix of our fit. We caution that errors for observation A1 could be underestimated due to a high percentage of day airglow contamination and relatively large pointing deviations for this observation. Our flux measurements and Gaussian widths are consistent with earlier data from Astro-1 (Kriss et al. (1992)) and IUE (Snijders et al. (1986)). The widths of our Ly$`\alpha `$ and C IV broad lines are in excellent agreement with the H$`\beta `$ width measured by Miller et al. (1991; FWHM=3030 $`\mathrm{km}\mathrm{s}^{-1}`$). As in the earlier observations by Kriss et al. (1992), we were unable to detect the O III\] $`\lambda \lambda 1660,1666`$ lines in any of our observations.
## 4. Intercomparison of the HUT Observations
Aperture location B, centered on the ionization cone, shows both brighter far-UV line emission and brighter continuum emission than either of the other two aperture locations. This is readily seen in Figure 3, which compares the short wavelength portions of the HUT spectra. To make a more quantitative comparison, we calculated emission line and continuum flux ratios using the data from location B as a fiducial reference. Figures 4 and 5 show the relative emission line and continuum fluxes, respectively.
We can use the relative intensities seen at the various aperture locations to infer the location and extent of the far-UV line and continuum emission in NGC 1068. For this analysis we also use the Astro-1 data from Kriss et al. (1992). Since these data were obtained through even larger apertures ($`18^{\prime \prime }`$ and $`30^{\prime \prime }`$ diameters), they provide an overall normalization for the total flux. (We assume that the fluxes have not varied since our 1990 Astro-1 observations.) Qualitatively, it is helpful to first consider how one would expect the ratios to behave in some simple, limiting cases. Referring to the relative aperture locations illustrated in Figure 1, one can see that a point source at the location of the optical nucleus would show roughly the same intensity at all three aperture locations. If one offset a point source to the northeast, toward the center of aperture B, its intensity in aperture B would brighten slightly as vignetting due to the aperture edge decreased. Conversely, its intensity as seen through aperture locations A1 and A2 would decrease drastically as the source moved beyond the edge of the aperture. Comparisons to the observed Astro-1 intensities mainly constrain the extent of the emission source. In the limit of a large, uniformly bright extended source, the intensities seen through apertures A1, A2, and B would all be comparable, but they would be fainter than that seen in the Astro-1 observation by the ratio of the angular areas, $`(12^{\prime \prime }/18^{\prime \prime })^2=0.44`$. For sources with angular extents smaller than the aperture sizes, the ratio relative to Astro-1 would gradually approach unity, depending on how close the center of emission was to the aperture edge.
As one can see in Figure 4 and Table 6, the ratios of the line fluxes among the observations are remarkably uniform. We find error-weighted averages for the emission line flux ratios to be $`\mathrm{A1}/\mathrm{B}=0.42\pm 0.03`$ and $`\mathrm{A2}/\mathrm{B}=0.67\pm 0.02`$. Within this range none of the measured flux ratios differ appreciably (Fig. 4). The far-UV line emission distribution appears largely independent of species or degree of ionization. In particular, high-excitation-temperature lines like C III $`\lambda 977`$, N III $`\lambda 991`$, and O VI $`\lambda \lambda 1032,1037`$ appear to have comparable spatial distributions, and their relative brightness seen through aperture B suggests that most of this emission originates from a region northeast of the optical nucleus, closer to the center of aperture B.
Fig. 3.— Comparison of the three HUT spectra in the 950–1100 Å range. The spectra have been smoothed with a box-car filter of width 5 pixels, but they have not been renormalized or scaled. The high intensity of the B spectrum, where the aperture was centered on the ionization cone, relative to A1 and A2, where the ionization cone was largely outside the aperture, is apparent. Note also the greater relative brightness of the emission lines in aperture B compared to the continuum.
The intensity ratios of the narrow to broad components for the Ly$`\alpha `$ and C IV $`\lambda 1549`$ emission lines show little variance among the three observations or between the Ly$`\alpha `$ and C IV lines themselves (Table 7). A weighted average of the three pointings and both the Ly$`\alpha `$ and C IV emission lines yields $`I_{narrow}/I_{broad}=1.09\pm 0.06`$. In contrast, previous observations by Kriss et al. (1992) through an 18″ aperture found a more dominant contribution from the narrow lines: $`I_{\mathrm{Ly}\alpha ,\mathrm{narrow}}/I_{\mathrm{Ly}\alpha ,\mathrm{broad}}=4.28\pm 0.19`$ and $`I_{\mathrm{CIV},\mathrm{narrow}}/I_{\mathrm{CIV},\mathrm{broad}}=2.19\pm 0.20`$. This is likely due to the smaller aperture used in the Astro-2 observations. These new observations include less of the extended narrow line region. Also, in Kriss et al. (1992) the Ly$`\alpha `$ and C IV narrow-to-broad line ratios are very different, not uniform as in our observations. A possible explanation could be that the C IV narrow-line region is less extended than the Ly$`\alpha `$ narrow-line region.
Fig. 4.— Flux ratios of selected HUT emission lines are shown for aperture location A1 relative to B (A1/B) and A2 relative to B (A2/B). The dotted lines are the weighted mean ratios for A1/B and for A2/B.
Fig. 5.— HUT continuum flux ratios are shown for aperture locations A1 and A2 relative to B.
Table 7
Intensity Ratios of Narrow to Broad Lines

Line    A1    A2    B
Ly$`\alpha `$    $`1.05\pm 0.38`$    $`0.58\pm 0.13`$    $`1.21\pm 0.16`$
C IV    $`1.18\pm 0.33`$    $`1.08\pm 0.17`$    $`1.11\pm 0.10`$
Figure 5 and Table 8 show that the far-UV continuum has a very different distribution from that of the far-UV emission lines. The continuum is concentrated in location B, but not nearly as strongly as the emission lines. Also, unlike the emission lines, the continuum ratios are wavelength dependent. The continuum appears to have a broader spatial distribution covering all three aperture locations at wavelengths longward of 1200 Å. The continuum flux becomes increasingly concentrated in the ionization cone region at wavelengths shortward of 1200 Å.
To quantify these inferences, we fit a simple model of the line and continuum flux surface brightness distributions to the relative intensities as seen at the different aperture locations. The total flux from Astro-1 and the three Astro-2 measurements provide us with four data points. If we model the emission with a Gaussian surface brightness distribution, we have four free parameters that are exactly constrained by our data— total intensity, location (two coordinates), and the full-width at half maximum (FWHM). We determine the parameters by using a $`\chi ^2`$ fit to a series of Gaussian images with the FWHM incremented in steps of $`0.5^{\prime \prime }`$. At each step we vary the location freely and measure the total fluxes inside circular apertures $`12^{\prime \prime }`$ in diameter whose centers were fixed at the relative locations determined by the HUT pointing errors shown in Figure 1.
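Our reconstruction of that procedure, in outline, is sketched below; the aperture offsets in the example are placeholders, not the measured pointing errors of Figure 1.

```python
import numpy as np
from scipy.optimize import minimize

def aperture_flux(x0, y0, fwhm, total, center, radius, n=61):
    """Flux of a circular Gaussian source (centroid x0, y0; given FWHM
    and total flux) inside a circular aperture, by brute-force gridding.
    All coordinates are in arcsec."""
    sig = fwhm / 2.3548
    g = np.linspace(-radius, radius, n)
    xx, yy = np.meshgrid(center[0] + g, center[1] + g)
    inside = (xx - center[0])**2 + (yy - center[1])**2 <= radius**2
    sb = total / (2 * np.pi * sig**2) * \
         np.exp(-0.5 * ((xx - x0)**2 + (yy - y0)**2) / sig**2)
    cell = (g[1] - g[0])**2
    return float((sb * inside).sum() * cell)

def chi2(params, apertures, fluxes, errs):
    """Four free parameters (x0, y0, FWHM, total flux) against the four
    measured fluxes: three HUT pointings plus the Astro-1 aperture."""
    x0, y0, fwhm, total = params
    model = np.array([aperture_flux(x0, y0, fwhm, total, c, r)
                      for c, r in apertures])
    return float((((model - fluxes) / errs) ** 2).sum())

# placeholder (dx, dy) centers and radii in arcsec:
apertures = [((0.0, 0.0), 6.0), ((-2.0, 1.0), 6.0),
             ((3.0, 2.0), 6.0), ((0.0, 0.0), 9.0)]
# res = minimize(chi2, x0=[1.0, 0.0, 5.0, 1.0], method="Nelder-Mead",
#                args=(apertures, measured_fluxes, measured_errs))
```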
Table 9 gives the best-fit locations relative to the optical nucleus and the sizes for the emission line and continuum flux emitting regions. The near-zero values of $`\chi ^2`$ in our fits result from the zero degrees of freedom: four data points precisely determine four free parameters. Using $`\chi ^2`$ statistics has the added value of permitting us to determine error bars for our results. The quoted error bars are $`1\sigma `$ (assuming $`\mathrm{\Delta }\chi ^2=1`$ relative to $`\chi _{min}^2`$). These errors are appropriate for internal comparisons of relative locations and sizes; however, they do not include the systematic errors of $`0.5^{\prime \prime }`$ in determining the location of the optical nucleus in the HUT acquisition video frames.
The results of the fits roughly correspond to our qualitative conclusions. Since the point-spread function (PSF) of HUT is $`\sim 4^{\prime \prime }`$ (Davidsen et al. (1992)), we see that the emission line region, with FWHM=$`5.5^{\prime \prime }`$, is slightly resolved. In contrast, the far-UV continuum comes from an extended region spanning many arc seconds. At wavelengths longer than 1200 Å, the center of the emission is consistent with that of the optical nucleus. At shorter wavelengths, the peak of the emission shifts in the direction of the emission line region, and its size becomes comparable to that of the emission line region.
The relative offsets of the different regions are a bit more puzzling. We see that the center of the emission line region lies $`1.5^{\prime \prime }`$ east of the long wavelength UV continuum, in the direction of the center of aperture B and the ionization cone. The puzzling aspect is the $`0.7^{\prime \prime }`$ offset to the south. Given the geometry of the ionization cone, we would have expected a slight shift to the north. The puzzle is most likely explained by a more detailed consideration of the actual morphology. As one can see in Fig. 6, the northern edge of aperture A1 lies near the starburst knots visible in the HST and UIT images. Some flux from these knots falls in the aperture due to the broad PSF and pointing jitter. This contributes to the large size inferred for the continuum regions at wavelengths longward of 1200 Å, and also shifts the fitted centroid to the north. This conclusion is made clearer through a detailed comparison to the HST images that we discuss in the next section.
## 5. Comparison to HST and UIT Images
To reference the coarse spatial resolution of our far-UV 1-D aperture spectra to higher resolution, longer wavelength images, we use archival HST WFPC2 images of NGC 1068 obtained by H. Ford, as reduced and analyzed by Dressel et al. (1997). We also compare our results to the far-UV image obtained by Neff et al. (1994) using UIT on the Astro-1 mission. Table 10 summarizes nine images covering a range of emission lines and continuum bands from the far-UV to the visible.
To compare the HUT spectra to the HST and UIT images, the images must be registered on a common coordinate system, convolved to the HUT spatial resolution, and integrated over the spectrograph entrance aperture. We start with the F218W ultraviolet continuum image as this overlaps the long wavelength end of the HUT bandpass and is devoid of strong emission lines. As noted in the last section, the HUT PSF has FWHM $`\sim 4^{\prime \prime }`$. To obtain a more precise estimate specifically for the NGC 1068 observations that also takes into account the pointing jitter, we convolved the F218W image with a series of Gaussians, incrementing the FWHM in steps of $`0.5^{\prime \prime }`$. As in our fits in the last section, we measured total fluxes inside circular apertures $`12^{\prime \prime }`$ in diameter corresponding to the HUT aperture locations. By comparing the intensity ratios at the three locations measured in the convolved HST image to those in the HUT spectra in the 1674–1842 Å continuum region, we are able to determine the best-fit registration for the HUT apertures on the HST images as well as the best matching PSF. The best-fit Gaussian has FWHM=$`3.5^{\prime \prime }\pm 0.5^{\prime \prime }`$. The registration places the peak of the UV emission in the F218W image at a position relative to the optical nucleus as determined from the HUT video frames of $`\mathrm{\Delta }\alpha =+0.23^{\prime \prime }\pm 0.14^{\prime \prime }`$, $`\mathrm{\Delta }\delta =-0.36^{\prime \prime }\pm 0.14^{\prime \prime }`$. Note that this reverses the situation encountered in the last section, where the fit to the HUT data alone placed the centroid of the 1800 Å continuum flux north of the optical nucleus. Figure 6 shows the HUT aperture locations determined in this process superimposed on the F218W image of NGC 1068. By examining Fig. 6b, where the HST image is convolved with the HUT PSF, one can see that the starburst knots northwest of the nuclear region contribute a significant amount of flux to aperture A1. This is the likely explanation for the width inferred from the simple Gaussian fit in the last section as well as the bias to the north for the centroid of that fit.
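Schematically, the convolve-and-measure step can be expressed as follows (our sketch; the pixel scale, aperture offsets, and PSF width are inputs one would take from the real data):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hut_aperture_fluxes(image, pixscale, offsets, fwhm, ap_diameter=12.0):
    """Convolve an HST image to an assumed Gaussian HUT PSF (FWHM in
    arcsec) and sum fluxes in circular apertures at the given (dx, dy)
    offsets in arcsec from the image center."""
    sigma_pix = fwhm / 2.3548 / pixscale
    smooth = gaussian_filter(image, sigma_pix)
    ny, nx = smooth.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    r_pix = 0.5 * ap_diameter / pixscale
    fluxes = []
    for dx, dy in offsets:
        mask = ((xx - cx - dx / pixscale) ** 2 +
                (yy - cy - dy / pixscale) ** 2) <= r_pix ** 2
        fluxes.append(float(smooth[mask].sum()))
    return np.array(fluxes)

# One would then step the FWHM and the (dx, dy) registration over a grid,
# comparing model aperture ratios with the HUT 1674-1842 A continuum ratios.
```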
Table 10
UIT and HST/WFPC2<sup>a</sup> Images of NGC 1068

Name    Notes    Exposure Time ($`s`$)
UIT/B1    1250–2000 Å    1629
WFPC2/F218W    continuum    1200
WFPC2/F336W    continuum    450
WFPC2/F343N    \[Ne V\] $`\lambda 3425`$    900
WFPC2/F375N    \[O II\] $`\lambda 3727`$    900
WFPC2/F502N    \[O III\] $`\lambda 5007`$    450
WFPC2/F547M    continuum    220
WFPC2/F656N    $`\mathrm{H}\alpha `$    450
WFPC2/F791W    continuum    220

<sup>a</sup>HST/WFPC2 images courtesy of Zlatan Tsvetanov.
The flux distribution of the F218W image also provides a good fit to the 1450 Å region flux in the HUT data. With a FWHM of $`5.0^{\prime \prime }\pm 0.5^{\prime \prime }`$, we obtain $`\chi ^2=0.07`$. The F218W spatial distribution, however, gives a significantly worse fit to the short wavelength fluxes in the 1004–1020 Å band. Here we obtain $`\chi ^2=2.80`$ for a best-fit FWHM of $`2.0^{\prime \prime }\pm 0.5^{\prime \prime }`$. This is consistent with the results of our simpler, single Gaussian model that showed the short-wavelength radiation arising in a more compact region— the flux distribution in the F218W image has a greater spatial extent than is seen through the HUT apertures at 1000 Å. The F218W spatial distribution is a very poor match to the emission-line flux distribution— $`\chi ^2=113.36`$ for the best fit. This rules out any substantial contribution of the northwest starburst knots to the far-UV line emission seen with HUT.
If we do a fit to optimize the registration and resolution of the \[O III\] image, we obtain $`\chi ^2=0.03`$ for FWHM=$`5.0\pm 0.4`$ arc sec and an offset relative to the position of the F218W image of $`\mathrm{\Delta }\alpha =+1.40^{\prime \prime }\pm 0.10^{\prime \prime }`$, $`\mathrm{\Delta }\delta =-0.59^{\prime \prime }\pm 0.10^{\prime \prime }`$. Panel (c) of Figure 6 shows the HUT aperture locations superimposed on the \[O III\] image of NGC 1068. These results imply the UV line emission has a broader spatial distribution than that of the \[O III\] $`\lambda 5007`$ emission, and it is offset to the northeast.
The UIT image is described by Neff et al. (1994). We obtained the six $`2048\times 2048`$ images made using the far-UV detector through the B1 Sr$`\mathrm{F}_2`$ filter from the National Space Sciences Data Center (NSSDC). These images had exposure times ranging from 22 s to 752 s, with the shortest images providing unsaturated exposures of the brightest portions of the nuclear region. We registered and stacked these images to produce the image shown in the lower right, Fig. 6d. Comparing the UIT image to the convolved HST F218W image, one sees that while similar, the UIT image has distinctly brighter emission to the NE of the nucleus in the ionization-cone region. This is likely due to the fact that the UIT image includes bright line emission in its bandpass from Si iv$`\lambda 1400`$, C iv$`\lambda 1549`$, He ii$`\lambda 1640`$, and C iii\]$`\lambda 1909`$. The enhanced emission to the NE resembles the cone defined by the \[O iii\] $`\lambda 5007`$ emission shown in Fig. 6c.
Finally, we convolved the other seven HST images to the HUT spatial resolution using a Gaussian with FWHM=$`3.5^{\prime \prime }`$ and measured relative intensities by integrating over the three HUT aperture locations using the registration inferred from our fit to the F218W image. The resulting ratios are shown in Figure 7 and listed in Table 11. The continuum flux ratios computed from the HST images continue the trend with wavelength apparent in the HUT data. The observed ratios are consistent with a broader, less concentrated source of radiation becoming more dominant at longer wavelengths. The distribution of line emission viewed with HST is qualitatively, but not precisely, similar to what we see with HUT. The very broad spatial distribution of the low ionization emission lines, H$`\alpha `$ and \[O II\] $`\lambda 3727`$, readily apparent in the HST images, also shows up in the ratios illustrated in Figure 7 and Table 11. The closest resemblance is for the highest ionization lines, \[O III\] $`\lambda 5007`$ and \[Ne V\] $`\lambda 3425`$, but even here we do not get a good match to the flux distribution measured with HUT.
Fig. 7.— Emission line and continuum flux ratios derived from HST images convolved to the HUT resolution with a Gaussian of 3.5<sup>′′</sup> FWHM. Fluxes are measured within 12<sup>′′</sup> circular apertures at the HUT observation locations determined from the fit to the F218W image.
## 6. Discussion
### 6.1. Line Emission and Potential Excitation Mechanisms
The greater extent and larger offset to the northeast of the far-UV line emission relative to the \[O III\] $`\lambda 5007`$ emission are at odds with a photoionized origin for the far-UV emission lines. As argued by Kriss et al. (1992), the strengths of the temperature-sensitive C III $`\lambda 977`$ and N III $`\lambda 991`$ emission lines imply high temperatures for the line-emitting gas. For photoionized excitation, this implies high ionization parameters. Hence, one would expect the C III $`\lambda 977`$ and N III $`\lambda 991`$ emission to come from a more compact region closer to the central ionizing source than the \[O III\] $`\lambda 5007`$ emission. Instead, the offset to the northeast along the direction of the radio jet suggests that interaction of the jet with the line-emitting clouds is more important for producing these emission lines than photoionization by the central engine.
Table 11
HST Line and Continuum Intensity Ratios

Line/Filter    Wavelength    A1/B    A2/B
\[Ne V\]    3430    0.729    0.800
\[O II\]    3736    0.664    0.745
\[O III\]    5012    0.623    0.744
H$`\alpha `$    6562    0.676    0.805
F218W    2189    0.700    0.776
F336W    3342    0.779    0.855
F547M    5476    0.893    0.968
F791W    7926    0.909    0.988
The fluxes measured in the C III $`\lambda 977`$ and N III $`\lambda 991`$ emission lines in our Astro-2 observations are consistent with those seen in Astro-1, given the geometrical uncertainties. These uncertainties, the shorter observation times, and the complicating effects of airglow in the Astro-2 observations actually make the Astro-1 measurements a more reliable measure of the total flux contributing to the temperature-sensitive ratios I($`\lambda 1909`$)/I($`\lambda 977`$) and I($`\lambda 1750`$)/I($`\lambda 991`$). Using the values with 90% confidence errors as given by Kriss et al. (1992), improved atomic physics calculations since then allow us to update the inferred temperature for the line-emitting gas. For I($`\lambda 1909`$)/I($`\lambda 977`$) = $`3.15\pm 0.51`$, the diagnostic diagrams in McKenna et al. (1999) give $`T=21,700_{-1000}^{+1100}\mathrm{K}`$. For I($`\lambda 1750`$)/I($`\lambda 991`$) = $`1.46\pm 0.34`$, we obtain $`T=28,700_{-2400}^{+3800}\mathrm{K}`$. The revisions to the relevant atomic parameters have moved both estimates in opposite directions— the C III temperature is now lower, while the N III value is higher.
As noted by Kriss et al. (1992), both temperature estimates are lower limits since they rely on observed values and do not include any corrections for extinction, which may be substantial. Our fits to the continuum give a low extinction, $`\mathrm{E}(\mathrm{B}-\mathrm{V})=0.02`$, but, as we argue below, this is not very reliable due to the wide variety of actual sources for the continuum light and its subsequent shape. The He ii recombination lines ($`\lambda 4686`$, $`\lambda 1640`$, and $`\lambda 1085`$) give $`\mathrm{E}(\mathrm{B}-\mathrm{V})=0.1`$–$`0.15`$ (Kriss et al. (1992); Koski (1978)), and other indicators suggest the extinction to the line emitting regions may be as high as $`\mathrm{E}(\mathrm{B}-\mathrm{V})=0.4`$ (Malkan & Oke (1983)). At values this high, the inferred C III temperature is 59,300 K, and the N III temperature is $`>100,000`$ K.
The high temperatures implied by the observed C III and N III line ratios are easily matched by the autoionizing shock models computed by Allen, Dopita, & Tsvetanov (1998). The diagnostic diagrams they developed to discriminate between shocks and photoionization show a clear separation between shock and photoionization models when one compares the ratios \[O III\]/H$`\beta `$ and either C III\] $`\lambda 1909`$/C III $`\lambda 977`$ or N III\] $`\lambda 1750`$/N III $`\lambda 991`$. For either ratio, the temperatures are higher than can be produced in typical photoionization models. Neither is a precise match to any of the simple shock models. Given that our large apertures encompass a wealth of unresolved complex spatial structure, it is easy to see that any simple model could easily fail. A clear resolution of this problem will require sub-arcsecond observations in far-UV lines. An experiment optimized for far-UV imaging and long-slit spectroscopy in the 900–1200 Å band could provide the key data for this problem and many others.
Ferguson et al. (1995) argue that higher-than-expected intensities of C III $`\lambda 977`$ and N III $`\lambda 991`$ can be produced in photoionized gas by fluorescent processes. To produce the intensities observed in NGC 1068, however, their models require turbulent velocities of $`>1000\mathrm{km}\mathrm{s}^{-1}`$. While line widths this high are observed in NGC 1068, it is hard to see how one can avoid fast shocks in clouds with such high internal turbulence. The fluorescent enhancement of photoionized emission is formally possible, but it is also certainly an incomplete physical picture of the excitation and emission process under such extreme hydrodynamic conditions. One can also get a velocity spread of $`\sim 1000\mathrm{km}\mathrm{s}^{-1}`$ in an accelerating wind, but one would expect this to occur in a very small spatial region, essentially a point source with a size on the order of the electron scattering mirror. The spatial distribution inferred from our data definitely excludes a point-like source, and so we rule out this alternative.
Another frequently cited indicator of shock excitation is the 1.6435 $`\mu `$m transition of \[Fe II\]. Given the close spatial correlation between the morphology of the radio jet in NGC 1068 and of the \[Fe II\] emission, Blietz et al. (1994) concluded that the emission was produced in gas irradiated by nuclear X-rays, or in shocks excited by an outflow or jet from the nucleus. Examining the morphology of the \[Fe II\] emission in their Figure 1, one can see that it peaks near the optical/IR nucleus, but that a significant fraction of the emission arises from a more extended region $`\sim 1^{\prime \prime }`$ to the northeast along the axis of the radio jet. This is quite similar to the morphology we infer for the far-UV emission lines from our own observations.
### 6.2. The Origin of the Continuum Radiation
The strong wavelength dependence of the morphology of the continuum emission suggests that a variety of sources contribute to the continuum light seen in the HUT spectra. This supports the conclusions of Neff et al. (1994) that “several sources probably contribute to the integrated UV emission of NGC 1068 ,” including dust and electron-scattered nuclear radiation, starlight, and line and continuum emission from the NLR. On spatial scales of 30″ and at wavelengths longward of 1200 Å, starlight is a major contributor. Heckman et al. (1995) discuss the aperture-size dependence of the nuclear UV flux from NGC 1068 and conclude that apertures that include the starburst ring contain substantial amounts of UV flux from starlight. The broader spatial extent we observe for radiation at wavelengths $`>1200`$ Å (see §4) suggests that much of this light has a stellar origin. This stellar flux is directly visible in the spectra from the $`30^{\prime \prime }`$-aperture Astro-1 observations (Kriss et al. (1992)). As discussed in §5 and shown in Fig. 6b, portions of the starburst ring also contribute to the flux seen in the A1 spectrum.
Below 1200 Å the increasing concentration of the continuum emission in the vicinity of the ionization cone can be attributed to a stronger relative contribution of scattered nuclear radiation. This is likely due to two factors that cannot be disentangled at the spatial resolution of our observations. First, some of this light must be due to the electron scattering mirror inferred from the spectropolarimetric observations (Antonucci & Miller (1985); Miller et al. (1991); Code et al. (1993); Antonucci, Hurt, & Miller (1994)). Second, the NE dust cloud visible in the imaging polarimetry of Miller, Goodrich, & Matthews (1992) and in the UV spectropolarimetry of Code et al. (1993) falls squarely within the HUT aperture location B. Since the scattering cross section of dust rises rapidly with shorter wavelengths, the intensity of scattered light from this cloud will form an increasingly large fraction of the signal in aperture B at wavelengths shortward of 1200 Å. The roughly $`5^{\prime \prime }`$ extent we infer for the 1000 Å continuum light is comparable to the $`5^{\prime \prime }`$ distance of the NE cloud from the optical nucleus, and could account entirely for the size we infer from our observations.
To assess the extent to which our observed fluxes are due to scattered radiation from the obscured AGN, we compare our results to spectropolarimetric observations obtained with HST (Antonucci, Hurt, & Miller (1994)) and with the Wisconsin Ultraviolet Photo-Polarimeter Experiment (WUPPE) (Code et al. (1993)). At 1800 Å, the portion of the UV flux attributable to direct reflection of the central engine by the electron scattering mirror is $`3.3\times 10^{-14}\mathrm{ergs}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{\AA }^{-1}`$ (Antonucci, Hurt, & Miller (1994)). The NE dust cloud contributes $`1.1\times 10^{-14}\mathrm{ergs}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{\AA }^{-1}`$ (Code et al. (1993)). The sum accounts for only $`\sim `$80% of the flux we observe at 1800 Å at aperture position B in our observations. At aperture locations A1 and A2, this total accounts for 89% and 73% of the observed flux, respectively. As locations A1 and A2 largely exclude the NE dust cloud, the fraction of the observed UV radiation we see at these locations attributable to scattered AGN radiation is likely even less. Given that the A1 and A2 spectra are significantly redder than the B spectrum, we conclude that diffusely distributed starlight and/or additional scattered (but reddened) nuclear flux is present within the central $`6^{\prime \prime }`$ surrounding the nucleus.
## 7. Conclusions
We have described three spatially distinct HUT observations of NGC 1068. During observation A1 the optical nucleus was near the eastern edge of the aperture while in A2 the nucleus was near the northeastern edge. In the third observation, B, we centered the aperture on the ionization cone with the optical nucleus near the southwestern edge.
The observed fluxes and emission line ratios are consistent with those seen in our Astro-1 observations. The far-UV emission lines are brightest in aperture B in the vicinity of the ionization cone and the radio jet. All the far-UV emission lines have similar spatial distributions, including the high-excitation-temperature lines C III $`\lambda 977`$, N III $`\lambda 991`$, and O VI $`\lambda \lambda 1032,1037`$. We found observation B to be brighter than A1 and A2 in far-UV emission lines by factors of $`\sim 2.4`$ and $`\sim 1.5`$, respectively. From comparison to HST images, we find that the far-UV emission lines have a spatial distribution relative to the \[O III\] $`\lambda 5007`$ emission that is more extended and offset further to the northeast along the direction of the radio jet.
Using updated atomic physics (McKenna et al. (1999)) to re-evaluate the temperatures implied by the ratios I($`\lambda 1909`$)/I($`\lambda 977`$) and I($`\lambda 1750`$)/I($`\lambda 991`$), we find a lower limit from the C III ratio of 21,700 K, and a lower limit of 28,700 K from the N III ratio. Given the high ionization parameter normally required to produce such high temperatures in photoionized gas, we would have expected the spatial distribution inferred from our observations to be more compact and more concentrated near the nucleus. Since it appears more extended and offset along the axis of the radio jet, we conclude that our Astro-2 observations provide further evidence that this emission arises in shock-heated rather than photoionized gas.
The continuum appears to have a broader spatial distribution than the emission lines, but it grows progressively more concentrated in the ionization cone region at wavelengths shorter than 1200 Å. At longer wavelengths an increasing portion of the flux appears to come from starlight. Within aperture location A1, this arises in the starburst knots northwest of the optical nucleus. Within aperture B, $`\sim `$80% of the flux can be attributed to scattered nuclear radiation from the electron scattering mirror and from the NE dust cloud. The remaining flux must come from more diffusely distributed starlight or scattered (but reddened) nuclear radiation within the central $`6^{\prime \prime }`$.
We are grateful to Z. Tsvetanov for providing the reduced HST images and to D. Ubol for help in producing Fig. 6. J. G. appreciates helpful conversations with B. Greeley and R. Telfer. This research was supported in part by NASA contract NAS 5-27000 to the Johns Hopkins University and by NASA Long-Term Space Astrophysics grant NAG 5-3255.
# Emission-line Helium Abundances in Highly Obscured Nebulae
## 1 INTRODUCTION
Helium abundances are fundamental tests of galactic nucleosynthesis (Pagel 1997, Shaver et al. 1983), the relationship between helium and metal abundance (dY/dZ), and its extrapolation to the primordial He/H ratio (Torres-Peimbert et al. 1989; Skillman et al. 1998). The latter is a basic test of Big Bang nucleosynthesis (Olive et al. 1997) and so has a cosmological imperative. Great precision is needed for cosmological tests since the range of He/H produced in different models of the Big Bang is not large (Olive & Steigman 1995). High accuracy is possible since ratios of recombination lines are relatively straightforward to convert into ionic abundance ratios (Peimbert 1975; Benjamin et al. 1999).
Several complications enter when great precision is needed. Collisional excitation of optical helium lines can be important in denser objects (Kingdon & Ferland 1996; Benjamin et al. 1999). Accurate quantal calculations of the relevant rates now exist, and this can be taken into account if the density can be determined. Line transport effects can be significant for higher n radio lines (Goldberg 1966; Brocklehurst 1970). The last, and most serious, complication is the “Ionization Correction Factor” (ICF), the correction for the fact that (unobservable) atomic helium can be present in regions of a nebula where hydrogen is ionized (Osterbrock 1989, Peimbert 1975). Again, other spectroscopic evidence, especially optical forbidden lines (Mathis 1982), can determine the ICF.
Unfortunately it is not possible to convert the ionic He<sup>+</sup>/H<sup>+</sup> ratio into a total He/H abundance without knowing the ICF, and so far only optical lines have been used for this. Radio and IR emission lines make it possible to map He<sup>+</sup>/H<sup>+</sup> across the Galaxy, or within heavily obscured environments such as starburst galaxies (Shaver et al. 1983; Peimbert et al. 1988; Peimbert et al. 1992; Simpson et al. 1995; Afflerbach et al. 1996; 1997; Rubin et al. 1998). It will soon be possible to routinely obtain high-quality mid-IR spectra of heavily obscured objects. Methods of determining the He ICF from IR data alone, when other details of the source (density, geometry, stellar properties) are unknown, are needed. The following investigates several line ratios that help make this possible.
## 2 CALCULATIONS
### 2.1 The Helium ICF
Relative intensities of HeI to HI emission lines are proportional to the ratio He<sup>+</sup>/H<sup>+</sup>, i.e.,
$$\frac{I(HeI)}{I(HI)}=f(n_e,T_e)\frac{He^+}{H^+}$$
(1)
(Osterbrock 1989), where $`f(n_e,T_e)`$ incorporates the microphysics of the line formation process (most recently summarized by Benjamin et al. 1999). The helium ionization correction factor (ICF) accounts for the presence of atomic helium in regions where hydrogen is ionized, and can be defined such that
$$\frac{He}{H}=\frac{He^+}{H^+}(1+\mathrm{ICF}).$$
(2)
The ionic pair on the right-hand side of Equation 1 can be measured with great precision; the problem is to find a method to predict the ICF. The ICF is significant when the stellar continuum is too soft to maintain the helium ionization across the hydrogen Strömgren sphere. We expect the ICF to be positive in most cases, since the He<sup>+</sup> volume is generally smaller than the H<sup>+</sup> volume due to the higher ionization potential of He. It can be negative for a very hard continuum source since the photoionization cross section of atomic helium is larger than that of hydrogen at high energies (Shields 1974). Photoionization models can predict the ionic fractions $`A^+/A`$ for any set of parameters and geometry. Defining a volume mean ionization fraction as $`<A^+/A>`$, the helium ICF is given by
$$\mathrm{ICF}=\frac{H^+}{H}/\frac{He^+}{He}-1.$$
(3)
Note that this allows for the possible presence of He<sup>+2</sup> in addition to He<sup>o</sup>. This will be the y-axis in most of the figures below.
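As a worked example of Equations (2) and (3), with ionization fractions invented purely for illustration:

```python
def helium_icf(h_plus_frac, he_plus_frac):
    """Equation (3): ICF from volume-mean ionization fractions."""
    return h_plus_frac / he_plus_frac - 1.0

def total_he_over_h(he_plus_over_h_plus, icf):
    """Equation (2): total He/H from the measured ionic ratio."""
    return he_plus_over_h_plus * (1.0 + icf)

# illustrative numbers, not from any model in this paper:
icf = helium_icf(0.98, 0.85)        # ~0.153: some He is neutral where H+ exists
print(total_he_over_h(0.083, icf))  # 0.083 * 1.153 ~ 0.096
```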
### 2.2 An Approach
The aim of this paper is to identify line ratios that can indicate the ICF using IR data alone. To be robust, a method must not depend on the details of any particular source. Our approach is to consider photoionization simulations over the broadest possible range of physical parameters, and then look for trends that are present in the resulting data set. We took a similar approach to identify ways to determine the bolometric luminosity using IR data alone (Bottorff et al. 1998).
We use the development version of the plasma simulation code Cloudy, last described by Ferland et al. (1998). The code used here is in excellent agreement with the methods, approximations, and results described in that review.
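A grid of this kind can be driven by a short script. The sketch below is ours, not part of Cloudy: the input-deck commands are written from memory and should be verified against the Hazy documentation for the version used, and `run_deck` merely illustrates piping a deck into the executable.

```python
import itertools
import subprocess

def make_deck(log_T, log_U, log_nH):
    """Assemble a schematic Cloudy input deck (command syntax assumed)."""
    return "\n".join([
        f"blackbody {10**log_T:.0f}",        # ionizing continuum shape
        f"hden {log_nH}",                    # log hydrogen density
        f"ionization parameter {log_U}",     # log U at the illuminated face
        "abundances hii region",             # HII-region gas-phase mixture
    ])

def run_deck(deck, cloudy_exe="cloudy.exe"):
    """Placeholder driver: pipe the deck into the executable and return
    the output for later parsing of line fluxes and mean ionic fractions."""
    proc = subprocess.run([cloudy_exe], input=deck.encode(),
                          capture_output=True)
    return proc.stdout.decode()

grid = itertools.product([4.50, 4.55, 4.60],    # log T_eff
                         [-3.0, -2.5, -2.0],    # log U
                         [2.0, 3.0, 4.0])       # log n_H
# for log_T, log_U, log_nH in grid:
#     out = run_deck(make_deck(log_T, log_U, log_nH))
#     ... extract [Ne III]/[Ne II], [Ar III]/[Ar II], and the ICF ...
```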
We concentrate on HII regions, since these best track the chemical evolution of the ISM. We use Orion abundances (Rubin et al. 1991; Osterbrock et al. 1992, Baldwin et al. 1991). A few of the abundances by number are He/H = 0.095, C/H = 3.0(-4), N/H = 7.0 (-5), O/H=4.0 (-4), Ne/H=6.0 (-5) and Ar/H =3.0 (-6), although all of the lightest thirty elements are included in our calculation.
The assumed composition has little effect on our conclusions. As a test we computed a series of models in which the abundances were decreased to the very low metallicities appropriate for low-mass galaxies. The main effect of the composition on the emission line spectrum is to change the equilibrium electron temperature (see Shields and Kennicutt 1995). The infrared lines we use arise within the ground term of each ion, and so have very small excitation potentials and little temperature dependence (essentially $`T^{-1/2}`$ for each line, so that temperature effects cancel in the ratio). We compare only ratios involving different ionization stages of the same element.
We model a blister geometry, in which the HII region is an illuminated layer on the face of a molecular cloud. Specifically, we assume constant density, composition, and dust-to-gas ratio across this layer. We use the Orion grains and the physics described by Baldwin et al. (1991). These assumptions about the grains introduce dependencies that are small compared to the dispersion of results introduced by the parameters we do vary, which are described next.
### 2.3 Free Parameters
The remaining parameters are the effective temperature and atmosphere of the ionizing star, the gas density, and either the distance of the star to the layer or the ionization parameter $`U`$. The ionization parameter is the dimensionless ratio of the density of hydrogen-ionizing photons at the illuminated face of the cloud, $`\mathrm{\Phi }(H)/c`$, to the hydrogen density $`n_H`$,
$$U=\frac{\mathrm{\Phi }(H)}{n_Hc}.$$
(4)
The gas is more ionized, and the ICF tends to be smaller, for larger values of $`U`$.
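As a small sketch of Equation 4 (Python; the flux and density values below are arbitrary examples, not model inputs from this paper):

```python
# Sketch of Equation 4: the dimensionless ionization parameter U.
C_LIGHT = 2.998e10  # speed of light [cm s^-1]

def ionization_parameter(phi_h, n_h):
    """phi_h: H-ionizing photon flux at the illuminated face [cm^-2 s^-1];
    n_h: hydrogen density [cm^-3]."""
    return phi_h / (n_h * C_LIGHT)

# Arbitrary example: phi_h = 3e10, n_h = 1e2 gives U = 1e-2.
print(ionization_parameter(3.0e10, 1.0e2))
```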
We expect that the helium ICF will be most sensitive to the stellar temperature and $`U`$, since these can selectively change the He<sup>+</sup>/H<sup>+</sup> ratio. The density has little effect on the ionization of helium relative to hydrogen when $`U`$ is held constant, although density does affect the intensities of the IR lines when they are collisionally deexcited.
The ICF is most sharply dependent on the intensity of the helium-ionizing stellar continuum ($`h\nu >1.8`$ Ryd) relative to the hydrogen ionizing continuum ($`h\nu >1`$ Ryd). Figure 1 shows the four types of ionizing continua we use here, all at an effective temperature of 35,000 K and normalized to the same number of hydrogen ionizing photons. The first is a simple black body, which we take as a reference. The Atlas LTE plane parallel atmosphere (Kurucz 1991) is considerably softer than the blackbody at helium-ionizing energies. The Mihalas (1972) NLTE plane parallel atmosphere is harder than the blackbody at the highest energies, and depressed at intermediate energies, due to the opacity of its heavy pseudo-element. Finally, the CoStar NLTE extended atmosphere (Schaerer et al. 1996a, b) is the hardest of the set, with a flux equal to or exceeding the blackbody over much of the helium-ionizing continuum. Of these four, the CoStar atmospheres are the only ones that include winds and full NLTE, and so may be the most realistic.
### 2.4 Two Pairs of Line Ratios
The most likely candidates for robust ICF indicators are pairs of IR lines of different ionization stages of the same element. The ionization potentials of both stages must be greater than that of hydrogen to ensure that the lines form in the HII region (where the HeI and HI lines form) and not in the background PDR (CI recombination lines form there; Tielens & Hollenbach 1985). Ideally the element should be abundant (and not sharply depleted in the ISM or HII regions) and the lines in the pair should have similar critical densities (so that the electron density need not be well known).
The most promising possibilities are the two noble gases Ne and Ar. The first ion of each has an $`np^5\,{}^{2}P`$ ground term and ionization potentials of 1.585 Ryd and 1.158 Ryd for Ne and Ar respectively. These produce the lines \[Ne II\] $`\lambda `$ 12.8 $`\mathrm{\mu m}`$ and \[Ar II\] $`\lambda `$ 6.9 $`\mathrm{\mu m}`$ with critical densities ($`\mathrm{log}n`$ cm<sup>-3</sup>) of 5.8 and 5.6. The second ion of each has an $`np^4\,{}^{3}P`$ ground term, ionization potentials of 3.010 Ryd and 2.031 Ryd respectively, with each ion producing a pair of lines: \[NeIII\] $`\lambda \lambda `$ 36 $`\mathrm{\mu m}`$, 15.6 $`\mathrm{\mu m}`$ (critical densities, $`\mathrm{log}n`$, of 4.7 and 5.4) and \[ArIII\] $`\lambda \lambda `$ 9 $`\mathrm{\mu m}`$, 21 $`\mathrm{\mu m}`$ (critical densities 5.5 and 4.7). We expect the \[NeIII\] 15.6 $`\mathrm{\mu m}`$/\[NeII\] 12.8 $`\mathrm{\mu m}`$ and \[ArIII\] 9 $`\mathrm{\mu m}`$/\[Ar II\] 6.9 $`\mathrm{\mu m}`$ ratios to be the best ICF indicators because of the similarities in their critical densities, with the Ar ratio the very best in this respect. The Ne ions have larger ionization potentials than the Ar ions, so the Ne line ratios may be a sharper indicator of the level of helium ionization.
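For reference, the atomic data quoted above can be collected in a small bookkeeping sketch (Python; the numbers are transcribed from the text, and the mismatch measure is simply a convenient way of restating why the Ar pair is best):

```python
# Critical densities (log n, cm^-3) of the two line pairs, as quoted above.
LINE_PAIRS = {
    "[NeIII] 15.6um / [NeII] 12.8um": {"log_ncrit": (5.4, 5.8)},
    "[ArIII] 9um / [ArII] 6.9um": {"log_ncrit": (5.5, 5.6)},
}

def critical_density_mismatch(pair):
    """Smaller mismatch -> weaker density dependence of the ratio."""
    lo, hi = LINE_PAIRS[pair]["log_ncrit"]
    return abs(hi - lo)

# The Ar pair has the closest critical densities (0.1 dex vs 0.4 dex).
print(min(LINE_PAIRS, key=critical_density_mismatch))
```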
## 3 RESULTS
### 3.1 Results for Single Stellar Temperatures
Figure 2 shows representative results for two stellar temperatures, 30,000 K (the upper pair of panels) and 40,000 K (the lower pair). CoStar atmospheres were used. This temperature range is representative of objects with non-trivial He ICFs — stars much hotter than 40,000 K easily sustain the helium ionization, while stars cooler than 30,000 K do not ionize helium at all. The ionization parameter was varied between $`U=10^{-4}`$ and $`10^{-1}`$ to more than encompass the range represented by spectra of observed HII regions. The HII region density was varied between a density low enough to be in the asymptotic low-density limit (10 cm<sup>-3</sup>) and the high density of 10<sup>6</sup> cm<sup>-3</sup>. Most HII regions lie below this high density, and so this range should encompass nearly all objects. The range of densities is broad enough to ensure that the full consequences of collisional deexcitation of the forbidden lines occur in many of the models.
The helium ICF is shown in the left pair of panels. The dependence on the stellar temperature is striking. The ICF ranges between 2 and 20 for the cool star, but is small (and often negative) for the hot star. A negative ICF occurs when the incident continuum is so hard that penetrating high-energy radiation sustains the ionization of He in regions where H is neutral (Shields 1974). The ICF has a sharp dependence on $`U`$ and is nearly independent of the density. This is because the ionization parameter sets the overall ionization of the nebula. The very weak density dependence is due to collisional effects altering the fraction of helium triplet decays to ground; these produce hydrogen-ionizing radiation, and details can slightly change the resulting geometry.
The predicted \[NeIII\] 36 $`\mathrm{\mu m}`$/\[Ne II\] 12.8 $`\mathrm{\mu m}`$ intensity ratios are shown in the right pair of panels. Only one of the four line pairs we consider is shown since the dependencies it exhibits are typical of the others. The line ratio has an overall dependence on $`U`$ that is similar to the ICF — higher $`U`$ produces higher ratios. This is the underlying correlation we will exploit in the following.
Unfortunately the line ratio also has a significant density dependence for $`n>10^4`$ cm<sup>-3</sup>, densities high enough to de-excite one but not both of the lines. The line ratio is density independent at densities substantially lower or higher than the critical densities of both lines. It is clear from Figure 2 that gas with densities $`n\gtrsim 10^4`$ cm<sup>-3</sup> will introduce dispersion in the correlations we show next.
### 3.2 Results for a Wide Range of Stellar Continua
Grids similar to those shown in Figure 2 were computed for all four types of stellar continua shown in Figure 1. A broad range of temperatures was used for each stellar atmosphere — 25,000 K to 50,000 K for the black body, 30,000 K to 50,000 K for CoStar, 30,000 K to 45,000 K for Atlas, and 30,000 K to 40,000 K for Mihalas. The ionization parameter and density were varied over the ranges shown in Figure 2. This resulted in a set of well over 10<sup>4</sup> independent photoionization simulations. The results for \[NeIII\]/\[NeII\] are shown in Figure 3 while Figure 4 shows \[ArIII\]/\[ArII\]. Results are presented with the observable line ratio increasing to the right, and with the helium ICF, the quantity we want to predict, as the dependent variable.
All four of the line ratios show a negative correlation: the helium ICF generally increases as the line ratio decreases below a certain value. The line ratio decreases with decreasing levels of ionization, which in turn can be due to either lower $`U`$ or stellar temperature (Figure 2). This decreased ionization correlates with larger amounts of atomic helium in the HII region, and so a larger He ICF.
The range of densities we consider causes a large part of the scatter that is present in the figures. The line ratio can be small but the ionization of the gas high if the doubly ionized species is collisionally de-excited. This explains why the \[Ar III\] 9 $`\mathrm{\mu m}`$/\[Ar II\] 6.9 $`\mathrm{\mu m}`$ ratio has the least scatter — these lines happen to have nearly identical critical densities. This line ratio could actually be used to deduce pathologically high values of the helium ICF.
For each line pair the ICF is small above some critical value of the ratio. This suggests an observational approach. One could easily identify those objects for which this ratio is exceeded and which therefore, for whatever reason, have a small ICF. For these the helium abundance is nearly equal to the measured He<sup>+</sup>/H<sup>+</sup> ratio. The critical line ratios are greater than unity. In some cases the weaker line might be unobservable, but the lower limit to the ratio is still of value to this approach.
The ICF must be known to much better than 5% accuracy to test Big Bang nucleosynthesis. Figure 5 shows an expanded view of all four line ratios, with a vastly expanded scale for the helium ICF. It is clear that a significant amount of dispersion, at the several percent level, is still present. Note also that accurate determination of line ratios greater than ten may be difficult. It is also clear that the harder continua, the Mihalas and CoStar continua (both plotted as open symbols), generally produce a *negative* ICF. If the CoStar continua are representative of the spectra of windy stars, then negative ICFs could be a major concern — the helium abundance would be overestimated if this were not taken into account.
## 4 Conclusions
* We have identified four IR line pairs that are sensitive to the helium ionization correction factor. These track the ionization of helium because they are formed from adjacent stages of ionization of the same element. These lines could be combined with radio or IR recombination lines to deduce a total He/H abundance ratio.
* For each line pair the helium ICF is smaller than several percent when the ratio is above a certain critical intensity ratio. Below this intensity ratio the helium ICF may still be small in some cases (usually high density), but other information, from other emission lines, would be needed to make progress.
* Three of the line ratio/ICF correlations have large dispersion for a given line ratio when the ICF is significant, due to differences in the critical densities for each line. The exception is the \[ArIII\] 9 $`\mathrm{\mu m}`$/\[ArII\] 6.9 $`\mathrm{\mu m}`$ ratio since these lines happen to have very similar critical densities. For this pair a rough estimate of the ICF can be made from this line ratio alone. In all four cases, the estimated ICF could be made more accurate were other spectroscopic evidence available.
* The predicted continua of the most recent generation of windy stellar atmospheres (CoStar: Schaerer et al. 1996a, b) are sufficiently hard as to produce a *negative* helium ICF. An observational analysis that did not allow for this possibility would overestimate the helium abundance.
GJF thanks CITA for its hospitality during a sabbatical year, and acknowledges support from the Natural Science and Engineering Research Council of Canada through CITA. DRB also acknowledges financial support from NSERC. Research in Nebular Astrophysics at the University of Kentucky is supported by grants from NSF and NASA. We thank the referee for a careful reading of the manuscript.
# Shapes and $`\beta `$-decay in proton rich Ge, Se, Kr and Sr isotopes
## I Introduction
The field devoted to the study of exotic nuclei is nowadays one of the most fruitful in Nuclear Physics. Experimental work on nuclei far from stability is providing a wealth of new information that is a challenge to theory. Of prime importance is to test, on unstable nuclei, the predictions of theoretical models that are trusted for their achievements on stable nuclei.
The interest in exotic nuclei is manifold. First, there is the intrinsic appeal of charting regions of the nuclear chart that remain unexplored. In addition, there are particularly interesting open problems, such as the delimitation of the drip lines, the appearance of new phenomena, absent in stable nuclei, from which we can learn new aspects of nuclear structure, or the decay properties of these radioactive species, which are crucial to understand various phases of stellar evolution . Concerning the last point, nuclear astrophysics is essential to understand the energy generation, the nucleosynthesis, and the abundance of elements in stars. Nuclear astrophysics provides the input (decay properties and cross sections for nuclear reactions of radioactive nuclei) that is needed to model the late phases of stellar life. Since this input cannot be determined experimentally for the extreme conditions of temperature and density that hold in the interior of the star, reliable theoretical calculations for these processes are absolutely necessary.
Reliable predictions of $`\beta `$-decay strength distributions are also necessary. These are needed for the calculation of $`\beta `$-decay half-lives as well as for all kinds of $`\beta `$-delayed processes, such as $`\beta `$-delayed particle emission or $`\beta `$-delayed fission. Since the strength distribution depends on the microscopic structure of the initial and final nuclear wave functions as well as on the interaction that mediates the decay, it can be used to infer information on the nuclear structure or to test different models or approximations. A reliable description of the ground state of the parent nucleus and of the states populated in the daughter nucleus is necessary to obtain a good description of the $`\beta `$-strength distribution; conversely, failures to describe such distributions would indicate that an improvement of the theoretical formalism is needed.
Among the microscopic nuclear models designed to describe the properties of nuclear excitations we can distinguish basically two types of approaches. 1) One is a phenomenological approach, where one takes an empirical mean field and assumes a simple separable residual interaction. In this case the method is severely constrained when applied to exotic nuclei, owing to the empirical choice of the potential well and residual force. Since such models are based on parameters locally fitted to the available data on stable nuclei, their extrapolation to exotic nuclei is at least questionable. 2) The other approach is the selfconsistent approach. Here the consistency of the picture is stressed by using an effective interaction, usually a Skyrme interaction, that successfully describes the ground state properties of nuclei along the periodic table within a Hartree-Fock calculation, and that is also able to describe the excited states from an RPA calculation with residual interactions obtained from the same force. The main difficulty is that the complexity of the calculation increases rapidly with the size of the configuration space, and one has to work within limited spaces for nonseparable forces. The practical advantage of approach 1) is that it is possible to calculate nuclear excitations in very large configuration spaces, since there is no need to diagonalize matrices whose dimensions grow with the size of the configuration space .
One way to combine the good features of both approaches is to construct first the quasiparticle basis selfconsistently from a Hartree-Fock calculation with density-dependent Skyrme forces and pairing correlations in BCS, and then to solve the RPA (or QRPA) equations with a separable residual interaction derived from the same Skyrme force. The separable residual interaction is obtained from the exact particle-hole residual interaction corresponding to the Skyrme force after averaging over the nuclear volume. In this way the consistency (mean field and residual interaction determined from the same effective interaction) and the manageability (the size of the RPA problem does not increase with increasing configuration space) are both exploited. One preserves the reliability of a selfconsistent treatment without losing the capability of using large configuration spaces. This is the framework where our calculations are done.
Our procedure can be viewed as an approximation to the method recently proposed by Van Giai et al. . In Ref. the exact particle-hole residual interaction is first reduced to its Landau-Migdal form and then the RPA matrix is expanded into a finite sum of $`n`$ separable terms.
In a previous paper we applied this method to <sup>74</sup>Kr with the aim of identifying those elements of the theory to which $`\beta `$-decay may be particularly sensitive. We found that the Gamow Teller (GT) strength distribution was especially sensitive to the nuclear shape and RPA correlations, and we also noted the important role played by the two-body effective interaction, as well as by pairing correlations. Therefore, it was concluded that deformation, pairing, and the RPA treatment are ingredients that one cannot avoid in a description of $`\beta `$-decay in this mass region. In this paper we use this knowledge to calculate the decay properties of a series of isotopes that are presently being measured or are considered as candidates for experimental studies . They are proton rich nuclei in the mass region around A=70 (<sup>64,66,68,70</sup>Ge, <sup>68,70,72,74</sup>Se, <sup>72,74,76,78</sup>Kr, <sup>76,78,80,82</sup>Sr), where deformation, including shape coexistence, plays an important role.
The study of these isotope chains is worthwhile for several reasons. First of all, this mass region is characterized by a very rich structure giving rise to a large variety of coexistent nuclear shapes. Thus, this region is a good laboratory to test nuclear structure models. In addition, the study of various isotope chains opens the possibility to distinguish what is general and what is particular in the behaviour of these nuclei. The systematics also allows one to observe whether the agreement with experiment breaks down as we approach the $`N=Z`$ isotopes (<sup>64</sup>Ge, <sup>68</sup>Se, <sup>72</sup>Kr, <sup>76</sup>Sr), which are expected to have some peculiarities because of the $`T=0`$ pairing correlations . Another interesting point to discuss is whether the strength distributions of $`\beta `$-decay can be used to extract information on the nuclear shape, since clear differences in these distributions could appear depending on the shape of the parent nucleus. It would be interesting to find the most favorable cases for this purpose. Finally, since this mass region is at the border of, or beyond, the scope of full shell model calculations, predictions for the strength distributions, half-lives, and summed strengths in this mass region obtained from selfconsistent mean field approaches are of special relevance, since they will probably be the most reliable calculations. These results could be used to guide the experimental searches and to compare with other kinds of calculations when available.
The paper is organized as follows. In Section 2 we briefly recall the main aspects of our approach and establish our choice for the force and pairing gap parameters. In Section 3 we present the results obtained for the energy distribution of the Gamow Teller strength in those isotopes, as well as integrated quantities that are especially relevant because they can be measured directly, such as half-lives or the total Gamow Teller strength contained within the $`Q_{EC}`$ energy window, which in these proton rich nuclei is quite large. Finally, in Section 4 we point out some final conclusions and remarks.
## II Summary of the Theory
In this section we summarize briefly the theory involved in the microscopic calculations presented in the next Sections. More details can be found in Refs. . Our method consists of a selfconsistent formalism based on a deformed Hartree-Fock (HF) mean field obtained with a Skyrme interaction, including pairing correlations in the BCS approximation. The single particle energies, wave functions, and occupations are generated from this mean field. We add to the mean field a spin-isospin residual interaction with a coupling strength derived by averaging over the nuclear volume the Landau-Migdal force, obtained from the same energy density functional (and Skyrme interaction) as the HF equation. The residual force is therefore consistent with the mean field. The equations of motion are solved in the proton-neutron quasiparticle random phase approximation (QRPA) .
The merits of the density-dependent HF approximation in describing the ground-state properties of both spherical and deformed nuclei are well known . We consider in this paper the force SG2 of Van Giai and Sagawa , although we also show results in some instances obtained with the more traditional Skyrme force Sk3 . We use Sk3 in its density-dependent two-body version, which has better spin-isospin properties than the three-body one . The two forces were designed to fit ground state properties of spherical nuclei and nuclear matter properties but, in addition, the force SG2 gives a good description of Gamow Teller excitations in spherical nuclei . It also provides a good description of spin excitations in deformed nuclei .
For the solution of the HF equations we follow the McMaster procedure, which is based on the formalism developed in Ref. , as described in Ref. . The single-particle wave functions are expanded in terms of the eigenstates of an axially symmetric harmonic oscillator in cylindrical coordinates. We use eleven major shells. The method also includes pairing between like nucleons in the BCS approximation, with fixed gap parameters for protons $`\mathrm{\Delta }_p`$ and neutrons $`\mathrm{\Delta }_n`$, which are determined phenomenologically from the odd-even mass differences through a symmetric five-term formula involving the experimental binding energies .
In the next Section we will discuss shape coexistence. To that end we perform constrained HF calculations with a quadratic quadrupole constraint , and analyze the energy surfaces as a function of deformation. The curves are obtained by minimizing the HF energy under the constraint of holding the nuclear deformation fixed. This is carried out over a large range of deformations. When more than one local minimum occurs for the total energy as a function of deformation, shape coexistence results.
For the study of $`\beta `$-decay the relevant residual interactions are the spin-isospin contact forces generating the allowed Gamow Teller transitions. Following Bertsch and Tsai , the particle-hole interaction consistent with the HF mean field is obtained as the second derivative of the energy density functional with respect to the one-body density. Neglecting momentum-dependent terms, this gives a local interaction that can be put in the Landau-Migdal form
$$V_{ph}=N_0^{-1}\sum_{\ell =0,1}\left[F_{\ell }+G_{\ell }\,\boldsymbol{\sigma }_1\cdot \boldsymbol{\sigma }_2+\left(F_{\ell }^{\prime }+G_{\ell }^{\prime }\,\boldsymbol{\sigma }_1\cdot \boldsymbol{\sigma }_2\right)\boldsymbol{\tau }_1\cdot \boldsymbol{\tau }_2\right]\delta \left(\mathbf{r}_1-\mathbf{r}_2\right).$$
(1)
Retaining only the $`\ell =0`$ spin-isospin term and averaging the contact interaction over the nuclear volume, we end up with a separable residual ph interaction
$$V_{GT}=2\chi _{GT}\sum_{K}\left(-1\right)^K\beta _K^{+}\beta _{-K}^{-}$$
(2)
in terms of the Gamow Teller operator $`\beta _K^\pm =\sigma _Kt^\pm \left(K=0,\pm 1\right).`$ The coupling strength is given by
$$\chi _{GT}=\frac{3}{4\pi R^3}\left(-\frac{1}{2}\right)\left\{t_0+\frac{1}{2}k_F^2\left(t_1-t_2\right)+\frac{1}{6}t_3\rho ^{\alpha }\right\}=N_0^{-1}\frac{3G_0^{\prime }}{2\pi R^3}$$
(3)
as a function of the Skyrme parameters $`t_0,t_1,t_2,t_3,\alpha `$, the nuclear radius $`R`$, and the Fermi momentum $`k_F`$.
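As an illustrative sketch of Eq. (3) (Python; the Sk3 parameter values and the choices of $`k_F`$, $`\rho `$, and $`R=1.2A^{1/3}`$ fm are assumptions made for this example):

```python
import math

def chi_gt(t0, t1, t2, t3, alpha, A, kf=1.30, rho=0.145, r0=1.2):
    """Coupling strength of Eq. (3) in MeV; t_i are Skyrme parameters,
    kf the Fermi momentum [fm^-1], rho the density [fm^-3]."""
    R3 = (r0 * A ** (1.0 / 3.0)) ** 3   # R^3 with R = r0 * A^(1/3)
    landau = t0 + 0.5 * kf**2 * (t1 - t2) + t3 * rho**alpha / 6.0
    return -3.0 / (8.0 * math.pi * R3) * landau

# Assumed Sk3 (density-dependent two-body) parameters; the result is
# roughly 26/A MeV, i.e. ~0.37 MeV for A = 70.
print(chi_gt(t0=-1128.75, t1=395.0, t2=-95.0, t3=14000.0, alpha=1.0, A=70))
```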
The proton-neutron QRPA phonon operator for Gamow Teller excitations in even-even nuclei is written as
$$\mathrm{\Gamma }_{\omega _K}^{+}=\sum_{pn}\left[X_{pn}^{\omega _K}\alpha _n^{+}\alpha _{\overline{p}}^{+}-Y_{pn}^{\omega _K}\alpha _{\overline{n}}\alpha _p\right]$$
(4)
where $`\alpha ^+\left(\alpha \right)`$ are quasiparticle creation (annihilation) operators, $`\omega _K`$ are the excitation energies, and $`X_{pn}^{\omega _K},Y_{pn}^{\omega _K}`$ are the forward and backward amplitudes, respectively. The advantages of using a separable residual interaction are well known: the RPA problem can be easily solved no matter how many two-quasiparticle (2qp) configurations are included. The RPA eigenvalues are obtained as the roots of a single secular equation, and the corresponding RPA amplitudes can then be calculated by performing summations over 2qp states. Explicit expressions of the secular equations that we solve for the $`K=0`$ and $`K=1`$ Gamow Teller modes are given in Ref. .
In the intrinsic frame the Gamow Teller $`\beta _K^+`$ strengths connecting the ground state $`0^+`$ and the excited states $`1_{\omega _K}^+`$ are obtained as
$$\left\langle \omega _K\left|\beta _K^{+}\right|0\right\rangle =\sum_{pn}\left(u_nv_pX_{pn}^{\omega _K}+v_nu_pY_{pn}^{\omega _K}\right)\left\langle n\left|\sigma _K\right|p\right\rangle$$
(5)
where the $`v`$'s are the occupation amplitudes $`\left(u^2=1-v^2\right)`$. From the RPA equations it is easy to go back to simpler approximations: the Tamm Dancoff approximation (TDA) is recovered by neglecting all the terms involving the backward amplitudes $`Y`$. The uncorrelated two-quasiparticle excitations are obtained in the limit of zero residual interaction. The Ikeda sum rule is fulfilled in all of these approximations. For each component $`K=0,\pm 1`$, we get
$$\sum_{\omega _K}\left|\left\langle \omega _K\left|\beta _K^{-}\right|0\right\rangle \right|^2-\left|\left\langle \omega _K\left|\beta _K^{+}\right|0\right\rangle \right|^2=N-Z,$$
(6)
and summing over $`K`$, we obtain $`3\left(N-Z\right)`$ as expected.
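As a trivial numerical sketch of Eq. (6) (Python; the strength arrays are toy numbers, not RPA output):

```python
import numpy as np

def ikeda_check(b_minus, b_plus, N, Z, tol=1e-6):
    """Check Eq. (6) for one K component: sum(B-) - sum(B+) = N - Z."""
    return abs(np.sum(b_minus) - np.sum(b_plus) - (N - Z)) < tol

# Toy strengths constructed to satisfy the rule for N - Z = 2:
print(ikeda_check([1.5, 0.8, 0.4], [0.5, 0.2], N=38, Z=36))  # True
```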
In the laboratory frame the transition probability for $`\beta ^+`$ decay from the $`0^+`$ to a $`1_\omega ^+`$ state is given by
$$B_{GT}^{+}\left(\omega \right)=\frac{g_A^2}{4\pi }\left\{\sum_{\omega _0}\left|\left\langle \omega _0\left|\beta _0^{+}\right|0\right\rangle \right|^2\delta \left(\omega -\omega _0\right)+2\sum_{\omega _1}\left|\left\langle \omega _1\left|\beta _1^{+}\right|0\right\rangle \right|^2\delta \left(\omega -\omega _1\right)\right\}.$$
(7)
Finally, the half-lives are obtained from the $`B_{GT}^+`$ strengths within the theoretical energy window $`Q_{EC}`$.
## III Results
### A Ground State Properties
The constrained HF method allows one to obtain the best solution for each value of the mass quadrupole moment $`Q_0`$. In Figs. 1 to 4 we show the HF energy as a function of the mass quadrupole moment for the two interactions SG2 (solid) and Sk3 (dashed) in Ge, Se, Kr, and Sr isotopes, respectively. The best HF solution at each $`Q_0`$ value is obtained by varying the size and deformation parameters of the deformed harmonic oscillator basis containing 286 states (plus their time-reversed partners). One should note that in these figures the origin of the vertical axis varies from plot to plot, but the unit length (distance between ticks) always corresponds to 1 MeV. Tables 1-4 contain the values of the binding energies obtained in the various cases, from which one can deduce the appropriate vertical scale.
As seen in Figs. 1-4, in most cases there are two minima close in energy, indicating shape coexistence. Fig. 1 for Ge-isotopes shows one solution in the prolate sector and one in the oblate sector for all four isotopes studied. The two forces agree in their predictions on the position of the minima, with the only exception of <sup>70</sup>Ge, where Sk3 produces a prolate solution at a larger $`Q_0`$ value than SG2. The energies of the two minima are quite close (less than 1 MeV apart in all cases), indicating a very favorable situation for finding shape coexistence in any of the four Ge-isotopes studied. The similarity among the four isotopes is remarkable. Table 1 contains various ground state properties of these Ge-isotopes for the oblate and prolate solutions of the forces SG2 and Sk3. The first columns contain the pairing gap parameters for neutrons $`\mathrm{\Delta }_n`$ and protons $`\mathrm{\Delta }_p`$ as derived from the experimental masses . Besides the Skyrme force, they are the only input parameters in our calculation. In the next columns we can find the Fermi energies for neutrons $`\lambda _n`$ and protons $`\lambda _p`$, the charge radii $`r_C`$, the charge $`\left(Q_{0,p}\right)`$ and mass $`\left(Q_0\right)`$ quadrupole moments, the quadrupole deformations $`\beta _0`$, the values of $`J^2`$, the cranking moments of inertia $`\mathcal{I}_{cr}`$, the gyromagnetic ratios $`g_R`$, the binding energies $`E_T`$, the coupling constant of the residual interaction $`\chi _{GT}`$, and the $`Q_{EC}`$ values.
Fig. 2 and Table 2 are analogous to Fig. 1 and Table 1 for Se-isotopes. In this case we again see from Fig. 2 the existence of two solutions in each isotope, but now there is a tendency to favor the oblate solution as the ground state. This is true for the four isotopes and with the two forces considered. It is also worth mentioning that with the force SG2 the prolate solution tends to disappear as the number of neutrons increases. When one reaches <sup>74</sup>Se, only an oblate and a spherical solution survive. Fig. 3 and Table 3 contain the results for the Kr-isotopes. Here we still find shape isomerism, but the situation now changes considerably from one isotope to another, as well as from one force to another. <sup>72</sup>Kr exhibits a pronounced oblate ground state shape and a prolate isomer with both forces SG2 and Sk3. The next isotope, <sup>74</sup>Kr, exhibits shape isomerism as well, but its characteristics depend on the force. While SG2 favors an oblate shape, Sk3 favors a prolate one. The situation changes again in <sup>76</sup>Kr, where SG2 clearly indicates a spherical ground state while Sk3 predicts an oblate/prolate coexistence. This is also the case in <sup>78</sup>Kr. Thus, while the two isomers, oblate and prolate, survive in all cases with Sk3, the force SG2 predicts an oblate ground state and a prolate isomer in the $`N=Z`$ isotope <sup>72</sup>Kr. Little by little the oblate ground state collapses into a spherical solution as the number of neutrons increases, and the prolate solution finally disappears. Fig. 4 and Table 4 show the results for Sr-isotopes. We can see in this case that the two forces agree in describing <sup>82</sup>Sr and <sup>80</sup>Sr as spherical, but they differ in <sup>78</sup>Sr and <sup>76</sup>Sr. Sk3 produces a prolate ground state in these two isotopes and a shape isomer which is oblate but almost spherical. On the other hand, SG2 favors a spherical ground state in <sup>78</sup>Sr with a prolate isomer, and an oblate/prolate shape coexistence in <sup>76</sup>Sr.
Numerical comparison of binding energies with experiment shows that the SG2 force gives a small overbinding ($`2\%`$), while the Sk3 force gives a small underbinding ($`0.7\%`$), systematically in all the nuclei considered. Consistently, we find that the nuclear size, as represented by the $`r_C`$ values, is systematically somewhat larger with Sk3 than with SG2, both being in good agreement with the available experimental values. This comparison of binding energies and $`r_C`$ values does not point to any particular difference between the $`N=Z`$ and the $`N>Z`$ even-even isotopes.
Experimental $`\left|\beta _0\right|`$ values, as extracted from $`B(E2)`$ measurements , are also in good agreement with most of our microscopically calculated $`\beta _0`$ values. We would like to recall here that nonzero experimental $`\beta `$ values in spherical nuclei correspond to vibrational excitations rather than to the stable deformations calculated here; thus, the experimental $`\left|\beta _0\right|`$ value for <sup>82</sup>Sr in Table 4 corresponds to a vibrational $`E2`$ transition. The moments of inertia and collective gyromagnetic ratios are given for possible future comparison between theory and experiment. The coupling strengths $`\chi _{GT}`$ are obtained from Eq.(3) using the Sk3 and SG2 Skyrme parameters and $`R=1.2A^{1/3}`$ fm. The $`Q_{EC}`$ values in Tables 1-4 are calculated from our theoretical binding energies,
$$Q_{EC}=m_p-m_n+m_e-\left(\lambda _n+E_n\right)_{(N,Z)}+\left(\lambda _p-E_p\right)_{(N,Z-2)}$$
(8)
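For bookkeeping, Eq. (8) can be sketched as follows (Python; all energies in MeV, and the Fermi and quasiparticle energies below are arbitrary illustrative inputs, not values from Tables 1-4):

```python
# Sketch of Eq. (8): Q_EC from Fermi energies (lambda) and the lowest
# quasiparticle energies (E) of the HF+BCS calculation, in MeV.
M_P, M_N, M_E = 938.272, 939.565, 0.511  # standard nucleon/electron masses

def q_ec(lam_n, e_n, lam_p, e_p):
    """lam_n, e_n taken from the (N,Z) parent; lam_p, e_p from (N,Z-2)."""
    return M_P - M_N + M_E - (lam_n + e_n) + (lam_p - e_p)

# Arbitrary example inputs giving a Q_EC of a few MeV:
print(q_ec(lam_n=-14.0, e_n=1.5, lam_p=-6.0, e_p=1.5))
```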
### B Gamow Teller Strength Distributions
Before commenting on the results obtained for the strength distributions, a discussion concerning the residual interaction is in order. As we have already mentioned, the coupling strength of our spin-isospin residual interaction, $`\chi _{GT}`$, is obtained from the Skyrme parameters (Eq.(3)), and therefore the mean field and residual interaction are consistently derived from the same force without any free parameter left. Nevertheless, one could ask how this coupling strength compares with other values previously used in the literature and how well it describes the position of the experimental Gamow Teller resonance (GTR) as obtained from $`(p,n)`$ reactions. Such comparisons to experiment have been a common method to adjust the coupling strength of the residual spin-isospin force. By this procedure a value of $`\chi _{GT}=23/A`$ MeV was obtained for the coupling strength in Eq.(2). The fit corresponds to the GTR in <sup>208</sup>Pb, which is centered at an excitation energy in the daughter nucleus of 15.5 MeV. The value for $`\chi _{GT}`$ was obtained by using the experimental values for the particle and hole energies, as explained in Ref. , and it would therefore change if one used, instead of those experimental energies, the single particle energies obtained from a selfconsistent mean field calculation, as in our case. That means that one should be careful when using this value for the coupling strength, because it also implies the use of the experimental energies. As soon as one uses a different set of single particle energies, the fitting procedure should be repeated to extract a new value of $`\chi _{GT}`$ able to reproduce the excitation energy of the GTR within the new framework.
It should also be mentioned that the value of the coupling strength that reproduces the position of the GTR in <sup>208</sup>Pb varies if one considers a different mass region. It is known that one needs different values of $`\chi _{GT}`$ to reproduce the GTR in different mass regions. In an attempt to improve the systematics of the dependence of the strength $`\chi _{GT}`$ on the mass number $`A`$, more sophisticated dependences than $`\kappa /A`$ have been tried . A dependence of the type $`\chi _{GT}=\kappa /A^\mu `$ has been adjusted to data in Ca, Zr, and Pb. It has been found that $`\chi _{GT}=5.2/A^{0.7}`$ is able to reproduce those data in a reasonable way. Again, this parametrization would be dependent on the mean field and single particle energies used.
The value we obtain for the coupling strength is 27/A MeV for SG2 and 26/A MeV for Sk3, which are quite close to each other and slightly higher than the value 23/A mentioned above. A similar value, 28/A MeV, was obtained in Ref. to reproduce the systematics of the energy differences between the GTR and the isobaric analog state observed in $`(p,n)`$ reactions. With our value we obtain the position of the GT resonance in <sup>208</sup>Pb at 19 MeV, which is a few MeV larger than experiment. This result was already known for the force SG2. In Ref. it was found, within a TDA calculation with a contact Landau force in <sup>208</sup>Pb, that SG2 gives the GTR at 18 MeV, while the other force considered in that paper (SG1) produces the peak at 21 MeV. In Ref. the resonance was also found at 19 MeV within an RPA calculation with a Skyrme-Landau interaction. Our results from an equivalent separable force confirm those findings. The value of $`\chi _{GT}`$ needed to reproduce the GTR in <sup>208</sup>Pb in our case is $`\chi _{GT}=19/A`$ MeV.
To illustrate further the comparison of the calculated and experimental positions of the GTR, it is interesting to compare the predictions of our approach using the consistent separable residual interaction in the mass region of our interest here. Unfortunately there is not much experimental information available. One exception is the case of the Fe isotopes, which are probably the most extensively studied nuclei in this mass region because of their interest in astrophysics. The isotopes <sup>54,56</sup>Fe have been measured by $`(p,n)`$ and $`(n,p)`$ reactions to obtain the GT<sup>-</sup> and GT<sup>+</sup> strengths, respectively. We can see in Fig. 5 the result of this comparison, where the strength ($`L=0`$ forward-angle cross section or GT strength) has been plotted versus the excitation energy of the daughter nucleus. The agreement with the experimental position of the GTR is quite reasonable, especially if one takes into account that the theoretical calculation has no free parameters. It is also clear that one could improve this agreement by reducing the coupling strength a little.
Experimental information on $`(n,p)`$ reactions is also available for <sup>70,72</sup>Ge , which are much closer to the nuclei of our interest in this paper. Fig. 6 contains this comparison between the experimental $`L=0`$ cross sections and the Gamow Teller strength distribution calculated with the force SG2 in RPA and for the two shapes (prolate and oblate in <sup>70</sup>Ge, spherical and oblate in <sup>72</sup>Ge) that produce HF energy minima. As in the case of the Fe isotopes, the agreement with the experimental excitation energy of the GTR is not bad. The peak in <sup>70</sup>Ge is well reproduced while the peak in <sup>72</sup>Ge is at the correct energy although experimentally it appears as a broad resonance.
It is not the aim of this paper to fit experimental data on GT strength distributions, but rather to provide results obtained from the consistent value of $`\chi _{GT}`$ –as obtained from the Skyrme force– to avoid playing around with free parameters. In any event, the comparison for Fe and Ge discussed above shows that the method gives reasonable results. On the other hand, the strength below the $`Q`$-window, which is the relevant energy region for $`\beta `$-decay, lies in practically all the cases considered here much below the peak of the GTR, and is therefore not influenced directly by its position within a few MeV.
The Gamow Teller $`\beta ^+`$ strength distributions calculated in the selfconsistent HF+RPA scheme with the force SG2 are shown in Figs. 7-10 for the Ge, Se, Kr, and Sr-isotopes, respectively, as a function of the excitation energy of the daughter nucleus. We have folded the calculated GT strengths with Gaussians of $`\mathrm{\Gamma }=1`$ MeV width, converting the discrete spectrum into a continuous curve. In these figures the GT strengths of the various isotopes are compared among themselves, with a different panel for each nuclear shape. In this way one can appreciate the magnitude of the various strengths on the same scale.
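The folding used for Figs. 7-10 can be sketched as follows (Python; whether $`\mathrm{\Gamma }`$ denotes the Gaussian width parameter or the FWHM is an assumption here, and the discrete spectrum is a toy input):

```python
import numpy as np

def fold_strengths(energies, strengths, grid, gamma=1.0):
    """Fold discrete GT strengths with Gaussians of width gamma (MeV),
    returning a continuous curve on 'grid' (MeV)."""
    sigma = gamma  # assumption: gamma is the Gaussian sigma
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    curve = np.zeros_like(grid)
    for e, b in zip(energies, strengths):
        curve += b * norm * np.exp(-0.5 * ((grid - e) / sigma) ** 2)
    return curve

# Toy discrete spectrum, folded onto a 0-20 MeV grid:
grid = np.linspace(0.0, 20.0, 401)
curve = fold_strengths([3.2, 7.5, 11.0], [0.4, 1.2, 0.6], grid)
print(grid[np.argmax(curve)])   # energy of the folded maximum
```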
Fig. 7 shows the GT distributions in Ge-isotopes. If one concentrates first on the comparison of the strengths for a given shape, the first thing to notice is that the main peaks of the strength occur at lower energies as the number of neutrons increases. This is accompanied by a reduction of the strength with increasing neutron number. Now, if we compare the strength distributions of a given isotope obtained from the two shapes, we find that the profiles are in this case quite similar. They are peaked at about the same energy and contain comparable strengths, the oblate ones being a little smaller. This is true for the four Ge-isotopes considered and therefore we can conclude that the Ge-isotopes are not among the best candidates to look for deformation effects based on the GT strength distributions.
Fig. 8 contains the GT strength distributions for the Se-isotopes. In this case we also find a clear reduction of the GT strength with increasing number of neutrons in the oblate and prolate solutions. However, contrary to what happened with the Ge-isotopes, we observe now that the position of the main peaks does not become systematically lower with increasing neutron number; on the contrary, the energy of the main peaks is quite similar for all the isotopes except for the $`N=Z`$ one, which is shifted to higher energies in both the oblate and prolate cases. A comparison between the oblate and prolate strength distributions for a given isotope shows no substantial differences between them. The peaks appear at about the same energy, and only a slightly smaller strength in the oblate case is worth mentioning. It should also be mentioned that for <sup>74</sup>Se the curve shown on the left panel (prolate label) corresponds actually to the spherical solution in Fig. 2, since there is no prolate solution for this nucleus with the SG2 force.
Fig. 9 is the analogous figure for Kr isotopes. Here one should also take into account that for <sup>78</sup>Kr there is a single spherical solution, and that the results shown in the panels labelled oblate and prolate correspond actually to this spherical solution. Similarly, the results under the label oblate for <sup>76</sup>Kr correspond actually to the spherical solution in Fig. 3. In this chain of isotopes the changes are more dramatic, especially in what concerns the oblate-prolate differences. The strength again increases as we approach the $`N=Z`$ isotope, and the position of the bumps is also displaced to higher energies. The important new feature here is the strong difference between the calculated strength distributions obtained for the two different shapes. The most remarkable differences are those between the oblate and prolate solutions in <sup>74</sup>Kr and between the prolate and spherical solutions in <sup>76</sup>Kr. Here we have found firm candidates to study the shapes from their decay properties. Note that the figures corresponding to <sup>74</sup>Kr are slightly different from those shown in Ref. . This is simply due to the different values of $`\chi _{GT}`$ used, which correspond to a different choice of the nuclear radius $`R`$ in Eq. (3). In this paper $`R=1.2A^{1/3}`$ fm.
The strength distributions in Sr isotopes can be seen in Fig. 10. The trend observed within the prolate solutions is similar to the behavior mentioned above: the strength increases and is shifted to higher energies as we approach $`N=Z`$. In the right hand side panel, the strength for <sup>76</sup>Sr corresponds actually to decay from the oblate solution in Fig. 4. For the true spherical cases (<sup>78,80,82</sup>Sr), the strengths are noticeably smaller compared to the deformed shapes. Therefore, this fact can be exploited to study nuclear shapes from $`\beta `$-decay properties, as in the previous case of the Kr isotopes. We will come back to this point when discussing the strengths summed up to the accessible experimental window in the next subsection.
In the next set of figures, Figs. 11-14, we compare the GT strength distributions obtained in RPA, TDA, and in the uncorrelated two-quasiparticle case with the force SG2. The general trend seen in these figures is similar to that observed in our previous work on <sup>74</sup>Kr and can be summarized as follows. Compared to the uncorrelated two-quasiparticle response, RPA produces two types of effects: first, there is a shift of the GT strength to higher energies, due to the repulsive character of the spin-isospin residual interaction, and second, there is a reduction of the total strength. While the shifting effect is already contained in the TDA description, the quenching effect is not.
Fig. 11 shows this comparison among different approximations for the oblate and prolate shapes of the Ge isotopes. We can see explicitly in this figure the two effects just described: the displacement of the strength to higher excitation energies in TDA and RPA with respect to the uncorrelated case, and the suppression of the strength in RPA. We can also study the dependence on deformation of the GT strength distributions in the uncorrelated basis. If we compare the uncorrelated prolate and oblate distributions (dotted lines) for a given isotope, we arrive at the same conclusion as in the discussion of Fig. 7. There is not a strong dependence on deformation for these Ge isotopes, although now some differences become more apparent. For example, there is a first bump at very small energies in all the oblate cases that is almost suppressed in the prolate ones. These bumps are redistributed by the action of the residual force, and a much smoother strength distribution is found in RPA. Nevertheless, there is still a small bump, reminiscent of the peak in the uncorrelated case, that appears at small energies in the oblate cases and that, as we shall see later on, plays an important role because it is a signature of an oblate shape in the parent nucleus that can be identified by measuring the GT strength at low excitation energies below the $`Q_{EC}`$ window. Thus, although the RPA strength distributions are smoothed out in comparison to the more sensitive uncorrelated distributions, there are still traces of that sensitivity, which can be exploited to probe the shape of the nucleus.
The effects of the residual interaction and RPA correlations for Se, Kr, and Sr isotopes are shown in Figs. 12, 13, and 14, respectively. The case of Se is very similar to that of Ge; there are no strong deformation effects. On the contrary, for Kr and Sr isotopes the dependence on deformation of the uncorrelated strength distributions is huge, and this is the origin of the deformation dependence in RPA discussed earlier for these isotopes.
Figs. 15-18 show the dependence of the HF+RPA Gamow Teller strength distributions on the Skyrme interaction used in the calculations. The results are for SG2 (solid line) and Sk3 (dashed line) in all the isotopes considered. In Fig. 15 for Ge isotopes we can see that there is almost no difference in going from one interaction to another, and thus the conclusions have a general validity. Fig. 16 for Se isotopes shows the same characteristics. The profiles obtained with both interactions are quite similar. The largest discrepancies occur in the prolate solutions of <sup>74</sup>Se and <sup>72</sup>Se, but this is mainly due to the different minima obtained for these two nuclei with the two interactions (see Fig. 2): while Sk3 produces well deformed prolate solutions, SG2 has an almost spherical solution for these two isotopes. Fig. 17 shows the results for Kr isotopes. Here again, the strength distributions obtained with the two interactions are quite similar in the cases where the HF solutions appear at about the same deformation. On the other hand, when the HF solutions occur at different deformations in Sk3 and SG2, the strength distributions obtained from those solutions are also quite different. This is clearly the case in <sup>78</sup>Kr, where Sk3 has two solutions, oblate and prolate, while SG2 has a single spherical solution. This is also true, to a lesser extent, in the oblate solutions of <sup>76</sup>Kr and <sup>74</sup>Kr and in the prolate solution of <sup>72</sup>Kr, which occur at different deformations. The remaining cases have very similar strength distributions, and they also have very similar deformations in the HF solutions (see Fig. 3). In Fig. 18 we can see the results for Sr isotopes. The profiles of the strength distributions are in this case practically the same, in accordance with the situation in Fig. 4, where the HF solutions with the two interactions occur at the same deformations.
### C Half-lives and Summed Strengths
In this subsection we present the results obtained for other quantities of interest, such as the half-lives or the GT strengths summed up to the $`Q_{EC}`$ window $`\left(\mathrm{\Sigma }_{EC}\right)`$. One of the points to discuss in this context is whether there are substantial differences in these quantities depending on the shape of the parent nucleus. In the affirmative case, this implies that experimental data on $`\beta `$-decay can be taken as a signature of the nuclear shape. Thus, it is instructive to see the predictions for the half-lives and summed strengths corresponding to the different stable shapes.
The total half-life $`T_{1/2}`$ for allowed $`\beta `$ decay from the ground state of the parent nucleus is given by summing over all the final states involved in the process
$$T_{1/2}^{-1}=\frac{\kappa ^2}{D}\sum_{\omega }f(Z,\omega )\left|\left\langle 1_\omega ^{+}\left|\beta ^{+}\right|0^{+}\right\rangle \right|^2$$
(9)
The Fermi integrals $`f(Z,\omega )`$ are taken from Ref. . We use $`D=6200`$ s and include effective factors
$$\kappa ^2=\left[\left(g_A/g_V\right)_{eff}\right]^2=\left[0.77\left(g_A/g_V\right)_{free}\right]^2=0.90$$
(10)
to take into account in an effective way the quenching of the GT coupling constant $`g_A`$ in the nuclear medium. Note that the value of the standard quenching factor (0.77) used here is slightly different from the value (0.7) used in Ref. , and consequently the values of $`T_{1/2}`$ have changed accordingly.
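A minimal sketch of Eqs. (9)-(10) (Python; the Fermi integrals are assumed to be supplied from tabulations, the free ratio $`g_A/g_V=1.23`$ is an assumed input reproducing $`\kappa ^2=0.90`$, and the strengths are placeholders):

```python
D = 6200.0                      # s, as in Eq. (9)
KAPPA2 = (0.77 * 1.23) ** 2     # effective factor of Eq. (10), ~0.90

def half_life(strengths, fermi_integrals):
    """T_1/2 from Eq. (9): strengths are GT matrix elements squared and
    fermi_integrals the corresponding f(Z, omega), for states below Q_EC."""
    rate = (KAPPA2 / D) * sum(b * f for b, f in zip(strengths, fermi_integrals))
    return 1.0 / rate

# Placeholder inputs only (not values from Tables 5-7):
print(half_life(strengths=[0.2, 0.5], fermi_integrals=[150.0, 40.0]))  # ~140 s
```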
Tables 5-7 show the results obtained from bare $`2qp`$, TDA, and RPA calculations for the GT strength summed up to an energy cut of 30 MeV (Table 5), for the GT strength summed up to excitation energies below $`Q_{EC}`$ (Table 6), and for the total $`\beta ^+/EC`$ half-life (Table 7). The cut of 30 MeV corresponds to the excitation energy for which the Ikeda sum rule is fulfilled to within a few per thousand. Results are shown for the two Skyrme forces Sk3 and SG2, as well as for the different shapes, oblate (o), prolate (p), or spherical (s), at which the minima occur for each isotope.
One can see in Table 5 that the summed GT strengths up to the energy cut are conserved in going from $`2qp`$ to TDA calculations, but this is no longer true in RPA, where the strengths are reduced. The energy weighted sums, not shown here, have the opposite behavior: the two-quasiparticle values are conserved by RPA, while TDA produces larger energy weighted sums (see also Ref. ). Focusing on the RPA total strengths, one can see in the table that the prolate shape tends to give a larger total strength; this is more noticeable in Kr and Sr isotopes when the other equilibrium shape is spherical. Comparing RPA results for Sk3 and SG2, one can see that the GT summed strengths are practically equal when the predicted shapes have similar $`\beta _0`$ values. This implies that there is a strong correlation between the nuclear shape and the total GT strength within the $`Q_{EC}`$ window $`\left(\mathrm{\Sigma }_{EC}\right)`$.
As a matter of consistency, the GT excitations and $`Q_{EC}`$ values from which we obtain the summed strengths $`\mathrm{\Sigma }_{EC}`$ in Table 6 and the half-lives $`T_{1/2}`$ in Table 7 have been calculated in the parent nucleus for each force, shape, and approach. In particular, for the $`Q_{EC}`$ value in the RPA and TDA approximations the energy of the lowest two-quasiparticle state is replaced by the energy $`\omega `$ of the lowest RPA or TDA state, respectively. In most cases, however, the $`Q_{EC}`$ values in the various approaches differ at most by a few percent.
In Tables 6-7 we do not include stable nuclei (<sup>70</sup>Ge, <sup>74</sup>Se, <sup>78</sup>Kr) or nuclei near stability with very small $`Q_{EC}`$ values (<sup>68</sup>Ge, <sup>72</sup>Se, <sup>82</sup>Sr, having $`Q_{EC}<0.4`$ MeV, see Tables 1-4). Quantities depending on the value of the cut $`Q_{EC}`$, such as those in Tables 6-7, are extremely sensitive to the cut when $`Q_{EC}`$ is very small. In this case only very few low energy excitations contribute to $`\mathrm{\Sigma }_{EC}`$ and $`T_{1/2}`$, and a small change in the $`Q_{EC}`$ value may lead to large variations in these quantities, especially in the half-lives, which can change by orders of magnitude. Generally, when $`Q_{EC}`$ is large enough, small changes in $`Q_{EC}`$ are followed by small changes in $`T_{1/2}`$. This is especially true in the deformed case, where the excitation energies are very much fragmented and appear in an almost continuous distribution. In contrast, in the spherical case the existence of large strengths at well located excitation energies can make the half-lives much more dependent on fine details of the calculations.
Note that while the half-lives in Table 7 already contain the expected quenching factor (see Eqs. (9,10)), the strengths in Table 6 are in units of $`\left[g_A^2/4\pi \right]`$, and therefore a reduction of about 50% is expected in these strengths, due to the effective $`g_A`$ value, before comparison to experiment is made.
A feature common to both tables is that the calculated half-lives $`T_{1/2}`$ (summed strengths $`\mathrm{\Sigma }_{EC}`$) increase (decrease) in going from $`2qp`$ to TDA to RPA. One finds variation factors of the order of ten between 2qp and RPA calculations in $`\mathrm{\Sigma }_{EC}`$ and $`T_{1/2}`$. TDA is in all cases much closer to RPA than to 2qp calculations, but observable differences still appear in some cases. Therefore, from these calculations one concludes that in order to achieve a reliable description of $`\beta `$-decay properties, an RPA calculation must be performed.
Comparison to the experimental half-lives in Table 7 shows that the RPA results agree in general with experiment. The only exceptions are the $`N=Z`$ isotopes of Ge (<sup>64</sup>Ge) and Se (<sup>68</sup>Se), where we overestimate the half-lives by a factor between 2.5 and 5. Even in the cases not included in the table because of their small $`Q_{EC}`$ value (<sup>68</sup>Ge, <sup>72</sup>Se, <sup>82</sup>Sr), we obtain half-lives of the order of days, as in experiment.
In a more detailed analysis we can see that the RPA summed strengths within the $`Q_{EC}`$ window (half-lives) in the Ge isotopes are smaller (larger) for the prolate shapes than for the oblate ones. This is due to the small peak that appears in the distribution of the strength in the oblate cases, absent in the prolate ones (see Figs. 7, 11, 15). This could lead to an observable effect in <sup>64,66</sup>Ge. Although the total strengths (sums up to 30 MeV) are in most cases larger for the prolate shapes, the opposite happens in the sums cut at $`Q_{EC}`$. It is also important to mention that SG2 and Sk3 agree in their predictions for the summed strengths. The results obtained for the Se isotopes do not show any remarkable pattern, and thus the Se isotopes are not good candidates to look for sizeable effects of deformation on the GT strengths.
The cases of the Kr and Sr isotopes deserve special attention. The summed GT strengths up to the $`Q_{EC}`$ window are not conclusive for distinguishing between oblate and prolate shapes in <sup>72,76</sup>Kr. The situation is different in <sup>74</sup>Kr. In this nucleus one obtains the same strength with the two forces in the oblate case, a strength which is much smaller than that obtained in the calculation with the prolate shape. This fact makes <sup>74</sup>Kr a suitable candidate to measure its GT strength and from this measurement to infer the ground state shape. In a similar way, the $`\mathrm{\Sigma }_{EC}`$ values are about the same in the case of <sup>76</sup>Sr, where a coexistence between oblate and prolate shapes is predicted. On the other hand, in the other two cases, <sup>78,80</sup>Sr, where a prolate and spherical shape coexistence appears, $`\mathrm{\Sigma }_{EC}`$ calculated for the prolate shape is clearly larger than the corresponding strength calculated for the spherical shape. Therefore, these two nuclei are again very interesting cases in which to look for these deformation effects on the GT strengths.
### D The particle-particle residual interaction
It has often been claimed (see for instance Ref. and refs. therein) that for a complete description of the $`\beta ^+`$ and $`\beta \beta `$ strengths, the inclusion of the particle-particle (pp) residual interaction is required. Therefore the question may arise as to why this interaction was not included in the present work. The usual way to include this force is in terms of a separable force with a free coupling constant $`\kappa _{pp}`$, which is fitted to the phenomenology. Since the peak of the GTR is almost insensitive to the pp force, $`\kappa _{pp}`$ is usually adjusted to reproduce the half-lives.
One of the features of the pp force is that, being attractive, it pushes the GT strength down to lower energies with increasing values of $`\kappa _{pp}`$. If $`\kappa _{pp}`$ is strong enough, it may happen that the RPA collapses, because the condition that the ground state be stable against the corresponding mode is not fulfilled. Inconsistencies between mean field and residual interactions are a source of problems, particularly when discussing single $`\beta `$ or double $`\beta `$ decays. There is work in progress to include the pp residual force in a consistent way, starting from Hartree-Fock-Bogoliubov calculations, where proton-neutron pairing is included. Until this project is carried out, we have adopted in this work the value $`\kappa _{pp}=0`$, which is consistent with the HF+BCS energy density functional without proton-neutron pairing used here. Furthermore, it has been shown that for small values of $`\kappa _{pp}`$, far from the collapse, the half-lives are nearly independent of the pp force.
Nevertheless, as an illustration we show in this section the effect of including a pp force on the GT strength distributions and half-lives. For that purpose, we introduce in our formalism a separable residual pp force in the same way as was done in Ref. . With separable forces, the QRPA equation for the separable particle-hole and particle-particle interactions reduces to an algebraic equation, which becomes fourth order once the pp force is included. For the solution of this algebraic equation we follow Ref. .
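As a schematic illustration of this step, the quartic secular equation can be handed to a standard polynomial root finder. This is only a sketch: the coefficients below are placeholders, not the actual QRPA matrix elements, which in the real calculation are built from two-quasiparticle energies and the separable coupling strengths.

```python
import numpy as np

# Schematic solution of the quartic secular equation that the QRPA
# reduces to for separable ph + pp forces.  c4..c0 are placeholders:
# in the real calculation they depend on the two-quasiparticle
# energies and on chi (ph strength) and kappa_pp (pp strength).
def qrpa_energies(c4, c3, c2, c1, c0):
    roots = np.roots([c4, c3, c2, c1, c0])
    real = roots[np.abs(roots.imag) < 1e-8].real
    # keep the real, positive excitation energies
    return np.sort(real[real > 0.0])

# toy coefficients just to exercise the routine
print(qrpa_energies(1.0, -10.0, 31.0, -30.0, 0.0))  # -> [2. 3. 5.]
```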
Fig. 19 shows the effect of the residual pp interaction on the GT strength distributions as the value of $`\kappa _{pp}`$ is varied. The figure corresponds to the prolate and oblate solutions of the nucleus <sup>70</sup>Se, in an RPA calculation with the force SG2. As can be seen in this figure, the attractive character of the pp force shifts the strength slightly to lower excitation energies, but the position of the GTR is hardly modified by the inclusion of the pp force. Table 8 lists the total GT strength summed up to 30 MeV, the sums up to $`Q_{EC}\left(_{EC}\right)`$, and the half-lives. The total GT strength is reduced as the value of $`\kappa _{pp}`$ increases, but $`_{EC}`$ increases because of the concentration of the strength at lower energies. As a consequence the half-lives decrease with increasing $`\kappa _{pp}`$. This holds until the collapse of the RPA takes place, which for the case discussed here occurs at about $`\kappa _{pp}=0.10`$ MeV.
We also note that if we were to fix $`\kappa _{pp}`$ to fit the experimental value of the half-life, we would need $`\kappa _{pp}=0.02`$ MeV in the prolate case and $`\kappa _{pp}=0.08`$ MeV in the oblate case, the latter being close to the collapse.
## IV Summary and final remarks
We have investigated shape isomerism and $`\beta `$-decay in several Ge, Se, Kr, and Sr isotopes on the basis of the self-consistent HF+RPA framework with Skyrme forces. This is a well-founded method that has been used successfully to describe quite diverse properties of stable spherical and deformed nuclei throughout the nuclear chart. It has the appealing feature of treating the excitations and the ground state in a self-consistent framework with no free parameters. This feature is particularly desirable for nuclei far from stability, where extrapolations of methods based on local fits are more doubtful. Here we took up the challenge of testing the predictions of this method on the above-mentioned chains, including unstable isotopes. Very reasonable agreement with both ground state and $`\beta `$-decay properties is obtained.
Compared to the uncorrelated two-quasiparticle response, RPA shifts the GT strength to higher energies and reduces the total strength. While the shifting effect is already contained in the TDA description, the quenching effect is not. This effect produces half-lives that are much larger in RPA than in the bare 2qp approach. Inclusion of RPA correlations is clearly necessary for comparison with experiment.
We have found shape isomerism in most of the isotopes studied. The RPA Gamow Teller $`\beta ^+`$ strength distributions depend on the shape (prolate, spherical or oblate) of the parent nucleus. It is important to notice that these results do not depend much on which effective Skyrme interaction (Sk3 or SG2) is used.
The different nuclear shapes lead in some cases to sizeable differences in the observable range of $`\beta ^+`$-decay. We find that <sup>74</sup>Kr, <sup>78</sup>Sr and <sup>80</sup>Sr are particularly interesting cases in which to look experimentally for shape effects in $`\beta ^+`$-decay.
For even-even nuclei, neutron-proton $`T=0`$ and $`T=1`$ pairing is known to be important when $`N=Z`$ . Since our theoretical treatment does not explicitly include neutron-proton $`\left(np\right)`$ pairing, we may expect larger deviations between theory and experiment in the $`N=Z`$ isotopes. The comparison in tables 1-4 of bulk properties like binding energies and r.m.s. radii shows that the agreement between theory and experiment is as good for the $`N=Z`$ as for the $`N=Z+2,Z+4,Z+6`$ isotopes. This allows us to conclude that the effect of $`np`$ pairing correlations in the binding energy is roughly taken into account by the use of the phenomenological gap parameters $`\mathrm{\Delta }_p,\mathrm{\Delta }_n.`$ This could be expected from HFB theory where the total gap satisfies
$`\left|\mathrm{\Delta }_p\right|^2=\left|\mathrm{\Delta }_{pp}\right|^2+\left|\mathrm{\Delta }_{pn}^{T=1}\right|^2+\left|\mathrm{\Delta }_{pn}^{T=0}\right|^2`$
and similarly for neutrons.
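Numerically, this quadrature combination is why a single phenomenological gap can absorb the separate pairing channels; a toy example (the MeV values are illustrative, not fitted):

```python
import math

# HFB quadrature relation for the total proton gap:
# |Delta_p|^2 = |Delta_pp|^2 + |Delta_pn(T=1)|^2 + |Delta_pn(T=0)|^2.
# Illustrative channel gaps in MeV (assumed, not fitted values).
d_pp, d_pn_t1, d_pn_t0 = 1.2, 0.6, 0.8
delta_p = math.sqrt(d_pp**2 + d_pn_t1**2 + d_pn_t0**2)
print(f"effective proton gap = {delta_p:.3f} MeV")  # ~1.562 MeV
```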
$`\beta ^+`$-decay strength functions and half-lives of $`N=Z`$ nuclei are expected to be more sensitive to the explicit inclusion of $`np`$ pairing in the microscopic calculations. Indeed, a look at our RPA results in table 7 shows that for the $`N=Z`$ isotopes of Ge and Se, the experimental half-lives are overestimated by a factor of 3 to 5, depending on the interaction and shape, while fair agreement with experimental half-lives is obtained for the $`N>Z`$ isotopes. Interestingly enough, our RPA results for the $`N=Z`$ isotopes of Kr and Sr are in good agreement with experiment. It will therefore be interesting to see how the inclusion of $`np`$ pairing in our microscopic calculation affects the present results. It will also be interesting to compare our results with future data on $`\beta ^+`$ strengths.
## ACKNOWLEDGMENTS
We are thankful to J. Dukelsky, M.J. García Borge, W. Gelletly, and Ch. Miehé for stimulating comments and discussions. This work was supported by DGICYT (Spain) under contract number PB95/0123. One of us (A.E.) thanks Ministerio de Educación y Cultura (Spain) for support.
Figure captions
Figure 1. Total energy of the Ge isotopes <sup>64,66,68,70</sup>Ge as a function of the mass quadrupole moment $`Q_0`$. The results correspond to a constrained HF+BCS calculation with the Skyrme interaction SG2 (solid line) and Sk3 (dashed line). The distance between two ticks on the vertical axis is always 1 MeV, but the origin is different for each curve.
Figure 2. Same as in Fig. 1 for the Se isotopes <sup>68,70,72,74</sup>Se.
Figure 3. Same as in Fig. 1 for the Kr isotopes <sup>72,74,76,78</sup>Kr.
Figure 4. Same as in Fig. 1 for the Sr isotopes <sup>76,78,80,82</sup>Sr.
Figure 5. $`(p,n)`$ and $`(n,p)`$ $`L=0`$ cross sections in <sup>54,56</sup>Fe compared to theoretical GT strength distributions obtained with the force SG2 in RPA. Experimental data for $`(p,n)`$ and $`(n,p)`$ reactions are from Refs. and , respectively.
Figure 6. $`(n,p)`$ $`L=0`$ cross sections in <sup>70,72</sup>Ge compared with the RPA theoretical GT strength distributions obtained from SG2.
Figure 7. Comparison of the Gamow Teller strength distribution \[$`g_A^2/4\pi `$\] in the Ge isotopes <sup>64,66,68,70</sup>Ge. The results are for the force SG2 in RPA.
Figure 8. Same as in Fig. 7 for the Se isotopes <sup>68,70,72,74</sup>Se.
Figure 9. Same as in Fig. 7 for the Kr isotopes <sup>72,74,76,78</sup>Kr.
Figure 10. Same as in Fig. 7 for the Sr isotopes <sup>76,78,80,82</sup>Sr.
Figure 11. Comparison of RPA (solid line), TDA (dashed line), and bare two-quasiparticle (dotted line) Gamow Teller strength distributions \[$`g_A^2/4\pi `$\] in the Ge isotopes <sup>64,66,68,70</sup>Ge. The results correspond to the force SG2.
Figure 12. Same as in Fig. 11 for the Se isotopes <sup>68,70,72,74</sup>Se.
Figure 13. Same as in Fig. 11 for the Kr isotopes <sup>72,74,76,78</sup>Kr.
Figure 14. Same as in Fig. 11 for the Sr isotopes <sup>76,78,80,82</sup>Sr.
Figure 15. Gamow Teller strength distributions \[$`g_A^2/4\pi `$\] in the Ge isotopes <sup>64,66,68,70</sup>Ge as a function of the excitation energy of the daughter nucleus. The results correspond to the forces SG2 (solid line) and Sk3 (dashed line) in RPA.
Figure 16. Same as in Fig. 15 for the Se isotopes <sup>68,70,72,74</sup>Se.
Figure 17. Same as in Fig. 15 for the Kr isotopes <sup>72,74,76,78</sup>Kr.
Figure 18. Same as in Fig. 15 for the Sr isotopes <sup>76,78,80,82</sup>Sr.
Figure 19. GT strength distributions in <sup>70</sup>Se calculated in RPA with the force SG2 for various values of the coupling strength $`\kappa _{pp}`$ of the particle-particle force.
## 1 INTRODUCTION
The applicability of perturbative QCD to exclusive processes at large momenta is an interesting research problem. The Brodsky-Lepage pQCD-based factorization has been only partially successful. In this approach the process is factorized into a perturbatively calculable hard scattering piece and a soft distribution amplitude. The pion electromagnetic form factor at momentum transfer $`q^2=Q^2`$, for example, can be written as
$$F_\pi (Q^2)=\int dx_1dx_2\,\varphi (x_2,Q)H(x_1,x_2,Q)\varphi (x_1,Q)$$
(1)
where $`\varphi (x,Q)`$ are the distribution amplitudes which can be expressed in terms of the pion wave function $`\psi (x,\stackrel{}{k}_T)`$ as
$$\varphi (x,Q)=\int ^Qd^2k_T\,\psi (x,\stackrel{}{k}_T).$$
(2)
Here $`x`$ is the longitudinal momentum fraction and $`\stackrel{}{k}_T`$ the transverse momentum carried by the quark. The factorization is possible provided the external photon momentum $`Q^2`$ is much larger than the intrinsic quark transverse momentum $`k_T^2`$, in which case the $`k_T`$ dependence of the hard scattering $`H`$ can be neglected.
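For orientation, a minimal numerical sketch of the leading-order normalization: with the asymptotic distribution amplitude, Eq. (1) collapses to the familiar asymptotic result $`Q^2F_\pi =16\pi \alpha _s(Q^2)f_\pi ^2`$ (with $`f_\pi 93`$ MeV). The value of $`\alpha _s`$ below is a representative assumption, not a fitted number.

```python
import math

# LO asymptotic prediction from Eq. (1) with the asymptotic
# distribution amplitude: Q^2 F_pi = 16 pi alpha_s f_pi^2.
f_pi = 0.093       # GeV (93 MeV convention)
alpha_s = 0.3      # representative value at a few GeV^2 (assumption)
q2f = 16.0 * math.pi * alpha_s * f_pi**2
print(f"Q^2 F_pi ~ {q2f:.3f} GeV^2")  # ~0.13, below the measured values
```

This small number is the quantitative content of the statement, made below, that asymptotic distribution amplitudes give normalizations below the data.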
The formalism predicts that at large momenta the cross section for exclusive processes, $`d\sigma /dt`$, where $`t`$ is the momentum transfer squared, scales like $`1/t^{n-2}`$ up to logs, where $`n`$ is the total number of elementary partons participating in the process. The underlying reason for the power law is the scale invariance of the fundamental theory. The extra logarithmic dependence is given by QCD scaling violations. The dominant contribution to this scattering arises from the valence quarks, since every additional parton leads to an additional suppression factor of $`1/t`$. Physically the scattering probes the short distance part of the hadron wave function. Dominance of the short distance wave function leads to several predictions such as helicity conservation, color transparency etc.
The successes and failures of this scheme are well known. The predicted momentum dependence of exclusive processes, in particular of the hadronic electromagnetic form factors, has generally been found to be in good agreement with data. However, more detailed dynamical predictions, such as helicity conservation in hadron-hadron collisions, fail to agree. The calculation of electromagnetic form factors using this factorization scheme has been criticised by several authors. The basic problem is that the momentum scales of the exchanged gluons tend to become rather small, and the applicability of pQCD becomes doubtful. The normalization of the form factors is largely unknown; use of asymptotic distribution amplitudes tends to give small normalizations compared to data. The form factor magnitudes can be enhanced by using model distribution amplitudes which peak closer to the endpoints, namely $`x\to 0,1`$, which then exacerbates the problem of small internal momentum transfers.
## 2 THE SUDAKOV FORM FACTOR
In order to investigate this problem in more detail, Botts and Sterman and Li and Sterman developed an alternative factorization which does not neglect the $`k_T`$ dependence of the hard scattering. This formalism also makes use of a Sudakov form factor. For the case of the pion form factor the starting point is,
$$F_\pi (Q^2)=\int dx_1dx_2\,d\stackrel{}{k}_{T1}d\stackrel{}{k}_{T2}\,\psi ^{*}(x_2,\stackrel{}{k}_{T2},P_2)H(x_1,x_2,Q^2,\stackrel{}{k}_{T1},\stackrel{}{k}_{T2})\psi (x_1,\stackrel{}{k}_{T1},P_1),$$
(3)
where it is again assumed that the process is factorizable into hard scattering and soft hadronic wave functions $`\psi (x,\stackrel{}{k}_T,P)`$. The calculation is simplified by dropping the $`k_T`$ dependence of the quark propagators in the hard scattering kernel $`H`$, in which case only the combination $`\stackrel{}{k}_{T1}+\stackrel{}{k}_{T2}`$ appears in the calculation. The authors work in configuration space, where this can be written as
$$F_\pi (Q^2)=\int dx_1dx_2\int \frac{d^2\stackrel{}{b}}{(2\pi )^2}\,𝒫(x_2,b,P_2,\mu )\stackrel{~}{H}(x_1,x_2,Q^2,\stackrel{}{b},\mu )𝒫(x_1,b,P_1,\mu ),$$
(4)
where $`𝒫(x,b,P,\mu )`$ and $`\stackrel{~}{H}(x_1,x_2,Q^2,\stackrel{}{b},\mu )`$ are the Fourier transforms of the wave function and hard scattering respectively; $`\stackrel{}{b}`$ is conjugate to $`\stackrel{}{k}_{T1}+\stackrel{}{k}_{T2}`$, $`\mu `$ is the renormalization scale and $`P_1`$, $`P_2`$ are the initial and final momenta of the pion.
Sudakov form factors are obtained by summing the leading and next to leading logarithms using renormalization group (RG) techniques. The wave function at small $`b`$ with a transverse momentum $`k_T`$ cutoff equal to $`\omega =1/b`$ can be approximated by the distribution amplitude $`\varphi (x,1/b)`$. Large $`k_T`$ corrections can be evaluated perturbatively, which result in the Sudakov form factor. The final result is given by:
$`𝒫(x,b,P,\mu )=\mathrm{exp}\left[-s(x,\omega ,Q)-s(1-x,\omega ,Q)-2\int _\omega ^\mu \frac{d\overline{\mu }}{\overline{\mu }}\gamma _q(\alpha _s(\overline{\mu }))\right]\times \varphi (x,1/b)+O(\alpha _s(\omega )).`$ (5)
where $`\gamma _q(\alpha _s)`$ is the quark anomalous dimension. The explicit formula for the function $`s(x,\omega ,\mu )`$ is given in Li and Sterman . Here $`\omega =1/b`$ plays the role of the factorization scale, above which QCD corrections give the perturbative evolution of the wave function $`P`$, and below which QCD corrections are absorbed into the nonperturbative distribution amplitude $`\varphi `$. For the case of the pion, $`1/b`$ is the natural choice for this scale. However as discussed below, for the proton the relevant scale is not obvious and several possibilities exist in the literature.
The final formula for the form factor, after incorporating the renormalization group evolution of the hard scattering from the renormalization scale $`\mu `$ to $`t`$, $`t=\mathrm{max}(\sqrt{x_1x_2}Q,1/b)`$, is given by,
$`F_\pi (Q^2)=16\pi C_F\int _0^1dx_1dx_2\,\varphi (x_1)\varphi (x_2)\int _0^{\mathrm{}}b\,db\,\alpha _s(t)K_0(\sqrt{x_1x_2}\,Qb)\,\mathrm{exp}\left[-S(x_1,x_2,b,Q)\right],`$ (6)
where
$$S(x_1,x_2,b,Q)=\sum _{i=1}^{2}\left[s(x_i,b,Q)+s(1-x_i,b,Q)\right]-4\int _\omega ^t\frac{d\overline{\mu }}{\overline{\mu }}\gamma _q(\alpha _s(\overline{\mu })).$$
The function $`e^S`$ is plotted in fig. 1. It cuts off large $`b`$ regions of the integral and hence the calculation is infrared finite, without needing any arbitrary infrared cutoff such as a gluon mass. At small $`b`$ the function has been set equal to one.
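The $`b`$-integral in Eq. (6) can be evaluated numerically once $`s(x,b,Q)`$ is specified. The sketch below is schematic only: `S_toy` is a placeholder for the true Li-Sterman Sudakov exponent, the asymptotic distribution amplitude and a one-loop $`\alpha _s`$ with three flavours are assumed, and a simple grid quadrature is used, so the numbers printed are illustrative rather than the published curves.

```python
import numpy as np
from scipy.special import k0

f_pi, C_F, Lam = 0.093, 4.0/3.0, 0.2   # GeV units; Lam = Lambda_QCD (assumption)

def alpha_s(mu):
    # one-loop, nf = 3; the maximum() guards the Landau pole
    return 4.0*np.pi/(9.0*np.log(np.maximum(mu, 2*Lam)**2/Lam**2))

def phi_as(x):
    # asymptotic distribution amplitude, sqrt(3) f_pi x (1-x)
    return np.sqrt(3.0)*f_pi*x*(1.0 - x)

def S_toy(x1, x2, b, Q):
    # placeholder Sudakov: grows with ln(Qb), cutting off large b as in Fig. 1
    return 0.5*np.log(np.maximum(Q*b, 1.0))**2

def F_pi(Q, n=60):
    x = (np.arange(n) + 0.5)/n
    b = np.linspace(1e-3, 1.0/Lam, n)
    X1, X2, B = np.meshgrid(x, x, b, indexing="ij")
    t = np.maximum(np.sqrt(X1*X2)*Q, 1.0/B)
    integ = (phi_as(X1)*phi_as(X2)*B*alpha_s(t)
             * k0(np.sqrt(X1*X2)*Q*B)*np.exp(-S_toy(X1, X2, B, Q)))
    dx, db = 1.0/n, b[1] - b[0]
    return 16.0*np.pi*C_F*integ.sum()*dx*dx*db

for Q in (2.0, 4.0, 8.0):
    print(Q, Q**2*F_pi(Q))
```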
The resulting form factor using the asymptotic as well as the CZ distribution amplitude is shown in fig. 2. A remarkable fact is that the correct asymptotic $`Q^2`$ behavior is seen beyond a scale of about $`Q=1`$ GeV, irrespective of the choice of wave function. In contrast to the Brodsky-Lepage factorization, the $`k_T`$ dependence of the hard scattering is not neglected, and hence this $`Q^2`$ dependence does not follow trivially. It is rather a detailed dynamical prediction of the theory and depends on the relative size of the intrinsic $`k_T^2`$ and $`x_1x_2Q^2`$. The prediction is robust, since it is independent of the details of the distribution amplitude used. This simple yet important point justifies the basic idea of the Brodsky-Lepage factorization, namely that $`k_T`$ can be treated as negligible in the hard scattering.
We note that the normalization of the theoretical result falls below the experimental data for both choices of distribution amplitude. However, the large difference between theory and experiment at high momenta should be interpreted with caution, since as emphasized by Sterman and Stoler , there may be large systematic errors in the experimental extraction of the form factor which are not shown in the figure. Further theoretical issues in this extraction have been raised in Ref. .
In any event, the theoretical normalization of the form factor is comparatively murky, because it depends on the details of the distribution amplitude. Furthermore, the leading order pQCD amplitudes that are practical to calculate may not give a very reliable estimate of the normalization. One can investigate this further by considering the transverse separation cutoff ($`b_c`$) dependence of the form factor, which gives an idea of the integration regions important for the calculation. We show the $`b_c`$ dependence in fig. 3, as originally discussed in Ref. . Based on this plot Li and Sterman argue that roughly 50% of the contribution can be regarded as perturbative, since it is obtained from the region where $`\alpha _s/\pi <0.7`$. The observation also implies that higher order contributions in $`\alpha _s`$ are not negligible, and the leading order predictions for the normalization of the form factor cannot be regarded as accurate. The next-to-leading order calculation of the pion form factor leads to the same conclusion.
We are left with the following interesting situation: although the basic Brodsky-Lepage factorization is correct, one may need to go to higher orders in $`\alpha _s`$ in order to obtain an accurate prediction for the form factor normalizations. However, the predicted $`Q^2`$ dependence appears to be quite robust, and independent of theoretical uncertainties such as the choice of distribution amplitude.
## 3 THE PROTON
The improved factorization has also been applied to the proton Dirac form factor $`F_1^p(Q^2)`$. The calculation is considerably more complicated than for the pion. Here also it is necessary to use distribution amplitudes which peak close to the end points in order to obtain the experimental normalization of the form factor. In contrast to the pion, there is no natural choice for the infrared cutoff $`\omega `$ in the Sudakov exponent, due to the presence of three quarks and the resulting three distances.
The Sudakov resummation of large logarithms in $`𝒫`$ leads to
$$𝒫(x_i,𝐛_i,P,\mu )=\mathrm{exp}\left[-\sum _{l=1}^{3}s(x_l,w,Q)-3\int _w^\mu \frac{d\overline{\mu }}{\overline{\mu }}\gamma _q\left(\alpha _s(\overline{\mu })\right)\right]\times \varphi (x_i,w),$$
(7)
where the quark anomalous dimension $`\gamma _q(\alpha _s)=-\alpha _s/\pi `$ in axial gauge governs the RG evolution of $`𝒫`$. The function $`\varphi `$ is the standard proton distribution amplitude. The exponent $`s`$ is given in Ref. .
In equation 7 we use the same infrared parameter $`\omega `$ for all three $`s(x_l,\omega ,Q)`$, $`l=1,2,3`$, as well as in the integrals over the anomalous dimension. Earlier, Li chose to use different infrared cutoffs $`b_l`$ for each exponent $`s`$ and for each integral involving $`\gamma _q`$. As pointed out in Ref. , this choice does not suppress the soft divergences from $`b_l\to 1/\mathrm{\Lambda }`$ completely, where $`\mathrm{\Lambda }`$ is the QCD scale parameter. For example, the divergences from $`b_1\to 1/\mathrm{\Lambda }`$, which appear in $`\varphi (x_i,w)`$ as $`w\to \mathrm{\Lambda }`$, survive as $`x_1\to 0`$, since $`s(x_1,b_1,Q)`$ vanishes while $`s(x_2,b_2,Q)`$ and $`s(x_3,b_3,Q)`$ remain finite. On the other hand, $`w`$ should play the role of the factorization scale, above which QCD corrections give the perturbative evolution of the wave function $`𝒫`$ in Eq. (7), and below which QCD corrections are absorbed into the initial condition $`\varphi `$. It is then not reasonable to choose the cutoffs $`b_l`$ for the Sudakov resummation different from $`w`$.
A modified choice of the cutoffs, $`w=1/b_{max}`$, $`b_{max}=\mathrm{max}(b_l)`$, $`l=1,2,3`$, was proposed in Ref. . This choice was found to suppress the soft enhancements, and the form factor was found to saturate as $`b_c\to 1/\mathrm{\Lambda }`$. The authors also included a model non-perturbative soft wave function in the calculation. Unfortunately, the normalization of the resulting $`Q^4F_1`$ turned out to be less than half that of the data for all the distribution amplitudes explored . Bolz et al then concluded that pQCD is unable to fit the experimental form factor.
Kundu et al reexamined the situation. The group argued that the form factor normalization is sensitive to the precise choice of the infrared cutoff $`w`$. They used $`w=c/b_{max}`$, $`b_{max}=\mathrm{max}(b_l)`$, $`l=1,2,3`$, as the infrared cutoff in the Sudakov exponent, instead of the one used by Bolz et al . The introduction of parameter $`c`$ is natural from the viewpoint of the resummation, since the scale $`cw`$, with $`c`$ of order unity, is as appropriate as $`w`$ . Kundu et al find that the calculation is in good agreement with data using the King-Sachrajda (KS) distribution amplitude and setting $`c=1.14`$.
The choice $`c=1.14`$ can also be motivated physically by considering the proton as a quark-diquark type configuration. The diquark constituents are the two quarks closest to each other in the transverse plane. Let $`d_{\mathrm{typ}}`$ be the distance between the center of mass of the diquark and the remaining third quark. Then the infrared cutoff scale $`\omega `$ can be taken to be $`1/d_{\mathrm{typ}}`$. We choose $`c`$ such that, for a large number of randomly chosen triangles, the average of $`d_{\mathrm{typ}}/b_{max}`$ equals $`1/c`$. Defining $`c`$ in this way gives $`c\approx 1.14`$.
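A hypothetical Monte Carlo estimate of this geometric ratio is sketched below. The sampling measure for the quark positions (uniform in a unit square) is an assumption, since the text only speaks of randomly chosen triangles, so the recovered value of $`c`$ need not match 1.14 exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio():
    q = rng.random((3, 2))   # transverse quark positions; uniform square (assumption)
    d = [(np.linalg.norm(q[a] - q[b]), a, b, c)
         for a, b, c in ((0, 1, 2), (0, 2, 1), (1, 2, 0))]
    b_max = max(x[0] for x in d)
    _, i, j, k = min(d)      # diquark = closest pair; k = remaining quark
    d_typ = np.linalg.norm(0.5*(q[i] + q[j]) - q[k])
    return d_typ/b_max

r = np.mean([ratio() for _ in range(100000)])
print("<d_typ/b_max> =", r, " => c =", 1.0/r)
```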
The results of the calculation using the KS and CZ distribution amplitudes with $`c=1`$ and 1.14 are shown in fig. 4. It is found that with $`c=1.14`$ the KS distribution amplitude gives good agreement with data. The $`b_c`$ dependence of the form factor is shown in fig. 5, which exhibits saturation at about $`b_c=0.8/\mathrm{\Lambda }`$. The results after including a model nonperturbative soft wave function are displayed in fig. 6. Again we find that choosing $`c`$ of order unity gives pQCD calculations in agreement with data. For all choices of the distribution amplitude and parameter $`c`$, independent of whether the model soft wave function is included or not, the $`Q^2`$ dependence of the form factor is in good agreement with data. An analogous situation was found for the pion form factor.
The natural agreement of the $`Q^2`$ dependence of the pQCD calculations with data is in contrast to fits obtained using soft models . In such models the $`Q^2`$ dependence depends on the details of the model wave function, and soft model predictions at high momentum tend to fall more steeply than the experimental data. As also noted for the pion, the approximate power law behaviour in $`Q^2`$ is not directly implied by the factorization; it is a detailed dynamical prediction of the calculation, and could have been obtained only if the intrinsic $`k_T`$ were negligible in the hard scattering kernel. Thus the observation of power-law dependence in the data lends considerable support to the basic factorization of Brodsky-Lepage. Nevertheless, the relatively small magnitude of the internal momentum scales and the sensitive dependence of the final result on the infrared scale $`w`$ indicate that the calculation of normalizations using leading order diagrams is not reliable. While higher order contributions are probably non-negligible, there is every reason to believe that the power-law dependence of the calculations is robust.
We have reviewed the current status of pQCD calculations of hadronic electromagnetic form factors. We argue that the normalization of the form factor cannot be predicted reliably by a leading order calculation in $`\alpha _s`$. Detailed calculations including the soft $`k_T`$ dependence, however, support the basic factorization scheme. One finds the correct asymptotic $`Q^2`$ evolution of the form factors for $`Q`$ as small as 2-3 GeV, independent of the choice of distribution amplitude and other theoretical uncertainties. Hence we conclude that agreement of quark counting scaling predictions is not accidental but is well supported by detailed dynamical calculations.
PJ and BK thank the staff of ICTP, Trieste, for hospitality during a visit where this paper was written. This work was supported by BRNS grant No. DAE/PHY/96152, the National Science Council of the Republic of China Grant No. NSC-87-2112-M-006-018, the Crafoord Foundation, the Helge Ax:son Johnson Foundation, DOE Grant number DE-FGO2-98ER41079, the KU General Research Fund and NSF-K\*STAR Program under the Kansas Institute for Theoretical and Computational Science.
# CERN-TH/99-196 GLAS-PPE/1999-08 hep-ex/9907011 Progress and Problems in QCD — Report from the Hadronic Final States Working Group at DIS99
## 1 THEORY SUMMARY
During the parallel sessions of the Hadronic Final States Working Group about forty talks were presented. Of these, about ten might be classified as theoretical. The issues they discussed ranged from instantons to heavy flavour production to higher order QCD calculations. It is therefore understandably difficult to provide a unified summary of such a wide-ranging collection of interesting topics.
We shall therefore try to highlight here what we consider to be the main points of the various contributions, leaving the task of properly introducing and explaining the subject to the individual summaries. Such summaries will not be cited explicitly in this section, since they can easily be found in these proceedings under the name of the person who gave the presentation.
A common feature can be identified in almost all the talks given. They clearly focus on the need to go beyond standard fixed next-to-leading order (NLO) QCD perturbation theory as the quality of the experimental data demands more accurate theoretical calculations.
G. Salam and H. Jung both described phenomenological studies related to the CCFM equation. This is an evolution equation that goes beyond the so-called “multi-Regge” limit of the BFKL equation, and also tries to implement effects due to colour coherence (via angular ordering) and soft particles. As CCFM is harder to solve than BFKL, the question is how similar the predictions of the two approaches are. Salam argued that, with the inclusion of soft effects, BFKL and CCFM can be shown to lead to identical predictions at the leading log level. Differences at sub-leading level, as well as ambiguities in the implementation of the equations at this level, do however remain. Jung described a practical implementation of the CCFM equation in the program SMALLX and showed that a good phenomenological description of both $`F_2`$ and forward jets data can be achieved.
Small-$`x`$ logarithms are not the only ones appearing in a QCD perturbative expansion. N. Kidonakis described how to take care of the large logs resulting from soft gluon emission when final states are produced near threshold. Resummed expressions for these terms were written down some time ago, but until recently it was still unclear how to treat them so that the final result would contain only the resummation of the perturbative series and no further spurious effects. In the approach that Kidonakis described, the resummed expression is truncated at next-to-next-to-leading order, i.e. one order beyond what is available as a full fixed order calculation. Due to the fairly good convergence properties of the series, such a treatment suffices to bring an improvement over the standard NLO calculation, and at the same time ensures consistency with the perturbative expansion.
One further resummation issue was addressed by S. Kretzer who described how charm effects in parton distribution functions, appearing at fixed order as $`\mathrm{log}(Q/m)`$, can be resummed using the ACOT scheme. He studied both neutral and charged current deep inelastic scattering (DIS). The difference between fixed order and resummed predictions does not seem to be within the reach of present experimental accuracy. It is however likely that such resummed approaches will be more and more important in the future, as experimental precision improves and larger scales are probed.
S. Frixione reviewed the status of theoretical calculations of heavy quark production. At small and moderate transverse momenta, NLO QCD calculations are available and reliable. In the large transverse momentum region, on the other hand, large logs of the form $`\mathrm{log}(p_T/m)`$ develop. Resummation techniques for such terms have been developed in recent years. By making use of the Altarelli-Parisi evolution of a perturbatively calculable heavy quark fragmentation function these logs are resummed to all orders. Inclusion of a non-perturbative parametrization for heavy quark hadronization effects, such as the Peterson fragmentation function, also provides a description of $`D^{}`$ photoproduction data. Frixione did however warn that this resummed calculation should not be used at too small transverse momentum values; a proper combination with the full fixed-order NLO massive calculation is necessary to make it reliable in this region.
A more exotic feature of QCD, namely instantons, which also go beyond perturbation theory, was discussed by F. Schrempp. Such objects, originating from the rich structure of the QCD vacuum, can in principle play an important role in various long distance aspects of QCD. There are also short-distance implications. It was argued in this talk that in DIS at HERA there exists a potential for observing instanton-induced processes otherwise forbidden by usual QCD perturbation theory.
Back on more standard ground, D. Soper and M. Grazzini described efforts to produce calculations in fixed order perturbation theory. Soper described a numerical approach to one-loop computations: rather than evaluating each diagram analytically, all diagrams are properly added and the integrals involved are performed numerically, in such a way that the individual singularities cancel before integrating. The final result is therefore finite. This method has so far been successfully applied to the evaluation of the thrust distribution in $`e^+e^{}`$ collisions, but can in principle be extended to other processes. Grazzini reported on progress in perturbative calculations beyond the one-loop approximation. In particular, he described studies of the collinear limits of three partons, where generalizations of the Altarelli-Parisi splitting vertices appear. Such limits are one of the building blocks for next-to-next-to-leading calculations, the next frontier of perturbative QCD.
## 2 HEAVY QUARK PRODUCTION
Experimental results, both on charm and on bottom production, were presented by the ZEUS and H1 collaborations. Many $`D^{}`$ production data were updated from previous measurements. Heavy quarks can provide particularly interesting tests of perturbative QCD since, due to their large mass acting as a cutoff for infrared singularities, their total production rate can be predicted on rigorous theoretical grounds with no free parameters other than their mass, the parton distribution functions and the strong coupling.
The H1 collaboration showed that a generally good, albeit not perfect, agreement with NLO QCD predictions can be observed . It is also worth noticing that the gluon distribution function in the proton has been extracted from charm data in both photoproduction and DIS. The two results show good agreement with each other and with the indirect determination from $`F_2`$, as shown in Fig. 1.
ZEUS $`D^{}`$ data were shown in a new low photon-proton energy region, 80 GeV $`<W_{\gamma p}<120`$ GeV . As shown in Fig. 2, the experimental data tend to be generally higher than the NLO QCD predictions, especially in the forward (proton) direction. Comparisons with the so-called massless approach show better agreement, but this kind of approach is probably not fully reliable at these low transverse momentum values .
Bottom production data, given in terms of observed leptons coming from semileptonic decays, were also presented by both the ZEUS and H1 collaborations. ZEUS observed electrons, finding a cross section larger by a factor of about 4 than the one predicted by the HERWIG leading order (LO) plus parton shower Monte Carlo program . H1 performed two analyses, both using muons to tag the heavy quark . In these cases, the experimental results are about a factor of 5 and 3 larger than the prediction of a different LO Monte Carlo program, AROMA.
Such large factors between theory and experiment might seem important. However, one should keep in mind both the large errors still present in the experimental determinations and the fact that the theoretical prediction is given here by a leading order Monte Carlo program. It is well known that NLO calculations for heavy quarks greatly increase leading order predictions, usually by at least a factor of 2. However, a comparison with a NLO calculation is unfortunately not yet possible, since experimental results are only quoted for a “visible” cross section for which no full simulation at NLO is so far available. Combining this fact with the experimental uncertainty, it becomes apparent that it is still premature to talk of a discrepancy in the bottom production cross section at HERA. We are of course looking forward to more detailed comparisons between theory and experiment, bottom production being better behaved in QCD perturbation theory than charm, and therefore providing a better test of QCD predictions.
Heavy quark production was also discussed for the hidden flavour case, i.e. heavy quarkonia production. Elastic and inelastic $`J/\psi `$ electroproduction has been analysed by the H1 collaboration . Results from an inclusive sample (elastic plus inelastic) were compared to predictions from the so-called soft colour interaction model as implemented in the Monte Carlo program AROMA. Most of the shapes of the differential distributions can generally be reproduced, while the magnitude of the theoretical predictions is usually too low. Results from an inelastic sample were instead compared to predictions obtained with the non-relativistic QCD (NRQCD) approach to quarkonium production. Such predictions can be seen to be at some variance with the data, both in magnitude and, especially for the rapidity distribution, in shape. It is however known that, especially at such a low scale as the one set by the charm mass, NRQCD predictions can suffer from large uncertainties.
## 3 JETS IN DIS
The use of jet algorithms and the study of jet-related observables continue to be a popular and powerful approach to characterizing the properties of the hadronic final state. The analysis of multi-jet events in DIS has been used to extract the value of the strong coupling constant $`\alpha _s`$ . These and many other important analyses have clearly shown the large potential of QCD studies with jets at HERA, but they have also revealed important limitations. The latter are caused by the difficulties of current QCD Monte Carlo models in describing the data precisely, by the large renormalization scale uncertainties of QCD predictions in NLO in the phase space covered, and by the uncertainties of the proton’s parton density.
In this session various presentations of jet analyses were given, which addressed these issues. Most analyses profited from the large data sample collected in particular during the very successful ’97 data taking period of HERA. Thus, generally the measurements were considerably extended, either into the region of high photon virtuality, $`Q^2`$, or to harder jet structures corresponding to large transverse jet energies, $`E_T^{Breit}`$, in the Breit frame. Three representative jet analyses are discussed in more detail below.
In Ref. , a systematic comparison of various measured jet distributions with model predictions was performed for dijet events at $`Q^2>150`$ GeV<sup>2</sup>. In Fig. 3 the $`x_p`$ distribution determined with the modified Durham algorithm (run in the laboratory frame) is shown for three different ranges of $`Q^2`$. The precision of the data is high, and significant deviations of the QCD Monte Carlo models ARIADNE and LEPTO are observed. In contrast to the MC models, the perturbative QCD predictions in NLO, which are also shown in Fig. 3, describe the measured jet distributions very well. The same conclusions are reached using the factorizable $`k_T`$ algorithm (run in the Breit frame).
It is important to note that both Monte Carlo models have recently been investigated in the context of the HERA Monte Carlo Workshop and were considerably improved. In version 4.10 of ARIADNE, the $`Q^2`$ dependence of dijet events has been modified. In version 6.51 of LEPTO, a new model of soft-colour interactions has been implemented. Given the relative disagreement with the data, further development of the models is desirable.
An excellent description of the data by QCD in NLO is also observed in the preliminary inclusive jet cross sections, $`d^2\sigma _{jet}/dE_TdQ^2`$, in the kinematic region of $`Q^2>150`$ GeV<sup>2</sup> and for transverse jet energy $`E_T^{Breit}>7`$ GeV . In this phase space region both perturbative and non-perturbative uncertainties are expected to be small.
The corresponding dijet cross section measurements were fitted simultaneously with $`F_2`$ determinations to yield the gluon density of the proton . This procedure extends the accessible range of the gluon momentum fraction, $`\xi `$, to larger values with respect to the less direct determinations of the gluon density from the scaling violation of $`F_2`$. The value of the strong coupling constant was assumed to be known in this analysis, but a combined fit of both the gluon density and $`\alpha _s`$ should be possible in the future.
A preliminary measurement of the dijet event rate and the dijet cross section as a function of $`Q^2`$, together with a determination of $`\alpha _s`$, was presented in Ref. . Jets of high transverse momentum $`E_T^{Breit}`$ have been selected at $`Q^2>470`$ GeV<sup>2</sup>. In the region of large $`Q^2`$, corresponding to large parton momentum fractions, the proportion of gluon-initiated scattering processes is minimal. Thus this analysis is less sensitive to the uncertainties in the gluon density and may depend to a lesser extent on the details of parton density extractions from the world data than earlier $`\alpha _s`$ determinations. Also, the renormalization scale uncertainties are (relatively) small, owing both to the large value of $`Q^2`$ and to the high jet transverse energies required. Again, this nicely illustrates the quest for smaller (theoretical) uncertainties by selecting more restrictive but safer phase space regions.
## 4 EVENT SHAPES AND POWER CORRECTIONS
The comparison of exclusive hadronic final state observables with the corresponding perturbative QCD predictions requires an estimation of hadronization effects. Phenomenological models, such as the string or the cluster model implemented in Monte Carlo generators, are primarily used for these estimates. Depending on the observable under study, hadronization corrections may be large, however, and it is difficult to estimate their uncertainty rigorously.
An alternative approach consists in applying analytical power corrections of the form $`1/Q^p`$ when comparing perturbative QCD predictions with measured hadronic distributions. The leading power $`p`$ and the exact form of the power corrections to the mean values of event shape distributions have been calculated for a large number of observables in both $`e^+e^{}`$ annihilation and deep inelastic scattering. The size of the correction depends on the value of an effective universal coupling constant $`\overline{\alpha }_0`$, on the strong coupling constant $`\alpha _s`$ and, of course, on $`Q^2`$.
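For concreteness, a sketch of the standard Dokshitzer-Webber-type $`1/Q`$ shift added to the perturbative mean of an event shape. The observable-dependent coefficient $`a_F`$ is set to one here, and the constants (Milan factor, $`K`$, three active flavours) follow common conventions but differ slightly between papers, so the printed number is illustrative only.

```python
import math

beta0, C_F, M = 9.0, 4.0/3.0, 1.49     # nf = 3; M = Milan factor (approx.)

def power_shift(alpha_s_Q, alpha0, Q, muI=2.0):
    """1/Q power correction to a mean event shape, Dokshitzer-Webber
    form with a_F = 1; muI is the infrared matching scale in GeV."""
    K = 67.0/6.0 - math.pi**2/2.0 - 5.0*3.0/9.0   # assumption: nf = 3
    merge = (alpha0 - alpha_s_Q
             - beta0*alpha_s_Q**2/(2.0*math.pi)
               * (math.log(Q/muI) + K/beta0 + 1.0))
    return 4.0*C_F/math.pi**2 * M * (muI/Q) * merge

# e.g. alpha_s(Q)=0.12, alpha0=0.5 at Q=30 GeV gives a shift of ~0.016
print(power_shift(0.12, 0.5, 30.0))
```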
The results of a two-parameter fit of $`\alpha _s`$ and $`\overline{\alpha }_0`$ to the mean values of various event shape distributions recently measured at HERA are shown in Fig. 5. These results take into account recent theoretical developments: the calculation of two-loop corrections, leading to the so-called Milan factor , and an updated calculation of the coefficient for the jet broadening variable . Also, the experimental precision has been considerably improved with respect to an earlier analysis . The value of $`\overline{\alpha }_0`$ is found to be about $`0.5`$, and it is independent of the variable studied to within 20$`\%`$. The variation of the resulting $`\alpha _s`$ values is very large, however, which strongly suggests that further theoretical studies are needed.
In $`e^+e^{}`$ annihilation similar fits to the mean values (as a function of the centre-of-mass energy $`\sqrt{s}`$) of thrust, wide jet broadening and heavy jet mass yielded much better agreement in $`\alpha _s`$ and similar agreement in $`\alpha _0`$ . Predictions of power corrections to distributions, in contrast to mean values, of event shape observables have also been tested . Fig. 6 shows the wide jet broadening distribution measured at different centre-of-mass energies. Perturbative QCD predictions with either hadronization corrections from Monte Carlo models (full line) or power corrections (dashed line) are fitted to the data and yield fits of similar quality to those without the power corrections but systematically lower values of $`\alpha _s`$. The differences are largest at low centre-of-mass energy where also the applied corrections are sizeable.
In conclusion, the concept of power corrections has stimulated remarkable theoretical activity, often in close collaboration with experimentalists, and power corrections prove to be surprisingly successful. The approach is rather economical in the sense that only one additional universal parameter, $`\alpha _0`$, needs to be determined by experiment. A number of examples have recently been given in which the limitations and difficulties also became apparent.
## 5 FRAGMENTATION IN DIS
In the above jet analyses, the emphasis is placed on accessing the very rare multi-jet events at increasingly large jet $`E_T`$ to minimize soft non-perturbative effects in the subsequent comparison to perturbative QCD predictions. Valuable information on the properties of QCD can also be gained from the study of charged particle production, however, which is obviously strongly influenced by the non-perturbative hadronization phase. A key question is: To what extent do the measured hadrons reflect the underlying parton spectra? These depend on the initial partonic configuration and the subsequent parton cascade. The typical observables are particle rates, momentum distributions of the hadronic final state particles and multi-particle correlation variables. Increasingly refined perturbative predictions for more and more complex observables have been derived within the framework of the Modified Leading Log Approximation (MLLA) . Many of those predictions have been compared with data using the concept of Local Parton-Hadron Duality (LPHD). The hypothesis of LPHD in connection with MLLA relates the average properties of partons and hadrons by means of a simple normalization constant. With these assumptions, the theoretical predictions depend essentially on two parameters only, an effective strong coupling constant determined by the QCD scale $`\mathrm{\Lambda }`$ and an energy cutoff parameter $`Q_0`$. Both have to be determined by measurement.
A recent measurement of scaled momentum distributions is presented in Fig. 7, where the mean value and higher moments of the $`\xi `$ distribution are shown as a function of $`Q^2`$ for different ranges of $`x`$. The variable $`\xi `$ is defined as $`\xi =\mathrm{ln}(1/x_p)=\mathrm{ln}[Q/(2p^{Breit})]`$. Only particles in the current hemisphere of the Breit frame are considered. This makes possible a direct comparison with results from $`e^+e^{}`$ annihilation, which are also included in the figure.
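A small sketch of the moments analysis on toy numbers (not data): given particle momenta in the current hemisphere, one forms $`\xi `$ and its first four moments.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# xi = ln(1/x_p) = ln(Q/(2 p_Breit)) for current-hemisphere particles.
Q = 10.0                                   # GeV (illustrative)
p = np.array([0.4, 0.9, 1.7, 3.1, 4.6])    # toy momenta in GeV, not data
xi = np.log(Q/(2.0*p))
print(xi.mean(), xi.std(), skew(xi), kurtosis(xi))  # mean, width, skewness, kurtosis
```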
The fair agreement observed between the results of DIS and $`e^+e^{}`$ annihilation suggests that the main features of quark fragmentation are universal. While the mean value of the $`\xi `$ distribution as a function of $`Q^2`$ is well described by MLLA predictions, no consistent description of mean, width, skewness and kurtosis can be achieved. The measurements benefit considerably from the large range in $`Q^2`$ that is now covered by HERA. Clearly, further studies, possibly considering mass effects , are needed to understand this disagreement.
Measurements of the properties of particles in the target hemisphere have also been made and the correlation between the mean particle multiplicity in the two hemispheres has been studied .
A measurement of the charged particle $`x_p`$ distribution as a function of $`Q`$ is shown in Fig. 8 for different ranges in $`x_p`$ . QCD calculations in NLO combined with NLO fragmentation functions are compared to the measurement. For large values of $`x_p`$ and with increasing $`Q`$, a significant decrease of the distributions is observed, which is expected from scaling violations in the fragmentation functions due to gluon radiation. The agreement with the prediction is good, except at small values of $`x_p`$ where mass effects, which are not considered in the calculations, are important. A simple power correction ansatz, as proposed in Ref. , results in a surprisingly close description of the data. Detailed calculations of these power corrections have been started and were discussed at this meeting .
## 6 FORWARD JET AND $`\pi ^0`$ PRODUCTION IN DIS
The study of jet and particle production in the very forward (proton) region in DIS is largely motivated by the interest in the parton dynamics at small values of $`x`$. In particular, one would like to access a region where the validity of the BFKL equations could be tested.
Experimentally, measurements close to the edge of the detectors are challenging, but both H1 and ZEUS have nevertheless succeeded in measuring forward jet cross sections. These show a significant rise of the forward jet cross section with decreasing values of $`x`$. The measurements are in striking disagreement with the predictions of Monte Carlo models based on the traditional DGLAP parton showers. Furthermore, QCD predictions in NLO disagree with the data .
Recently, several ways to describe the measurement have been found and were presented at this meeting . These are: the inclusion of a resolved virtual photon contribution in the NLO calculations of Ref. ; the inclusion of a resolved virtual photon component in the Monte Carlo model RAPGAP ; and the inclusion of NLO effects in a recent BFKL calculation by applying higher order consistency conditions . In addition, the authors of Ref. obtained a good description of forward jets (and of $`F_2`$) with a modified version of the Monte Carlo program SMALLX based on the CCFM parton evolution equation.
An important new measurement of forward $`\pi ^0`$ cross sections was presented in . Differential distributions in $`x`$, $`\eta _\pi `$ and $`p_{t,\pi }`$ have been measured for three ranges of $`Q^2`$. Given the difficult phase space region, the precision of the measurement is excellent and compares favourably with the jet measurements. The inclusive $`\pi _0`$ cross sections as a function of $`x`$ are shown in Fig. 9.
Again, RAPGAP with a resolved virtual photon contribution describes the data better than MC models based on DGLAP parton showers such as LEPTO (shown). The best description is obtained by the modified BFKL calculation mentioned above.
In conclusion, significant theoretical progress has been made in understanding the physics of the forward region. The precise measurement of the multi-differential $`\pi ^0`$ distributions will constrain existing and future models considerably.
## 7 REAL PHOTON STRUCTURE
In a hard collision involving an incoming photon, this photon may scatter directly, or it may first fluctuate into a hadronic object. Although it is no longer considered possible to make a complete prediction of the photon’s structure without input from experiment, measurements that are sensitive to the hadronic nature of the photon can nevertheless provide a test of some fundamental hypotheses.
In $`e^+e^{}`$ collisions, the photon’s structure function $`F_2^\gamma `$ is probed in deep inelastic scattering processes where one of the leptons is scattered through a large angle. $`F_2^\gamma `$ is expressed as a function of the fraction of the photon’s energy participating in the scatter, $`x`$, at the resolution scale provided by the square of the momentum transfer at the scattered lepton vertex, $`Q^2`$. Fig. 10 shows the latest world measurements of $`F_2^\gamma (x)/\alpha `$ at medium $`Q^2`$ .
The available photon parton density parametrizations are generally able to describe the data.
At HERA, the partons of the proton directly probe not only the quark density but also the gluon density of the photon. Here the resolution scale of the probe is measured in terms of the transverse momenta of the jets or tracks produced. In Fig. 11 an extraction of the leading order gluon density of the photon is shown . The probing resolution is $`P_T^2=74`$ GeV<sup>2</sup> for the jet analysis and $`P_T^2=38`$ GeV<sup>2</sup> for the track analysis. The two experimental approaches both indicate a rise of the leading order gluon density as $`x_\gamma `$ falls.
At high values of the resolution power it becomes possible to describe the high-$`x`$ quark component of the photon’s structure using quark parton model predictions alone, as shown in Fig. 12 . The experimental constraint on the photon’s structure coming from the $`e^+e^{}`$ experiments, however, grows weak here.
ZEUS has measured dijet cross sections in photoproduction in this high scale region ($`E_{T\text{leading, second}}^{\text{jet}}>14,11`$ GeV) as shown in Fig. 13 .
Next-to-leading order perturbative QCD calculations using current photon parton densities are unable to describe these data when both jets are in the central region, $`0<\eta _{\text{ 1,2}}^{\text{jet}}<1`$. As other uncertainties are expected to be low in this kinematic regime, it would be a good test of our understanding of photon-induced processes to check whether a parton density can be found which allows the ZEUS data to be described while remaining consistent with the available high-$`Q^2`$ $`F_2^\gamma `$ measurements from LEP.
An interesting alternate process, which may reveal information on the photon’s structure, comes from prompt photon production in $`\gamma \gamma `$ and $`\gamma p`$ collisions. Prompt photon cross sections have the potential to be free of theoretical uncertainties arising from hadronization effects. Both the ZEUS and TOPAZ collaborations presented early studies of prompt photon production but a considerable improvement in statistics is necessary before any strong conclusions may be drawn.
## 8 VIRTUAL PHOTON STRUCTURE
It is expected that as a photon’s virtuality increases, it will begin to lack the time to develop a complex hadronic structure.
From the total cross section for the double-tag process, $`e^+e^{}e^+e^{}\gamma ^{}\gamma ^{}e^+e^{}X`$, there is an indication that the hadronic component of the photon’s structure is still evident at sizeable photon virtualities .
ZEUS has measured the ratio of the low-$`x_\gamma `$ dijet cross section to the high-$`x_\gamma `$ dijet cross section as a function of the photon’s virtuality $`Q^2`$ at the probing scale provided by $`E_T^{\text{jets}}>5.5`$ GeV, Fig. 14 .
This ratio is flat for a parton density that does not evolve with $`Q^2`$ (GRV LO) and falling for a parton density that is suppressed with $`Q^2`$ (SaS 1D). Therefore the fall that is observed in the data indicates that the photon’s parton density is suppressed as the photon’s virtuality increases.
H1 presents the effective parton density of the photon, $`\stackrel{~}{f}_\gamma `$, as a function of $`Q^2`$ in Fig. 15 .
The data are consistent with parton densities which fall logarithmically with $`Q^2`$ and are inconsistent with a pure vector meson dominance ansatz for the photon’s structure.
## 9 JET SUBSTRUCTURE IN PHOTON-INDUCED COLLISIONS
The measurement of jet substructure in photon-induced collisions has been used to study universal properties of fragmentation.
Jet shapes measured in deep inelastic scattering and in $`\gamma \gamma `$ collisions are compared in the top row of Fig. 16 .
The comparison is made for measurements of similar $`E_T^{\text{jet}}`$ and a universality of fragmentation in jets is observed. In the bottom plot of this figure a comparison is made between jets in $`\gamma \gamma `$ collisions and in $`\gamma p`$ collisions. The jets from the $`\gamma p`$ collisions are narrower, but this could be due to the slightly larger $`E_T^{\text{jet}}`$ of the $`\gamma p`$ measurement.
Using a clustering algorithm based on the relative transverse momentum of particles, it is possible to resolve subjets within jets in a well-defined manner. The number of subjets will depend upon the value of the resolution parameter, $`y_{\text{cut}}`$, with which one looks into the jet. In Fig. 17 the mean number of subjets at resolution parameter $`y_{\text{cut}}=0.01`$ is shown as a function of jet pseudorapidity .
It is found that $`n_{\text{subjet}}`$ increases as expected from the predominance of gluon jets in the forward region.
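A simplified sketch of exclusive $`k_T`$ subjet counting inside a single jet is given below; the merging scheme and distance conventions here are approximations to the definitions used in the published analyses, chosen only to make the procedure concrete.

```python
import numpy as np

def n_subjets(pt, eta, phi, ycut):
    """Count subjets inside one jet: repeatedly merge the pair with the
    smallest dij = min(pti^2, ptj^2) * DeltaR^2 until every remaining
    dij exceeds ycut * (sum pt)^2.  Simplified sketch only."""
    objs = [(p, e, f) for p, e, f in zip(pt, eta, phi)]
    et2 = sum(pt)**2
    while len(objs) > 1:
        best = None
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                dphi = (objs[i][2] - objs[j][2] + np.pi) % (2*np.pi) - np.pi
                dr2 = (objs[i][1] - objs[j][1])**2 + dphi**2
                dij = min(objs[i][0], objs[j][0])**2 * dr2
                if best is None or dij < best[0]:
                    best = (dij, i, j)
        if best[0] > ycut*et2:
            break
        _, i, j = best
        a, b = objs[i], objs[j]
        w = a[0] + b[0]
        # pt-weighted recombination in (pt, eta, phi): an approximation
        merged = (w, (a[0]*a[1] + b[0]*b[1])/w, (a[0]*a[2] + b[0]*b[2])/w)
        objs = [o for k, o in enumerate(objs) if k not in (i, j)] + [merged]
    return len(objs)

for yc in (1e-3, 1e-2):   # coarser ycut resolves fewer subjets
    print(yc, n_subjets([20.0, 15.0, 5.0], [0.0, 0.1, 0.4],
                        [0.0, 0.05, -0.3], yc))
```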
## 10 INCLUSIVE PHOTONS IN $`p\overline{p}`$ COLLISIONS
Inclusive photon results from the Tevatron were presented, where the CDF and D$`Ø`$ data were seen to be consistent with each other . The CDF data reach lower photon $`p_T`$ values, where the cross section tends to be higher than the theoretical calculations. This shape difference is difficult to explain with current NLO QCD calculations. The best fits are obtained with the $`k_T`$ smearing procedure used to explain the E706 photon data: a $`k_T`$ smearing of about 3.5 GeV is needed to describe the data. The result is shown in Fig. 18.
In the same talk a high statistics measurement of photon-muon production was also presented, which compares well with NLO QCD.
## 11 JET RESULTS FROM THE TEVATRON
Results were presented on subjet multiplicity in quark and gluon jets at D$`Ø`$ . Jets are identified using the $`k_T`$ jet algorithm, which has the advantage of being IR-safe and allows a more direct comparison between theory and measurements. The gluon jet fraction was determined at $`\sqrt{s}=1800`$ GeV and $`\sqrt{s}=620`$ GeV. The results indicate that there are more subjets at $`\sqrt{s}=1800`$ GeV as well as more subjets in gluon jets.
In the joint session with the structure function working group, both CDF and D$`Ø`$ presented results on the inclusive jet cross section . When the CDF run IA results were published, they showed an excess of events at high $`E_T`$ with respect to QCD expectations using particular parton density functions. This excess generated a lot of interest, and explanations ranged from quark substructure to modified parton density functions. The preliminary CDF measurement from the IB run is based on an integrated luminosity of 87.7 pb<sup>-1</sup> and is in agreement with the Run IA measurement. D$`Ø`$ presented recently published results which are consistent with QCD predictions. Improved energy calibrations at D$`Ø`$ allowed them to reduce the systematic errors to 10% at low $`E_T`$ and to about 30% at high $`E_T`$. A comparison of CDF data with D$`Ø`$ data is shown in Fig. 19, where the results are seen to be consistent.
It appears that the behaviour at high $`E_T`$ can be accommodated by enhancing the gluon at high $`x`$ as is done in the CTEQ PDFs. A more sensitive search for quark substructure can be conducted using either the dijet mass distribution or dijet angular distribution and will be discussed later.
A consistency check of $`\alpha _s`$ from jet data was presented by CDF . The technique extracts $`\alpha _s`$ from a third-order equation where the coefficients are calculated assuming a particular PDF and value of $`\alpha _s`$. By varying $`\alpha _s`$ one can check for a consistent solution where the extracted $`\alpha _s`$ equals the input value. The method depends on the choice of PDF since different PDFs result in a different $`\alpha _s`$. The results show the running of $`\alpha _s`$ in one experiment and yield a result consistent with measured $`\alpha _s`$ from other experiments.
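A schematic version of this consistency check, with placeholder coefficients $`c_1,c_2,c_3`$ (in the real analysis these come from the NLO calculation convolved with the chosen PDF):

```python
import numpy as np

def extract_alpha_s(sigma, c1, c2, c3):
    """Solve sigma = c1*a + c2*a^2 + c3*a^3 for a = alpha_s.
    The coefficients here are placeholders, not CDF's actual values."""
    roots = np.roots([c3, c2, c1, -sigma])
    real = roots[np.abs(roots.imag) < 1e-10].real
    cand = real[(real > 0) & (real < 0.5)]       # physical branch
    return cand[0] if len(cand) else None

# consistency check: generate sigma with a_in and verify we recover it
c1, c2, c3, a_in = 10.0, 40.0, 120.0, 0.118
sigma = c1*a_in + c2*a_in**2 + c3*a_in**3
print(extract_alpha_s(sigma, c1, c2, c3))        # -> ~0.118
```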
Both CDF and D$`Ø`$ presented the ratio of the scaled cross sections at centre-of-mass energies of 630 and 1800 GeV as a function of $`x_T`$ . The ratio allows a reduction of both the theoretical and the experimental uncertainties. Above $`x_T=0.1`$ the CDF and D$`Ø`$ measurements agree, while at lower $`x_T`$ values the measurements diverge: the D$`Ø`$ data tend to higher ratio values while the CDF results tend to lower values, as shown in Fig. 20.
D$`Ø`$ also presented the rapidity dependence of the inclusive jet cross section for three different $`\eta `$ bins up to $`\eta <1.5`$. Results were in good agreement with NLO QCD calculations.
Both CDF and D$`Ø`$ presented measurements of the dijet mass distribution. Results from the two experiments are in good agreement in both shape and normalization. D$`Ø`$ has used the measurement to place limits on quark compositeness, as is shown in Fig. 21. Dijet angular distributions provide a sensitive test of new physics and have the advantage that the distributions are less sensitive to the energy measurement uncertainty. Results were presented from CDF and used to place limits on quark compositeness.
D$`Ø`$ presented the differential dijet cross section separately for opposite-side jets and same-side jets, where both jets are required to sit in the same $`\eta `$ bin. Results were compared to the JETRAD calculation using different PDFs. Dijet differential cross sections from CDF were shown where the central jet was used to measure the $`E_T`$ of the event. A second jet is allowed to fall in one of four $`\eta `$ bins, and a quantitative comparison of different PDFs is under way. The differential dijet measurement covers a plane in the $`x`$–$`Q^2`$ space, making it more sensitive to the shape of the cross section determined by different PDFs. The data will provide a useful input to QCD fits in order to determine refined PDFs.
## 12 CONCLUSIONS
At this meeting many beautiful, high-precision experimental results were presented and compared with theoretical predictions. QCD has been clearly established as a successful theory for the description of hard scattering processes. It is now increasingly important to better understand soft non-perturbative phenomena and processes where more than one hard scale plays a part. The further development of power corrections and resummed calculations as well as the calculation of QCD cross sections at next-to-next-to-leading order is under way. This program will eventually lead to an even more stringent comparison of theory and experiment. The development of these concepts should benefit greatly from continued close communication between theorists and experimentalists.
# The fundamental plane of elliptical galaxies with modified Newtonian dynamics
## 1 Introduction
On a phenomenological level, the most successful alternative to cosmic dark matter is the modified Newtonian dynamics (MOND) proposed by Milgrom (1983a). The basic idea is that the deviation from Newtonian gravity or dynamics occurs below a fixed acceleration scale– a proposal which is supported in a general way by the fact that the discrepancy between the classical dynamical mass and the observable mass in astronomical systems does seem to appear at accelerations below $`10^{-8}`$ cm/s<sup>2</sup> (Sanders 1990, McGaugh 1998). The fact that this acceleration scale is comparable to the present value of the Hubble parameter multiplied by the speed of light ($`cH_o`$) suggests a cosmological basis for this phenomenology.
MOND, in a sense, is designed to reproduce flat extended rotation curves of spiral galaxies and a luminosity-rotation velocity relationship of the observed form, $`L\propto v^4`$ (the Tully-Fisher relation). But apart from these aspects which are “built-in”, the prescription also successfully predicts the observed form of galaxy rotation curves from the observed distribution of stars and gas with reasonable values for the mass-to-light ratio of the stellar component (Begeman et al. 1991, Sanders 1996, Sanders & Verheijen 1998). Moreover, MOND predicts that the discrepancy between the Newtonian dynamical mass and the observed mass should be large in low surface brightness galaxies– a prediction subsequently borne out by observations of these systems (McGaugh & de Block, 1997, 1998).
The observational success of MOND is most dramatically evident for spiral galaxies where often the rotation curve can be a rather precise tracer of the radial distribution of the effective gravitational force; for hot stellar systems– elliptical galaxies– the predictions are less precise. Milgrom, in his original papers (1983b), pointed out that MOND implies a mass-velocity dispersion relation for elliptical galaxies of the form $`M\propto \sigma ^4`$. If there were no systematic variation of M/L with mass, this would become the observed Faber-Jackson relation (Faber & Jackson 1976).
In his seminal paper on this subject, Milgrom (1984) calculated the structure of isothermal spheres in the context of MOND and drew several very general conclusions: First, all isothermal spheres, regardless of the degree of anisotropy in the velocity distribution, have finite mass. For a one-dimensional velocity dispersion of 100 to 200 km/s, this mass is inevitably on a galaxy scale. Second, at large radial distance the density decreases as $`r^{-\delta }`$ where $`\delta `$ is in the vicinity of 4. Third, there is an absolute maximum on the mean surface density which is on the order of $`a_o/G`$ where $`a_o`$ is the MOND critical acceleration. For a mass-to-light ratio of three to four in solar units, this would translate into a surface brightness in the V band of 20 to 20.5 mag/(arcsec)<sup>2</sup> which is characteristic of hot stellar systems ranging from massive ellipticals to bulges of spiral galaxies to globular clusters (Corollo et al. 1997). Finally, the mass of an isothermal sphere with a specific anisotropy factor is exactly proportional to $`\sigma ^4`$ where $`\sigma `$ is the space velocity dispersion. All of these conclusions would apply to elliptical galaxies, in so far as these objects can be regarded as isothermal spheres.
Since this work, it has come to light that the global properties of elliptical galaxies comprise a three parameter family (Dressler et al. 1987, Djorgovski & Davis 1987); that is to say, elliptical galaxies lie on a surface in a three-dimensional space defined by the luminosity (L), the central velocity dispersion ($`\sigma _o`$), and the effective radius ($`r_e`$); the mean surface brightness ($`I_e`$) may be substituted for either luminosity or effective radius (i.e., $`L=2\pi I_er_e^2`$). This surface appears as a plane on logarithmic plots and, consequently, has been designated as “the fundamental plane” of elliptical galaxies, with the form $`L\propto \sigma ^ar_e^b`$ where $`a1.5`$ and $`b0.8`$. Because of the small scatter perpendicular to the fundamental plane, this three-parameter relationship supersedes the Faber-Jackson relation as a distance indicator. The usual physical interpretation of the fundamental plane is that these relations result from the traditional virial theorem plus a dependence of mass-to-light ratio on galaxy mass (van Albada et al. 1995); although, one must also assume that elliptical galaxies comprise a near-homologous family.
It is not immediately clear how the fundamental plane can be interpreted in terms of modified dynamics, or, indeed, if the fundamental plane is even consistent with modified dynamics. The effective virial theorem for a system deep in the regime of MOND (low internal accelerations) is of the form $`\sigma ^4\propto M`$, with no length scale appearing. On the face of it, this would imply that hot stellar systems should comprise a two parameter family as suggested by the older Faber-Jackson relation.
The purpose of this paper is to consider the dynamics of elliptical galaxies in terms of MOND, particularly with respect to the relations between global properties. I demonstrate that the MOND isothermal sphere is actually not a good representation of high surface brightness elliptical galaxies because the implied average surface densities are too low. Elliptical galaxies, within the half-light radius (the effective radius), are essentially Newtonian systems with accelerations in excess of the critical MOND acceleration. This suggests that other possible degrees of freedom must be exploited to model elliptical galaxies using modified dynamics.
To reproduce the observed properties of high surface-brightness elliptical galaxies, it is necessary to introduce small deviations from a strictly isothermal and isotropic velocity field in the outer regions. A simple and approximate way of doing this is to consider high-order polytropic spheres (all of which are finite in the context of MOND) with a velocity distribution which varies from isotropic within a critical radius to highly radial motion in the outer regions. The structure of such objects is determined here by numerical solutions of the hydrostatic equation of stellar dynamics (the Jeans equation) modified through the introduction of the MOND formula for the gravitational acceleration. I find that all models characterized by a given value of the polytropic index and the appropriately scaled anisotropy radius are homologous and exhibit a perfect mass-velocity dispersion relation of the form $`M\propto \sigma ^4`$ with no intrinsic scatter. However, a range of models over this parameter space is required to reproduce the dispersion in the observed global properties of elliptical galaxies, and strict homology is broken. This adds considerable scatter to the mass-velocity dispersion relation and introduces a third parameter which is, in effect, the surface density (or effective radius). Although these objects are effectively Newtonian in the inner regions, MOND imposes boundary conditions which restrict these Newtonian solutions to a well-defined domain in the three dimensional space of dynamical parameters– a dynamical fundamental plane. Combined with a weak dependence of M/L on galaxy mass, the fundamental plane in its observed form is reproduced.
Not only the form but also the scaling of the $`M`$–$`\sigma _o`$–$`r_e`$ relation is fixed over the relevant domain of parameter space; this scaling is relatively independent of the detailed structure of the stellar system. Therefore, given the effective radius and velocity dispersion, the mass of any elliptical galaxy may be calculated and the mass-to-light ratio can be determined. For the galaxies in the samples of Jørgensen, Franx, & Kærgaard (1995a,b) and Jørgensen (1999) the mean M/L turns out to be 3.6 M<sub>⊙</sub>/L<sub>⊙</sub> with about 30% scatter. With a weak dependence of M/L on galaxy mass, the predicted form of the Fundamental Plane agrees with that found by Jørgensen et al.
## 2 Basic equations and assumptions
Following Milgrom (1984), I calculate the structure of spherical systems by integrating the spherically symmetric hydrostatic equation (the Jeans equation):
$$\frac{d}{dr}(\rho \sigma _r^2)+\frac{2\rho \sigma _r^2\beta }{r}=-\rho g$$
$`(1)`$
where $`\rho `$ is the density, $`\sigma _r`$ is the radial component of the velocity dispersion, $`\beta =1-\sigma _t^2/\sigma _r^2`$ is the anisotropy parameter ($`\sigma _t`$ is the velocity dispersion in the tangential direction), and $`g`$ is the radial gravitational force which, in the context of MOND, is given by
$$g\mu (g/a_o)=\frac{GM_r}{r^2}=\frac{4\pi G}{r^2}\int _0^rr^{\prime 2}\rho (r^{\prime })dr^{\prime }.$$
$`(2)`$
Here $`M_r`$ is the mass within radius r, $`a_o`$ is the MOND acceleration parameter (found to be $`1.2\times 10^{-8}`$ cm/s<sup>2</sup> from the rotation curves of nearby galaxies), and $`\mu (x)=x/\sqrt{(1+x^2)}`$ is the typically assumed function interpolating between the Newtonian regime ($`x>>1`$) and the MOND regime ($`x<<1`$). Because of the assumed spherical symmetry, eq. 2 is exact in the context of the Lagrangian formulation of MOND as a modification of Newtonian gravity (Bekenstein & Milgrom 1984). However, if viewed as a modification of inertia eq. 2 may only be an approximation in the general case of motion on non-circular orbits (Milgrom 1994).
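With this $`\mu (x)`$, eq. 2 can be inverted in closed form, $`g=g_N\sqrt{(1+\sqrt{1+(2a_o/g_N)^2})/2}`$ where $`g_N=GM_r/r^2`$ is the Newtonian acceleration. A short check of the inversion and of its two limits (an illustration added to this summary, not part of the original text):

```python
import math

A0 = 1.2e-8  # cm/s^2, the MOND acceleration scale quoted above

def mond_g(g_newton, a0=A0):
    """Invert g*mu(g/a0) = g_N for mu(x) = x/sqrt(1+x^2)."""
    return g_newton * math.sqrt(0.5 * (1.0 + math.sqrt(1.0 + (2.0 * a0 / g_newton) ** 2)))

# limits: g -> g_N when g_N >> a0, and g -> sqrt(g_N*a0) deep in the MOND regime
print(mond_g(1e-6) / 1e-6)                     # ~1 (Newtonian limit)
print(mond_g(1e-12), math.sqrt(1e-12 * A0))    # nearly equal (MOND limit)
```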
There are four unknown functions of radius, $`\rho `$, $`\sigma _r`$, $`\beta `$, and $`g`$; therefore, additional assumptions are required to close this set of 2 equations. First, I take a definite pressure-density relation, that of a polytropic equation of state:
$$\sigma _r^2=A_{n\sigma }\rho ^{\frac{1}{n}}$$
$`(3)`$
where $`A_{n\sigma }`$, for a particular model, is a constant which is specified by the given central velocity dispersion and density, and $`n`$ is the polytropic index (for an isothermal sphere $`n`$ is infinite). This is done simply as a convenient way of providing stellar systems which are somewhat cooler in the outer regions as is suggested by observations (discussed below). The anisotropy parameter is further assumed to depend upon radius as
$$\beta (r)=\frac{(r/r_a)^2}{[1+(r/r_a)^2]}$$
$`(4)`$
where $`r_a`$ is the assumed anisotropy radius. This provides a velocity distribution which is isotropic within $`r_a`$, but which approaches pure radial motion when $`r>>r_a`$. Such behavior is typical of systems which form by dissipationless collapse (van Albada 1982). Thus I will be considering a two-parameter set of models for elliptical galaxies characterized by a polytropic index $`n`$ and an anisotropy radius $`r_a`$. Additional physical considerations, such as stability, may limit the range of these parameters.
Milgrom (1984) found that for an isothermal sphere with a given constant $`\beta `$, there exists a family of MOND solutions having different asymptotic behavior as $`r0`$. In general, the density approaches a constant value near the center except for one particular limiting solution where $`\rho \propto 1/r^2`$. The global properties (e.g., the mean surface density, the value of $`M/\sigma ^4`$, the asymptotic density distribution at large r) do not vary greatly within such a family of solutions, except for models which are characterized by an unrealistically low central density of $`<10^{-2}M_{\odot }`$/pc<sup>3</sup> (see Milgrom 1984, Figs. 1 & 2). I find the same to be true for the high-order polytropic spheres with an anisotropy factor given by eq. 4; i.e., for a given value of $`n`$ and $`r_a`$, there is a family of solutions with differing asymptotic behavior at small r. Again, because the global parameters do not vary greatly within a family, I consider below only the limiting solution; i.e., that with the $`1/r^2`$ density cusp.
Before describing the results of the numerical integration of eqs. 1-4, one important aspect of this system of equations should be emphasized. The appearance of an additional dimensional constant, $`a_o`$, in the equation for the gravitational force, eq. 2, provides, when combined with a characteristic velocity dispersion of the system (the central radial velocity dispersion, for example), a natural length and mass scale for objects described by these equations. This differs from the case of Newtonian systems where two system parameters (a velocity dispersion and central density) are required to define units of mass and density. For MOND systems the characteristic length and mass are given by
$$R_\sigma =\sigma _r^2/a_o,$$
$`(5a)`$
$$M_\sigma =\sigma _r^4/(Ga_o).$$
$`(5b)`$
There is, in addition, a natural scale for surface density given by
$$\mathrm{\Sigma }_m=M_\sigma /R_\sigma ^2=a_o/G$$
$`(5c)`$
which depends only upon fundamental constants. These natural units imply that the properties of homologous systems described by modified dynamics should scale according to these relations; i.e., $`M\propto \sigma ^4`$ with a characteristic surface density which is independent of $`\sigma `$. However, for the system to be described by modified dynamics it must extend into the regime of low accelerations, i.e., where $`a<<a_o`$. This is not a necessary attribute of Newtonian systems which have finite mass and radius ($`n<5`$). Therefore, we would expect only those higher order polytropes ($`n>5`$, including the isothermal sphere) to be necessarily characterized by this scaling, because for them the Newtonian solution always has infinite extent and mass; such polytropes would inevitably extend to the regime of low accelerations.
For a velocity dispersion of 200 km/s, typical for elliptical galaxies, we find $`R_\sigma \approx 10`$ kpc and $`M_\sigma \approx 10^{11}`$ M<sub>⊙</sub>– characteristic galactic dimensions. The manner in which these length and mass scales arise, in the context of a cosmological setting for the dissipationless formation of elliptical galaxies, will be discussed in a future paper.
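A quick numerical check of eqs. 5a,b for this fiducial dispersion (an illustrative aside, not from the original paper):

```python
G = 6.674e-8        # cm^3 g^-1 s^-2
a0 = 1.2e-8         # cm s^-2
kpc = 3.086e21      # cm
Msun = 1.989e33     # g

sigma = 200e5       # 200 km/s in cm/s
print(sigma**2 / a0 / kpc)              # R_sigma ~ 11 kpc
print(sigma**4 / (G * a0) / Msun)       # M_sigma ~ 1e11 Msun
```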
## 3 A test case: the isotropic isothermal sphere
Eq. 1 is numerically solved using a fourth-order Runge-Kutta technique. The integration proceeds radially outward by specifying a central radial velocity dispersion and choosing, at a particular radius, the density corresponding to that of the limiting solution given by Milgrom (1984). In solving for the structure of isothermal spheres, Milgrom used the natural units of length and mass (eq. 5) to write eq. 1 in unitless form. Here, I use physical units: 1 kpc, $`10^{11}M_{\odot }`$, and 1 km/s (in these units $`G=4.32\times 10^5(\mathrm{km}/\mathrm{s})^2\mathrm{kpc}/(10^{11}\mathrm{M}_{\odot })`$ and $`a_o=3700(\mathrm{km}/\mathrm{s})^2/\mathrm{kpc}`$). Although convenient for comparison with observations, these units do obscure the scaling of solutions.
For comparison with Milgrom’s results the first systems considered here are isotropic isothermal spheres ($`n\to \infty `$, $`r_a\to \infty `$). The limiting solution of an isothermal sphere with a specified value of $`\beta `$ can be scaled as implied by the natural units (eqs. 5). I verify this by solving eqs. 1-4 for isothermal spheres with radial velocity dispersion ranging from 50 km/s to 350 km/s. For all such spheres, the density at large radius falls off as $`1/r^4`$. This implies, directly from eq. 1, that
$$\sigma _r^4=0.063GMa_o$$
$`(6)`$
which is to say, the mass-velocity dispersion relation is exact for MOND isotropic isothermal spheres.
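The following minimal sketch (my illustration, not the code used for the paper) integrates this isotropic isothermal case of eqs. 1 and 2 outward from the Newtonian $`1/r^2`$ cusp, in the physical units quoted above; the starting and ending radii and the step count are arbitrary numerical choices, and the printed ratio should come out near the 0.063 of eq. 6:

```python
import math

G, A0 = 4.32e5, 3700.0   # paper's units: kpc, 1e11 Msun, km/s

def g_mond(M, r):
    """MOND acceleration from the enclosed mass (eq. 2 inverted)."""
    gN = G * M / r**2
    return gN * math.sqrt(0.5 * (1.0 + math.sqrt(1.0 + (2.0 * A0 / gN) ** 2)))

def total_mass(sigma=150.0, r0=1e-3, r1=1e4, n=20000):
    """RK4 integration in ln r of d(rho)/dr = -rho*g/sigma^2, dM/dr = 4*pi*r^2*rho,
    starting on the Newtonian cusp rho = sigma^2/(2*pi*G*r^2)."""
    rho = sigma**2 / (2.0 * math.pi * G * r0**2)
    M = 2.0 * sigma**2 * r0 / G
    h = math.log(r1 / r0) / n

    def f(r, y):                         # dy/d(ln r) = r * dy/dr
        rho_, M_ = y
        return (-rho_ * g_mond(M_, r) / sigma**2 * r,
                4.0 * math.pi * r**2 * rho_ * r)

    r, y = r0, (rho, M)
    for _ in range(n):
        k1 = f(r, y)
        k2 = f(r * math.exp(0.5 * h), (y[0] + 0.5 * h * k1[0], y[1] + 0.5 * h * k1[1]))
        k3 = f(r * math.exp(0.5 * h), (y[0] + 0.5 * h * k2[0], y[1] + 0.5 * h * k2[1]))
        k4 = f(r * math.exp(h), (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        r *= math.exp(h)
    return y[1]

M = total_mass()
print(150.0**4 / (G * M * A0))   # expected near 0.063 (eq. 6)
```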
Because the density distribution for the limiting solution falls as $`1/r^2`$ at small r and as $`1/r^4`$ in the outer regions, the density profile of the MOND isotropic isothermal sphere resembles that of the Jaffe model (Jaffe 1983). The mean surface density inside the projected half-mass radius (the effective radius), $`r_e`$, is found to be
$$\mathrm{\Sigma }=0.134\mathrm{\Sigma }_m.$$
$`(7)`$
where the characteristic MOND surface density, $`\mathrm{\Sigma }_m`$, is given by eq. 5c. Given that $`M=2\pi r_e^2\mathrm{\Sigma }`$, then from eqs. 6 and 7 it follows that
$$r_e=4.36\sigma _r^2/a_o.$$
$`(8)`$
For a system with $`\sigma _r=125`$ km/s, one would find, with these formulae, that M = $`2.4\times 10^{11}`$ M<sub>⊙</sub> and $`r_e=18`$ kpc. The surface density distribution of such an object is well-described by the de Vaucouleurs $`r^{1/4}`$ law, which works well as an empirical fit to the surface brightness profiles of elliptical galaxies.
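These numbers follow directly from eqs. 6 and 8; for instance:

```python
G, a0 = 4.32e5, 3700.0        # (km/s)^2 kpc/(1e11 Msun) and (km/s)^2/kpc
sigma_r = 125.0
print(sigma_r**4 / (0.063 * G * a0))   # ~2.4, i.e. M ~ 2.4e11 Msun (eq. 6)
print(4.36 * sigma_r**2 / a0)          # ~18.4 kpc (eq. 8)
```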
However, the global properties of MOND isotropic isothermal spheres are inconsistent with those of high surface-brightness elliptical galaxies. Basically, for a given velocity dispersion, the mass is too large, the effective radius is too large, and the mean surface density is too small compared to that of actual galaxies.
This is evident from Fig. 1, which is a logarithmic plot of the central velocity dispersion against effective radius for a large homogeneously observed sample of galaxies. The sample consists of 154 elliptical galaxies in several clusters (open points) observed by Jørgensen et al. (1995a,b) combined with 116 early-type galaxies (crosses) in the Coma cluster (Jørgensen 1999). The connected solid points show the sequence of isotropic isothermal spheres (eq. 8). We see that the distribution of observed properties does not match that of the isothermal spheres; for example, for galaxies with a central line-of-sight (l.o.s.) velocity dispersion of about 150 km/s, the effective radii are typically 2.5 kpc, an order of magnitude lower than that of the MOND isothermal spheres. Assuming a mass-to-light ratio of 3 to 4 M<sub>⊙</sub>/L<sub>⊙</sub> for the stellar population, the mean observed surface density in actual elliptical galaxies within $`r_e`$ would be about two or three times larger than $`\mathrm{\Sigma }_m`$; which is to say, elliptical galaxies are well within the Newtonian regime inside an effective radius. In contrast, the mean surface density of MOND isotropic isothermal spheres is at least 10 times lower (eq. 7), and hence these spheres fail as representations of luminous high surface brightness elliptical galaxies.
The situation becomes worse for constant $`\beta >0`$ as is evident from the calculation of Milgrom (1984). Such systems are even deeper in the MOND regime; for $`\beta =0.9`$, $`\mathrm{\Sigma }=0.0025\mathrm{\Sigma }_m`$. In order to represent elliptical galaxies in the context of MOND, we must allow for the possibility that pressure-supported systems become cooler with a radial orbit anisotropy in the outer regions.
## 4 Anisotropic polytropes as models for elliptical galaxies
There is observational evidence that the stellar component of elliptical galaxies is not, in general, isothermal: the l.o.s. velocity dispersion is observed to decrease with increasing projected radius. For reasons of stability such a decrease can probably not be attributed entirely to velocity field anisotropy (see below). Describing this decline by a power law, $`\sigma \propto r^{-ϵ}`$, Franx (1989) finds that typically $`ϵ=0.06`$. A simple way of introducing this mild deviation from an isothermal state into the MOND models is to consider the more general equation of state expressed by eq. 3– the polytropic gas assumption with large $`n`$.
It is straightforward to demonstrate from eq. 1 that all MOND polytropic spheres of finite $`n`$ are finite in extent as well as in mass, unlike the Newtonian case where only polytropes with $`n<5`$ have finite extent and mass. As $`n\to \infty `$, the radius of the edge of the sphere also approaches infinity; for high order polytropes ($`n>10`$) the outer radius is many times larger than the effective radius. A polytropic sphere with $`n>5`$ will always have a MOND regime which establishes boundary conditions for the inner Newtonian solution. Therefore, the polytropic index, which must lie between 5 and infinity, is a free parameter of such models.
It is unlikely that the stars in elliptical galaxies have a completely isotropic velocity distribution. As noted above, a radial orbit anisotropy of the form given by eq. 4 emerges naturally in dissipationless collapse models. This expression provides a second dimensionless parameter for characterizing MOND models of elliptical galaxies: $`\eta =r_a/R_\sigma `$, the anisotropy radius in terms of the characteristic MOND length scale.
A very general result is that such high order anisotropic polytropic spheres have a velocity dispersion-mass relation of the form,
$$\sigma _o^4=q(n,\eta )GMa_o$$
$`(9)`$
where $`\sigma _o`$ is the central l.o.s. velocity dispersion. That is to say, the ratio of $`\sigma _o^4`$ to $`GMa_o`$, defined here as $`q`$, depends upon the two dimensionless parameters $`n`$ and $`\eta `$. This follows from the general scaling relation for systems which extend into the regime of modified dynamics, eq. 5b. For the pure isotropic isothermal sphere $`q_{\infty }=0.063`$ (eq. 6). For any given polytropic index and scaled anisotropy radius, the structure of objects is self-similar over central radial velocity dispersion, and the $`M\propto \sigma _o^4`$ relation is exact; i.e., such objects form a 2-parameter family in $`\sigma _o`$ and $`M`$.
The same is not true if we consider the set of such systems over a domain of the two-dimensional parameter space. In Fig. 2 the connected solid points show the logarithm of $`q`$ vs. the logarithm of the mean surface density within the effective radius ($`\mathrm{\Sigma }=M/2\pi r_e^2`$) in units of the MOND critical surface density $`\mathrm{\Sigma }_m`$ for a sequence of isotropic polytropes ($`\eta =\mathrm{}`$). Each point is a set of models over a range of central $`\sigma _o`$ but having a specific value of $`n`$. The locus of such points define a curve with the isothermal sphere (denoted by an X) at one extreme and, as displayed here, the n=7 polytrope at the other. The sequence of isotropic polytropes approaches but does not exceed the MOND critical surface density. Each polytrope is self-similar over velocity dispersion with a perfect mass-velocity dispersion relation, but its own mass-velocity relation. The entire set of polytropes breaks homology, and for this ensemble of isotropic polytropes, $`q`$ can be considered as a function of the mean surface density; indeed, over the range of n=30 to n$`\mathrm{}`$, the curve is well approximated by a power law, i.e.,
$$q=k(\mathrm{\Sigma }/\mathrm{\Sigma }_m)^\kappa $$
$`(10)`$
That is to say, a third parameter, the mean surface density, enters into the basic dynamical relationship, eq. 9. Substituting eq. 10 into eq. 9 we find a relation of the form
$$\sigma _o^4=k\frac{G^{\kappa +1}M^{\kappa +1}}{2\pi r_e^{2\kappa }a_o^{\kappa -1}}$$
$`(11)`$
This can be viewed as a generalized virial relationship for objects which extend into the regime of modified dynamics. For models with a specified value of $`n`$ and $`\eta `$, $`\kappa =0`$ and we recover the MOND mass-velocity dispersion relationship for homologous systems; models covering a range of n and $`\eta `$– non-homologous models with $`\kappa \ne 0`$– comprise a three-parameter family.
If the structure of elliptical galaxies could be approximated by this set of high order isotropic polytropic spheres ($`30<n<\infty `$), then, from eq. 11, there would exist a theoretical fundamental plane relationship of the form
$$M=K\sigma _o^\alpha r_e^\gamma $$
$`(12a)`$
where
$$\alpha =\frac{4}{\kappa +1},$$
$`(12b)`$
$$\gamma =\frac{2\kappa }{\kappa +1}$$
$`(12c)`$
and
$$K=\left(\frac{2\pi }{kG^{\kappa +1}a_o^{1-\kappa }}\right)^{\frac{1}{\kappa +1}}.$$
$`(12d)`$
For the sequence of isotropic polytropes over this range of $`n`$, $`\kappa =1.5`$, which, from eqs. 12b and 12c, implies that $`\alpha =1.6`$ and $`\gamma =1.2`$. Thus, in this generalized dynamical relation, the exponents may differ from the expected Newtonian values ($`\alpha =2`$, $`\gamma =1`$). Significantly, this one dynamical formula (eq. 12a) applies to a range of models which are non-homologous.
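The exponents quoted here are simple arithmetic consequences of eqs. 12b,c; as an illustrative check:

```python
def fp_exponents(kappa):
    """alpha and gamma of eq. 12 as functions of the fitted slope kappa."""
    return 4 / (kappa + 1), 2 * kappa / (kappa + 1)

print(fp_exponents(1.5))    # (1.6, 1.2): the isotropic sequence above
print(fp_exponents(0.98))   # (~2.02, ~0.99): near the Newtonian homologous values
```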
However, pure isotropic polytropic spheres also fail as acceptable models of elliptical galaxies. In the models, as in actual ellipticals, the l.o.s. velocity dispersion declines with increasing projected radius. This decline, mild though it is, is steeper than that typically observed in ellipticals if $`n<12`$ and too shallow if $`n>16`$. MOND polytropes in the range of n=12 to n=16 exhibit roughly the observed form of $`\sigma (r)`$. For polytropes in this range, the mean surface density within the effective radius is still significantly lower than that of true elliptical galaxies– typically by about a factor of 5, assuming a mass-to-light ratio of 4 for the stellar population of elliptical galaxies. If such simple spherically symmetric models are to approximate real elliptical galaxies, it is clear that an additional degree of freedom must enter into the structure equation (eq. 1). Here we assume that that degree of freedom is provided by the radial dependence of the anisotropy parameter as represented by eq. 4.
In Fig. 2 we see the effect of introducing this second parameter, $`\eta `$, on the position of polytropes in the log($`q`$)-log$`(\mathrm{\Sigma })`$ plane. Branching off of the principal curve defined by the sequence of isotropic polytropes are sequences of models with $`\eta `$ ranging from 200 to 0.1 for polytropes of n=12 and n=16. When $`\eta >>1`$ the models, of course, are very similar to the MOND isotropic polytropes. The effect of increasing anisotropy (lower $`\eta `$) is to increase the mean surface density of the polytropic spheres– as is needed to match the observations of elliptical galaxies. The mass and effective radius are decreased but the mean surface density is higher.
For a given polytropic index, the branch defined by increasing anisotropy (decreasing $`\eta `$) exhibits a maximum surface density; for n=16 this maximum is about $`2\mathrm{\Sigma }_m`$ and occurs for $`\eta \approx 0.15`$; that is, for lower $`\eta `$ (higher anisotropy) the surface brightness is again lower. The sequence of anisotropic models is double-valued in surface brightness. These models near the maximum surface density are quite anisotropic in the sense that the radial orbit anisotropy reaches within the effective radius; all models with $`r_a<0.75r_e`$ are designated by an open circle in Fig. 2. The stability of such anisotropic models is questionable (Binney & Tremaine 1987).
Given the fact that MOND anisotropic polytropes between n=12 and n=16 can reproduce the approximate decline of the l.o.s. velocity dispersion with projected radius observed in ellipticals, we can take this as an observational restriction upon the range of $`n`$. The range of the second parameter, $`\eta `$, can also be restricted by excluding all highly anisotropic models (with $`\eta <0.2`$) on the basis of possible radial orbit instability. Fig. 3 is the locus of a grid of such models on the log($`q`$)-log$`(\mathrm{\Sigma })`$ plane. There are 360 models with $`n`$ = 12, 13, 14, 15, 16 and $`\eta `$ = 0.2, 0.4, 0.8, 1.6, 3.2, 6.4 and covering a range of the central $`\sigma _r`$ between 75 km/s and 350 km/s in steps of 25 km/s.
Each point in Fig. 3 represents a particular value of $`n`$ and $`\eta `$ and exhibits its own perfect $`M\propto \sigma _o^4`$ relation. However, the ensemble of models is non-homologous and presents an ensemble of $`M\propto \sigma _o^4`$ relations. Because $`q`$ varies by a factor of 5 or 6, this would be the expected intrinsic scatter in the combined $`M`$–$`\sigma _o`$ relation. However, over this range of parameter space, the models lie in a restricted domain of the log($`\mathrm{\Sigma }`$)-log$`(q)`$ plane– sufficiently restricted as to define a theoretical fundamental plane with scatter less than that of the $`M`$–$`\sigma _o`$ relation. A least-squares fit to this distribution of points gives $`\kappa =0.98`$ in eq. 10. Thus, by eqs. 12, this yields a dynamical fundamental plane relation near that implied by the Newtonian virial theorem for homologous systems, i.e., $`\alpha =2`$, $`\gamma =1`$. Because the scatter in $`q`$ about this power-law relation is much less than the total range in $`q`$, the scatter perpendicular to the dynamical fundamental plane is very much reduced.
For comparison with observations, it must be realized that both the $`M`$–$`\sigma `$ and fundamental plane relations are altered by the way in which elliptical galaxies are actually observed. Specifically, it is not the velocity dispersion along the very central line-of-sight, $`\sigma _o`$, which is measured, but rather the l.o.s. velocity dispersion, $`\sigma _d`$, within some finite-size aperture with radius $`r_d`$. The data of Jørgensen et al. have the advantage that all measured velocity dispersions are corrected to a circular aperture with a fixed linear diameter of 1.6 kpc for H<sub>o</sub> = 75. The appearance of a fixed linear scale has the effect of introducing an additional dimensionless parameter into the dynamical relationship, eq. 9; i.e., $`q`$ also becomes a function of $`r_d/R_\sigma `$ where $`R_\sigma =\sigma _d^2/a_o`$ is the MOND length scale appropriate to the system. However, this parameter can be absorbed if it is expressed as $`r_d/R_\sigma =\sigma _m^2/\sigma _d^2`$ where $`\sigma _m=\sqrt{a_or_d}=54.4`$ km/s.
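For reference, $`\sigma _m`$ is fixed by the aperture alone; a two-line check (illustrative only):

```python
import math
a0, r_d = 3700.0, 0.8       # (km/s)^2/kpc and the 0.8 kpc aperture radius
print(math.sqrt(a0 * r_d))  # 54.4 km/s, as quoted
```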
When we observe the polytropic models in the same way as real galaxies (determining the volume emissivity-weighted l.o.s. velocity dispersion in the inner projected 0.8 kpc), the distributions of velocity dispersions and effective radii may be compared directly to the observations of Jørgensen et al. This is done in Fig. 4 where we see that such models can account for the observed range in these properties provided that the free parameters cover their allowed ranges: $`12\le n\le 16`$ and $`\eta \ge 0.2`$. That is to say, the set of models must be non-homologous in order to explain the dispersion in observed properties.
For these realistically “observed” models, the dependence of $`q`$ on $`\sigma _d`$ is also found to be a power law; thus, we may rewrite eq. 9 as
$$\sigma _d^4=q^{\prime }(\mathrm{\Sigma }/\mathrm{\Sigma }_m)[\sigma _d/\sigma _m]^\lambda GMa_o;$$
$`(13)`$
i.e., I explicitly write the dependence of $`q`$ on $`\sigma _d`$, leaving the quantity $`q^{\prime }`$ as a pure function of surface density. In Fig. 3 we see that the dependence of $`q^{\prime }`$ on surface brightness is well-represented by a power law with roughly the same exponent as the $`q`$ dependence. Thus the $`M`$–$`\sigma `$ relation becomes $`M\propto \sigma _d^{(4-\lambda )}`$ and the fundamental plane exponent in eq. 12a is $`\alpha =(4-\lambda )/(\kappa +1)`$. From the models it is found via least-squares fits that $`\lambda =0.53`$ and $`\kappa =0.98`$, implying
$$M/(10^{11}M_{\odot })=2\times 10^{-8}[\sigma _d(\mathrm{km}/\mathrm{s})]^{3.47}$$
$`(14a)`$
for the mass-velocity dispersion relation and
$$M/(10^{11}M_{\odot })=3\times 10^{-5}[\sigma _d(\mathrm{km}/\mathrm{s})]^{1.76}[r_e(\mathrm{kpc})]^{0.98}.$$
$`(14b)`$
These $`M`$–$`\sigma `$ and dynamical fundamental plane relations are shown for the 360 models in Fig. 5. The scatter about the dynamical fundamental plane is a factor of 10 smaller than that about the $`M`$–$`\sigma `$ relation.
Note in eqs. 14 that for this restricted set of models there is a definite scaling of both the $`M`$–$`\sigma `$ and the dynamical fundamental plane relations. Using eq. 14b to calculate the mass of the galaxies in the Jørgensen et al. sample, one finds the distribution of M/L shown in Fig. 6, which is a log-log plot of M/L against the calculated mass. Here it is found that $`<M/L>=3.6\pm 1.2`$ in solar units (H<sub>o</sub> = 75) and $`M/L\propto M^{0.2}`$. This, of course, ignores possibly important effects such as deviations from spherical symmetry and systematic rotation, and the fact that real galaxies are almost certainly not characterized by a pure polytropic velocity dispersion-density relation. Bearing these potential dangers in mind, one could also extrapolate eq. 14b down to globular clusters. For a sample of globular clusters tabulated by Trager, Djorgovski & King (1993) and by Pryor & Meylen (1993), I find that $`<M/L>=1.7\pm 0.8`$ in solar units.
Thus, MOND polytropic spheres in the range n=12 to n=16 with radial anisotropy beyond the effective radius not only reproduce the observed distribution of galaxies in the $`r_e`$–$`\sigma _d`$ plane but also provide a reasonable value for the mass-to-light ratio of ellipticals and a weak dependence of M/L on mass. As is evident from eq. 9, the usual MOND $`M\propto \sigma _o^4`$ relation remains, albeit with large scatter due to the necessary deviations from homology. Considering the manner in which the central velocity dispersion is actually measured (within a fixed circular diaphragm), the relation is altered to $`M\propto \sigma _d^{3.47}`$. Further, considering the necessary dependence of M/L on M, the predicted Faber-Jackson relation becomes $`L\propto \sigma _d^{2.78}`$, which is consistent with the data of Jørgensen et al.; i.e., a least-squares fit to the log(L)-log($`\sigma _d`$) distribution for this sample of early-type galaxies yields a slope of $`2.6\pm 0.8`$.
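The quoted Faber-Jackson slope follows directly from combining $`M\propto \sigma _d^{3.47}`$ with $`L\propto M^{0.8}`$ (i.e., $`M/L\propto M^{0.2}`$); as a one-line check:

```python
print(3.47 * 0.8)   # ~2.78, to be compared with the fitted slope 2.6 +/- 0.8
```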
The predicted dynamical fundamental plane, eq. 14b, can be converted into the more commonly used form by making use of the relation $`M=2\pi \mathrm{\Sigma }r_e^2`$; then one finds $`r_e\propto \sigma _d^{1.73}\mathrm{\Sigma }^{-0.98}`$ where $`\mathrm{\Sigma }`$ is the mean mass surface density within $`r_e`$. With $`M/L\propto M^{0.17}`$ we would then predict a fundamental plane of
$$r_e\propto \sigma _d^{1.23}I_e^{-0.84}$$
$`(15)`$
where $`I_e`$ is the mean surface brightness within $`r_e`$. Within the uncertainties this is identical to the fundamental plane defined by the observations of Jørgensen et al. (1995a,b).
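The exponents of eq. 15 can be recovered by eliminating $`M`$ and $`\mathrm{\Sigma }`$ between eq. 14b, $`M=2\pi \mathrm{\Sigma }r_e^2`$, and $`\mathrm{\Sigma }\propto M^{0.17}I_e`$; a short check of the algebra (illustrative only):

```python
# M ~ sigma^a r_e^b with (a, b) from eq. 14b; Sigma ~ M^p I_e from M/L ~ M^p;
# M = 2*pi*Sigma*r_e^2 then gives M ~ (I_e r_e^2)^(1/(1-p)).  Equating the two
# expressions for M and solving for r_e yields the exponents of eq. 15.
a, b, p = 1.76, 0.98, 0.17
denom = 2 / (1 - p) - b
print(a / denom)                 # ~1.23 (sigma_d exponent)
print(-(1 / (1 - p)) / denom)    # ~-0.84 (I_e exponent)
```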
Given the approximations involved in these calculations (primarily the polytropic gas assumption and the specific radial dependence of the anisotropy parameter), the actual exponents of the $`M`$–$`\sigma `$ and fundamental plane relationships are less important than the fact that MOND predicts a fundamental plane relation with a scatter which is a factor of 10 less than that about the $`M`$–$`\sigma `$ relation. This is true in spite of the fact that the set of models must be non-homologous in order to explain the range of observed properties– surface density and effective radius. This arises as a natural aspect of the basic dynamics of systems which extend into the regime of modified dynamics and need not be accounted for by complicated conspiracies in the process of galaxy formation. Moreover, various mechanisms for the subsequent dynamical evolution of ellipticals (e.g. merging, cannibalism) would not affect this relationship. All that is required is that the stellar velocity field in ellipticals not deviate too dramatically from being isothermal and possess a radial orbit anisotropy similar to that described by eq. 4.
## 5 Conclusions
The essential results of these calculations can be summarized as follows:
1. The dynamics of high surface brightness elliptical galaxies span the range from Newtonian within the effective radius to MOND beyond. The mean surface density within $`r_e`$ is at least twice as large as the MOND surface density, and the internal accelerations are too large to be within the domain of modified dynamics. This is consistent with the fact that there is no large mass discrepancy or, viewed in terms of dark matter, no need for dark matter within the bright inner regions. However, MOND isothermal spheres have a mean surface density which is roughly one-tenth the critical surface density within $`r_e`$; they are almost entirely pure MOND objects. This effectively rules out these objects as models for elliptical galaxies.
2. In order to reproduce the observed global properties of high surface brightness elliptical galaxies in the context of MOND, it is necessary to introduce deviations from a constant velocity dispersion and strict isotropy of the velocity field in the outer regions. This may be done, in an approximate way, by considering MOND polytropes in the range n=12 to n=16 with a radial orbit anisotropy beyond an effective radius ($`r_a>0.75r_e`$). These objects exhibit the mean decline of line-of-sight velocity dispersion with projected radius observed in elliptical galaxies. Moreover, such models provide reasonable representations of elliptical galaxies with respect to the distribution by velocity dispersion and effective radius (Fig. 4). In order to match these observations, the models must cover a range in the parameter space of polytropic index and scaled anisotropy radius which implies that strict homology is broken. This breaking of homology leads to considerable scatter in the mass-velocity dispersion relation (and the implied Faber-Jackson relation) while introducing a third parameter which is the mean surface density or effective radius. The intrinsic scatter about this dynamical fundamental plane is much lower than that about the mass-velocity dispersion relationship because of the relative insensitivity of this theoretical relationship to deviations from homology (Fig. 5). Both the theoretical $`M`$–$`\sigma `$ and fundamental plane relationships are modified when one considers that the central velocity dispersion is actually measured within a finite size aperture corrected, in the Jørgensen et al. observations, to a fixed diameter of 1.6 kpc for all galaxies in their samples.
3. These calculations are highly idealized and apply strictly only to spherical systems with a perfect polytropic equation of state. Moreover, the fact that the models cover a range of internal accelerations around $`a_o`$ means that the detailed structure is dependent upon the assumed form of the MOND interpolation function, $`\mu (x)`$ (eq. 2). Nonetheless, when the derived dynamical fundamental plane relation is used to estimate the mass of elliptical galaxies from the observed central velocity dispersion and effective radius, one finds, for the galaxies in the samples of Jørgensen et al., a mean mass-to-light ratio of 3.6 M<sub>⊙</sub>/L<sub>⊙</sub> with a dispersion of 30% and a weak dependence of this M/L on galaxy mass (as in the strictly Newtonian case). Such an M/L would seem quite reasonable for the older stellar populations of elliptical galaxies. When the predicted dynamical fundamental plane relation is converted into an observed relationship (in the $`r_e`$, $`\sigma _d`$, and $`I_e`$ parameter space), the Jørgensen et al. result is recovered if $`M/L\propto M^{0.17}`$.
The principal conclusion is that the existence of a fundamental plane with lower intrinsic scatter than that of the Faber-Jackson relation is implied by modified dynamics, given that high surface brightness elliptical galaxies cannot be represented by pure MOND isothermal spheres. It may be argued that Newtonian dynamics also predicts a fundamental plane via the traditional virial theorem, and therefore the existence of such a relationship in no sense requires modified dynamics. It is true that the fundamental plane by itself would not be a sufficient justification for modified dynamics. However, a curious aspect of the Newtonian basis for the fundamental plane is the small scatter about the observed relation in view of the likely deviations from homology in actual elliptical galaxies. The advantage of MOND in this respect is the existence of a single dynamical relationship (eq. 12 or eq. 13) for a range of non-homologous models. Because an additional dimensional constant, $`a_o`$, enters into the structure equation (eq. 2), MOND self-gravitating objects are more constrained than their Newtonian counterparts.
In this respect, it is worthwhile to emphasize that a pure Newtonian self-gravitating object with a central line-of-sight velocity dispersion of 200 km/s can have any mass. But an object with this same velocity dispersion and which extends at least partially into the regime of modified dynamics can only have a galaxy-scale mass. It is the proximity to being isothermal which requires that elliptical galaxies extend into the regime of modified dynamics. If this one basic requirement is satisfied, MOND inevitably imposes boundary conditions on the inner Newtonian solution– boundary conditions which restrict these objects to lie on such a well-defined plane in the three dimensional space of observed quantities in spite of detailed variations in the structure between individual objects. Structural variety does, however, lead to a large intrinsic scatter in the $`M`$–$`\sigma `$ relation because each distinct class of objects characterized by an appropriately scaled radial dependence of velocity dispersion and degree of anisotropy exhibits its own $`M`$–$`\sigma `$ relation with a different normalization. Nonetheless, a Faber-Jackson relation does exist and remains as an imprint of modified dynamics on nearly isothermal hot stellar systems.
I am grateful to Moti Milgrom and Stacy McGaugh for very useful comments on the original manuscript. I thank especially Inger Jørgensen and Marijn Franx for providing their data on elliptical galaxies in convenient form and for helpful comments. I am grateful to the referee, Massimo Stiavelli, for remarks and criticisms which greatly improved the content of this paper.
# Magnetoresistance in Heavily Underdoped YBa2Cu3O6+x: Antiferromagnetic Correlations and Normal-State Transport
## Abstract
We report on a contrasting behavior of the in-plane and out-of-plane magnetoresistance (MR) in heavily underdoped antiferromagnetic (AF) YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> ($`x\lesssim 0.37`$). The out-of-plane MR ($`I\parallel c`$) is positive over most of the temperature range and shows a sharp increase, by about two orders of magnitude, upon cooling through the Néel temperature $`T_N`$. A contribution associated with the AF correlations is found to dominate the out-of-plane MR behavior for $`H\parallel c`$ from far above $`T_N`$, pointing to the key role of spin fluctuations in the out-of-plane transport. In contrast, the transverse in-plane MR ($`I\parallel a(b);H\parallel c`$) appears to be small and smooth through $`T_N`$, implying that the development of the AF order has little effect on the in-plane resistivity.
High-$`T_c`$ superconductivity (SC) in cuprates occurs as a crossover phenomenon in the doping range between an antiferromagnetic (AF) insulator and a Fermi-liquid metal state. While the hole (electron) doping destroys the long-range AF order in the CuO<sub>2</sub> planes, short-range AF correlations persist well into the superconducting compositions, and thus it is likely that the interplay of the doped carriers with the AF correlations underlies the physics of cuprates in a wide range of carrier concentrations.
In order to clarify the role of the magnetic interactions in cuprates, one may study the temperature and doping regions which are peculiar for the spin subsystem. So far, a crossover at a temperature $`T^{}`$ corresponding to the formation of a pseudogap in the spin and charge excitation spectra has attracted most attention. An additional decrease in the in-plane resistivity $`\rho _{ab}`$ below $`T^{}`$ observed in underdoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> (Y-123) and in YBa<sub>2</sub>Cu<sub>4</sub>O<sub>8</sub> (Y-124) suggests the possibility that the in-plane transport is determined to a large extent by the spin scattering. The pseudogap (or spin gap) was also employed to explain both the activated behavior of the out-of-plane resistivity $`\rho _c(T)`$ and the negative out-of-plane magnetoresistance (MR) in Y-123 and Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub>.
The vicinity of the Néel transition is another peculiar region for the spin subsystem besides the crossover at $`T^{}`$. The dynamic AF correlations developed in the CuO<sub>2</sub> planes evolve into the long-range AF order upon crossing the Néel temperature $`T_N`$, and one can expect some singular behavior to show up in the properties governed by the magnetic interactions. A magnetotransport study in the vicinity of $`T_N`$ is therefore an attractive possibility to clarify the role of spin degrees of freedom in the peculiar electron transport.
In this Letter, we present a study of the in-plane and out-of-plane MR of heavily underdoped antiferromagnetic Y-123 crystals, supplemented by measurements of the Lu-123 crystal used earlier for the study of the phase diagram. We find that the out-of-plane MR undergoes a drastic change in the vicinity of the Néel temperature, increasing by about two orders of magnitude with a transition into the AF state. At the same time, quite unexpectedly, no feature associated with the AF ordering was observed in the transverse in-plane MR \[$`I\parallel a(b);H\parallel c`$\]. Therefore, the development of the AF correlations and the formation of the long-range Néel order have a profound influence only on the charge transport between the CuO<sub>2</sub> planes, leaving the in-plane transport unchanged. Moreover, we find that the longitudinal out-of-plane MR ($`H\parallel c`$) is apparently governed by the AF fluctuations even in the temperature range above $`T_N`$, indicating that the spin fluctuations are playing a major role in the out-of-plane transport regardless of the presence of the Néel order.
The high-quality YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> single crystals are grown by the flux method in Y<sub>2</sub>O<sub>3</sub> crucibles to avoid incorporation of impurities, and their oxygen content is reduced by subsequent high-temperature annealing. While the $`c`$-axis resistivity is easily measured in these samples owing to the high anisotropy, special care is paid to measure $`\rho _{ab}`$ reliably; samples with a length/thickness ratio $`\sim `$ 100-150 are used and the current contacts are carefully placed to cover the crystal side surfaces. The MR measurements are performed either by sweeping temperature (controlled by a Cernox resistance sensor) in constant magnetic fields up to 16 T, or by sweeping the field at a fixed temperature stabilized by a capacitance sensor to an accuracy of about 1 mK. The latter method allows measurements of $`\mathrm{\Delta }\rho /\rho `$ as small as $`10^{-5}`$ at 10 T.
It was recently reported that heavily underdoped RBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> (R=Tm, Lu) exhibits a maximum in $`\rho _c(T)`$, which originates from a competition between two distinct mechanisms contributing to the interplane transport. The long-range AF ordering brings about an additional peculiarity; an abrupt increase in $`\rho _c`$ occurs upon cooling through $`T_N`$, which is followed by a resistivity divergence at lower $`T`$. We confirm essentially the same behavior in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>, and in Fig. 1 we present a set of $`\rho _c(T)`$ curves obtained for the same Y-123 single crystal at slightly different oxygen contents in the AF region. The rise in $`\rho _c`$ induced by the AF transition becomes more and more evident as $`T_N`$ is lowered; for high $`T_N`$, a derivative plot helps to highlight the anomaly and to evaluate $`T_N`$ (see inset to Fig. 1). It is worth noting that $`T_N`$ is extremely sensitive to the hole doping, as can be seen in Fig. 1, and the width of the AF transition observed here (10-15 K) is almost the smallest achievable value, indicating the high quality of our crystals.
Figure 2 demonstrates the unusual behavior of the out-of-plane MR in the heavily underdoped Y-123, where a step-like increase in $`\mathrm{\Delta }\rho _c/\rho _c`$ is observed upon cooling through $`T_N`$. Except for a small difference in the MR step width, which is obviously related to the width of the AF transition, this striking feature is very reproducible within a set of Y-123 crystals and in the Lu-123 crystal. One would expect that the Néel transition in Y-123, like other phase transitions associated with the magnetic subsystem, is considerably affected by the application of magnetic fields. If a strong magnetic field suppresses AF order and lowers $`T_N`$, the out-of-plane MR should then become negative, because $`\rho _c`$ is enhanced below $`T_N`$. What we have found is opposite to these naive expectations; $`T_N`$ appears to be quite insensitive to the magnetic field and the out-of-plane MR, surprisingly, is positive.
Upon tilting the magnetic field (inset to Fig. 3), we observe a considerable anisotropy in the out-of-plane MR, $`\mathrm{\Delta }\rho _c/\rho _c`$, which becomes largest for the transverse geometry $`H\perp I\parallel c`$. Besides the difference in the magnitude, the $`T`$ dependence of the out-of-plane MR is remarkably different between the $`H\parallel ab`$ and the $`H\parallel c`$ geometries; while $`\mathrm{\Delta }\rho _c/\rho _c`$ for $`H\parallel ab`$ keeps growing below $`T_N`$ (Fig. 2) after it shows a small peak, $`\mathrm{\Delta }\rho _c/\rho _c`$ for $`H\parallel c`$ gradually diminishes below $`T_N`$ after it shows a pronounced peak (Fig. 3). Therefore, the magnetoresistance becomes more anisotropic as the temperature is lowered below $`T_N`$.
To obtain an idea about the mechanisms which couple mobile carriers with the AF order, it is helpful to look also at the in-plane transport. For the oxygen contents under study the in-plane resistivity demonstrates a crossover behavior, passing through a minimum at $`T=50-60`$ K \[Fig. 4(a)\]. To our surprise, the in-plane MR $`\mathrm{\Delta }\rho _{ab}/\rho _{ab}`$, as well as $`\rho _{ab}`$ itself, is always smooth in the vicinity of $`T_N`$ and we do not find any anomaly which can be associated with the Néel transition. For the transverse in-plane MR \[$`H\parallel c`$; $`I\parallel a(b)`$\], a small magnitude of the MR and a possible admixture of the asymmetric Hall component ($`\propto H`$) in raw data required more precise measurements with field sweeping to be performed; the resulting field dependences of $`\rho _{ab}`$ are presented in Fig. 4(b). The $`T`$ dependence of $`\mathrm{\Delta }\rho _{ab}/\rho _{ab}`$ is shown in Fig. 4(c), where the in-plane MR reveals no correlation with the AF transition even at this sensitivity level ($`\sim `$10<sup>-5</sup>). Instead it is rather small and remains almost constant down to the temperature region where $`\rho _{ab}`$ acquires localizing behavior \[Fig. 4(a)\] and $`\mathrm{\Delta }\rho _{ab}/\rho _{ab}`$ changes its sign.
An intriguing issue is whether the out-of-plane transport is sensitive exclusively to the long-range order arising below $`T_N`$. If the short-range AF correlations above $`T_N`$ also contribute to $`\rho _c`$ and its MR, we may expect that the AF fluctuations play an essential role not only in the AF compositions but also in the SC compositions. Apparently, the more precise field-sweeping technique should be employed to investigate the behavior above $`T_N`$, where the MR becomes very small. Besides, below $`T_N`$ such measurements allow us to single out the main $`\gamma H^2`$ term from possible additional contributions to the MR.
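Schematically, isolating the $`\gamma H^2`$ term amounts to a one-parameter least-squares fit of the sweep data against $`H^2`$; a toy sketch on synthetic data (not the actual analysis code):

```python
import numpy as np

H = np.linspace(0.0, 10.0, 50)                   # field sweep, tesla
mr = 3e-5 * H**2 + 1e-6 * np.random.randn(50)    # synthetic delta-rho/rho data
gamma = np.sum(mr * H**2) / np.sum(H**4)         # LSQ slope for mr = gamma*H^2
print(gamma)                                     # recovers ~3e-5 per T^2
```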
The $`T`$ dependences of the $`\gamma _{\perp }H^2`$ and $`\gamma _{\parallel }H^2`$ components of $`\mathrm{\Delta }\rho _c/\rho _c`$ (for $`H\perp c`$ and $`H\parallel c`$, respectively) presented in Fig. 5 depict the qualitative difference in the MR behavior for the two directions of the magnetic field. For $`H\parallel ab`$ \[Fig. 5(a)\], the out-of-plane MR changes at $`T_N`$ in a step-like manner by up to two orders of magnitude, where the width of the step is the same as the width of the AF transition itself. The step separates regions below and above $`T_N`$ with relatively weak dependence of the MR on temperature. This behavior implies that the sensitivity to the magnetic field appears abruptly with the onset of the long-range AF order. On the other hand, for $`H\parallel c`$ \[Fig. 5(b)\] we observe an MR peak at $`T_N`$, which is accompanied by a tail spreading to far above $`T_N`$. The MR as a function of temperature has no discontinuity at $`T_N`$ and one can infer from Fig. 5(b) that $`\mathrm{\Delta }\rho _c/\rho _c`$ grows as $`T^{-k}`$ with lowering $`T`$ until the Néel transition interrupts this tendency. However, the right-hand side of the MR peak for $`H\parallel c`$ (the $`T^{-k}`$ behavior) apparently shifts with $`T_N`$ when $`x`$ is changed, which indicates its relation to the AF ordering. Therefore, we can conclude that a mechanism associated with the AF fluctuations dominates the out-of-plane MR in a wide temperature range above $`T_N`$ as well. This observation clearly demonstrates that the short-range AF correlations play an essential role in the interplane transport. At high $`T`$, the longitudinal MR turns out to be weakly \[$`<`$(1.5-2.5)$`\times `$10<sup>-5</sup> at 10 T\] negative \[Fig. 5(b)\], which is reminiscent of the large negative MR observed in moderately underdoped Y-123. We note that this weak negative background has a negligible effect on the MR behavior in the temperature range up to $`2T_N`$ \[Fig. 5(b) shows how its subtraction modifies the data\] and is not important for the present discussion.
The contrasting behavior of the in-plane and out-of-plane MR indicates that changes which occur in the spin subsystem at $`T_N`$ are influential only on the electron transport between the CuO<sub>2</sub> planes and apparently not on the in-plane one. It is known that the heavily underdoped Y-123 above $`T_N`$ possesses well-developed dynamic AF correlations in the CuO<sub>2</sub> planes and the Néel temperature actually corresponds to the establishment of AF order along the $`c`$-axis. The symmetry change accompanying the long-range order and a change in the spin dynamics are the only two mechanisms for the Néel transition to influence the electron transport. In spite of the sharp increase in $`\rho _c`$ upon cooling through $`T_N`$, opening of a gap in the quasiparticle energy spectrum due to the magnetic superstructure is unlikely, since one can hardly imagine a gap formation having no impact on the in-plane transport. On the other hand, it is possible that the freezing of the spin degrees of freedom below $`T_N`$ causes an increase in $`\rho _c`$, if the spin fluctuations assist the electron hopping between the CuO<sub>2</sub> planes. Since an increase in $`\rho _c`$ also takes place when the magnetic field is applied, one can infer that the field suppression of the spin fluctuations is likely to be the main source of the positive out-of-plane MR in our heavily underdoped Y-123. Also, the dramatic changes in the out-of-plane transport associated with the evolution of the magnetic state might suggest that it is the spin subsystem that is responsible for the charge confinement within the CuO<sub>2</sub> planes.
Now let us discuss the actual mechanism which gives rise to the peculiar transport properties observed here. A possible picture to account for the observed features is the segregation of the doped holes into “stripes” which separate AF domains . In this picture, the confinement of the charges to the CuO<sub>2</sub> planes is replaced, in a sense, by confinement to the quasi-1D stripes. Since the formation of the stripes themselves is governed by the magnetic interactions, it is not surprising that the spin degrees of freedom play a dominant role in the hole confinement and hence in the out-of-plane charge transport. Also, we can expect the in-plane MR to be very weak in this picture; the orbital MR term is irrelevant for carriers moving along quasi-1D stripes, since the magnetic field cannot bend their trajectories. The spin-charge separation (naturally expected for 1D stripes ) and the spin gap formed in both the stripes and their AF environment imply that the spin-dependent scattering for the charge transport along stripes should not be large.
In summary, the out-of-plane transport in heavily underdoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> is found to show anomalous magnetoresistance associated both with the Néel ordering and with the AF correlations above $`T_N`$. The MR behavior gives evidence that the spin fluctuations play an essential role in the interplane transport, and hence suggests that the charge confinement within the CuO<sub>2</sub> planes is also fundamentally related to the spin degrees of freedom.
We are grateful to L. P. Kozeeva for providing the Lu-123 crystals. A.N.L. acknowledges the support from JISTEC through an STA fellowship.
# A new model for the quark mass matrices
(28 July 1999)
## Abstract
I present a new model for the quark mass matrices, which uses four scalar doublets together with a horizontal symmetry $`S_3\times Z_3`$. The model is inspired by a suggestion made a few years ago by Ma, but it is different. The predictions $`\left|V_{ts}\right|\simeq m_s/m_b`$ and $`\left|V_{ub}/V_{cb}\right|>0.085`$ are obtained. Flavour-changing neutral Yukawa interactions do not exist in the down-type-quark sector.
A few years ago, Ernest Ma put forward a model for the quark mass matrices based on the discrete symmetry $`S_3\times Z_3`$. His model is characterized by the absence of flavour-changing neutral Yukawa interactions (FCNYI) from the charge-$`2/3`$-quark sector. Ma’s model predicts rather low values for both $`\left|V_{ub}/V_{cb}\right|`$ and $`m_s/m_d`$; those values are moreover correlated, with a larger $`\left|V_{ub}/V_{cb}\right|`$ implying a lower $`m_s/m_d`$, and vice-versa . In spite of this problem, Ma’s mass matrices should be praised: they are not postulated as an Ansatz, a “scheme”, or a “texture”, rather they follow from a complete model, with a well-defined field content and a well-defined internal symmetry. As a consequence, Ma’s model—which needs to be supplemented by soft symmetry breaking in the scalar potential —is self-contained and consistent from the point of view of quantum field theory. This situation contrasts with the one typical of many, sometimes otherwise quite successful, schemes or textures that have been proposed for the quark mass matrices. Most of those schemes cannot be justified in terms of a complete theory . In this very important respect, Ma’s model is clearly superior.
In this Brief Report I remark that the role of the up-type and down-type quarks in Ma’s model may be interchanged, one then obtaining another viable model for the mass matrices. The new model enjoys the same features of self-containedness and consistency as Ma’s one. It leads to quite distinct predictions for the quark masses and mixings. A very clear-cut prediction is $`\left|V_{ts}\right|\simeq m_s/m_b`$; as a consequence of this relation and of the measured value of $`\left|V_{cb}\right|\simeq \left|V_{ts}\right|`$, the strange-quark mass must be in the upper part of its allowed range. It also predicts $`\left|V_{ub}/V_{cb}\right|>0.085`$. FCNYI are absent from the charge-$`1/3`$-quark sector, impeding tree-level contributions to the mass differences in the $`K^0`$–$`\overline{K^0}`$ and $`B_d^0`$–$`\overline{B_d^0}`$ systems, and to the CP-violating parameter $`ϵ`$.
In my model there are four Higgs doublets $`\varphi _a`$ ($`a=1,2,3,4`$) and two $`Z_3`$ symmetries. I denote $`p_{Ri}`$ ($`i=1,2,3`$) the right-handed charge-$`2/3`$ quarks, $`n_{Ri}`$ the right-handed charge-$`1/3`$ quarks, and $`q_{Li}=(p_{Li},n_{Li})^T`$ the doublets of left-handed quarks. The quantum numbers of the various fields under $`Z_3^{(1)}`$ and $`Z_3^{(2)}`$ are given in Table 1.
Besides $`Z_3^{(1)}`$ and $`Z_3^{(2)}`$, there is one further horizontal symmetry, which effects the interchanges
$$\varphi _1\leftrightarrow \varphi _2,q_{L2}\leftrightarrow q_{L3},p_{R2}\leftrightarrow p_{R3},n_{R2}\leftrightarrow n_{R3},$$
(1)
and leaves all other fields invariant. This symmetry commutes with $`Z_3^{(2)}`$ but it does not commute with $`Z_3^{(1)}`$. Hence, the internal-symmetry group of the model is $`S_3\times Z_3^{(2)}`$.
As a consequence of this internal symmetry, the quark mass matrices take the form
$$M_p=\left(\begin{array}{ccc}y_1v_3^{*}& y_2v_2^{*}& y_2v_1^{*}\\ y_3v_2^{*}& 0& y_4v_4^{*}\\ y_3v_1^{*}& y_4v_4^{*}& 0\end{array}\right),M_n=\left(\begin{array}{ccc}y_5v_3& 0& 0\\ 0& y_6v_2& 0\\ 0& 0& y_6v_1\end{array}\right),$$
(2)
where the Yukawa coupling constants $`y_1`$–$`y_6`$ and the vacuum expectation values (VEVs) $`v_a=\langle 0\left|\varphi _a^0\right|0\rangle =\left|v_a\right|\mathrm{exp}\left(i\theta _a\right)`$ are in general complex. One identifies $`\left|y_5v_3\right|=m_d`$, the mass of the down quark, while $`\left|y_6v_2\right|=m_s`$ and $`\left|y_6v_1\right|=m_b`$ are the masses of the strange quark and of the bottom quark, respectively. Thus, $`\left|v_2/v_1\right|=m_s/m_b\equiv r`$. This ratio of VEVs being different from $`1`$, the internal symmetry of Eq. (1) is spontaneously broken.<sup>1</sup> (<sup>1</sup>The spontaneous breaking of the interchange symmetry of Eq. (1) follows from its soft breaking in the scalar potential, which is achieved through the introduction of a term $`\mu \left(\varphi _1^{\dagger }\varphi _1-\varphi _2^{\dagger }\varphi _2\right)`$.)
One may eliminate most of the phases in the mass matrices by means of rephasings of the quark fields, obtaining
$$M_p=\left(\begin{array}{ccc}f& rge^{i\psi }& g\\ rhe^{i\psi }& 0& a\\ h& a& 0\end{array}\right),M_n=\left(\begin{array}{ccc}m_d& 0& 0\\ 0& m_s& 0\\ 0& 0& m_b\end{array}\right),$$
(3)
where $`a`$, $`f`$, $`g`$, and $`h`$ are real and non-negative. $`M_p`$ is bi-diagonalized by the Cabibbo–Kobayashi–Maskawa matrix $`V`$ and another unitary matrix, $`U_R^p`$:
$$VM_pU_R^p=\text{diag}(m_u,m_c,m_t).$$
(4)
Thus,
$$H\equiv M_pM_p^{\dagger }=\left(\begin{array}{ccc}f^2+g^2\left(1+r^2\right)& ag+rfhe^{-i\psi }& fh+rage^{i\psi }\\ ag+rfhe^{i\psi }& a^2+r^2h^2& rh^2e^{i\psi }\\ fh+rage^{-i\psi }& rh^2e^{-i\psi }& a^2+h^2\end{array}\right)=V^{\dagger }\left(\begin{array}{ccc}m_u^2& 0& 0\\ 0& m_c^2& 0\\ 0& 0& m_t^2\end{array}\right)V,$$
(5)
and one immediately sees that
$$\frac{H_{33}-H_{22}}{\left|H_{23}\right|}=\frac{1}{r}-r=\frac{m_b}{m_s}-\frac{m_s}{m_b}.$$
(6)
Using $`H_{22}\ll H_{33}\simeq m_t^2`$ and $`\left|H_{23}\right|\simeq m_t^2\left|V_{ts}\right|`$, one finds the main prediction of this model,
$$\left|V_{ts}\right|\simeq \frac{m_bm_s}{m_b^2-m_s^2}\simeq \frac{m_s}{m_b}.$$
(7)
Equation (7) is almost exact. In practice, we may write it with $`\left|V_{ts}\right|`$ substituted by the more interesting parameter $`\left|V_{cb}\right|`$, obtaining the slightly worse approximation
$$\left|V_{cb}\right|\simeq \frac{m_s}{m_b}.$$
(8)
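As a concrete check of Eq. (8), the sketch below numerically diagonalizes $`H=M_pM_p^{\dagger }`$ for one illustrative choice of the parameters of Eq. (3) and compares $`\left|V_{ts}\right|`$ with $`m_s/m_b`$; the parameter values are assumptions chosen only to respect the mass hierarchy, not a fit to data.

```python
import numpy as np

# Illustrative parameters of Eq. (3), roughly in GeV; r = m_s/m_b.
r, a, f, g, h, psi = 0.033, 2.0, 0.1, 0.5, 170.0, 0.05
e = np.exp(1j * psi)

Mp = np.array([[f,         r * g * e, g  ],
               [r * h * e, 0.0,       a  ],
               [h,         a,         0.0]], dtype=complex)

# H = Mp Mp^dagger = V^dagger diag(m_u^2, m_c^2, m_t^2) V, cf. Eq. (5)
Hmat = Mp @ Mp.conj().T
w, U = np.linalg.eigh(Hmat)      # Hmat = U diag(w) U^dagger, w ascending
V = U.conj().T                   # so that V Hmat V^dagger is diagonal

m_u, m_c, m_t = np.sqrt(w)
print("up-type masses:", m_u, m_c, m_t)
print("|V_ts| =", abs(V[2, 1]), " vs  m_s/m_b =", r)
```

For these (or any similarly hierarchical) inputs, the printed $`\left|V_{ts}\right|`$ reproduces $`m_s/m_b`$ to within a few per cent, as Eq. (7) anticipates.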
I use the quark masses renormalized at $`1\text{GeV}`$
$$m_s=\left(175\pm 25\right)\mathrm{MeV},m_b=\left(5.3\pm 0.1\right)\mathrm{GeV}.$$
(9)
The scale uncertainty on the light-quark masses is substantial, while their ratios are relatively well known; in particular ,
$$\frac{m_s}{m_u}=34.4\pm 3.7.$$
(10)
Comparing Eqs. (8) and (9) with the experimental value
$$\left|V_{cb}\right|=0.0395\pm 0.0017,$$
(11)
one sees that the main prediction of the model is quite well verified; as a matter of fact, the smallness of the error bar in Eq. (11) allows us to constrain $`m_s`$ to be in the highest part of its allowed range:
$$m_s\left(1\mathrm{GeV}\right)>190\mathrm{MeV}.$$
(12)
In order to find out other predictions of the model one must treat it numerically. One easily concludes that $`h\simeq m_t`$ and that the phase $`\psi `$ is very close to zero. Contrary to what happens in most Ansätze and textures, $`\left|V_{us}\right|`$ is not related to quark-mass ratios; rather, it must be fitted to its experimental value $`\simeq 0.22`$. The exact value of the top-quark mass $`m_t`$, being quite high, is practically irrelevant. One finds
$$\left|\frac{V_{ub}}{V_{cb}}\right|>0.085,$$
(13)
to be compared with the experimental value
$$\left|\frac{V_{ub}}{V_{cb}}\right|=0.08\pm 0.02.$$
(14)
My model easily accommodates $`\left|V_{ub}/V_{cb}\right|`$ as large as $`0.3`$.<sup>2</sup> (<sup>2</sup>The error bar in Eq. (14) is probably under-estimated, as there is a substantial uncertainty in the theoretical modelling of $`b\to u`$ decays. Maybe one should not exclude the possibility that $`\left|V_{ub}/V_{cb}\right|`$ is substantially higher than $`0.1`$.) On the other hand, $`\left|V_{ub}/V_{cb}\right|<0.09`$ is only marginally possible.
The CP-violating invariant $`J`$ is small because $`\psi `$ is so close to zero. One obtains $`\left|J\right|<3.5\times 10^{-5}`$, but usually $`J`$ is barely enough to account for the observed value of $`ϵ`$. This is not a problem, since there are in the model extra CP-violating box diagrams, in particular those with virtual charged scalars.
In order to work out the Yukawa interactions, one should first expand the scalar doublets as
$$\varphi _a=e^{i\theta _a}\left(\begin{array}{c}\varphi _a^+\\ \left|v_a\right|+\frac{\rho _a+i\eta _a}{\sqrt{2}}\end{array}\right),$$
(15)
where $`\rho _a`$ and $`\eta _a`$ are Hermitian fields. Their Yukawa interactions are given by
$$\begin{array}{ccl}\mathcal{L}_\mathrm{Y}^{(\mathrm{q})}& =& \cdots -{\displaystyle \frac{\left(\rho _1+i\eta _1\right)m_b\overline{b_L}b_R}{\sqrt{2}\left|v_1\right|}}-{\displaystyle \frac{\left(\rho _2+i\eta _2\right)m_s\overline{s_L}s_R}{\sqrt{2}\left|v_2\right|}}-{\displaystyle \frac{\left(\rho _3+i\eta _3\right)m_d\overline{d_L}d_R}{\sqrt{2}\left|v_3\right|}}\\ & & -(\overline{u_L},\overline{c_L},\overline{t_L})V\left(\begin{array}{ccc}{\displaystyle \frac{\rho _3-i\eta _3}{\sqrt{2}\left|v_3\right|}}f& {\displaystyle \frac{\rho _2-i\eta _2}{\sqrt{2}\left|v_2\right|}}rge^{i\psi }& {\displaystyle \frac{\rho _1-i\eta _1}{\sqrt{2}\left|v_1\right|}}g\\ {\displaystyle \frac{\rho _2-i\eta _2}{\sqrt{2}\left|v_2\right|}}rhe^{i\psi }& 0& {\displaystyle \frac{\rho _4-i\eta _4}{\sqrt{2}\left|v_4\right|}}a\\ {\displaystyle \frac{\rho _1-i\eta _1}{\sqrt{2}\left|v_1\right|}}h& {\displaystyle \frac{\rho _4-i\eta _4}{\sqrt{2}\left|v_4\right|}}a& 0\end{array}\right)U_R^p\left(\begin{array}{c}u_R\\ c_R\\ t_R\end{array}\right)\end{array}$$
(22)
In the first line of Eq. (22) one observes the absence of FCNYI with the down-type quarks. Unfortunately, the physical neutral scalars result from an unspecified orthogonal rotation of the $`\rho _a`$ and $`\eta _a`$—wherein $`\sum _{a=1}^4\left|v_a\right|\eta _a`$ is a Goldstone boson. This is equivalent to saying that the symmetries of the model do not determine the masses and mixings of the neutral scalars. Moreover, the constraints $`\left|v_2/v_1\right|=m_s/m_b`$ and $`\sum _{a=1}^4\left|v_a\right|^2=\left(2\sqrt{2}G_F\right)^{-1}`$ are insufficient to determine the four $`\left|v_a\right|`$. Under these conditions, any attempt at an evaluation of the effects of the neutral Yukawa interactions—in particular enhanced $`D^0`$–$`\overline{D^0}`$ mixing, which might be an interesting consequence of the present model—cannot be rigorous and has little genuine value.<sup>3</sup> (<sup>3</sup>The studies of the FCNYI in Ma’s model suffer from the same limitation: many assumptions about the values of the $`\left|v_a\right|`$ and of the neutral-scalar mixings had to be made.) The same may of course be said about an evaluation of the effects of the charged Yukawa interactions, including their contribution to $`ϵ`$.
In conclusion, I have shown that, in Ma’s model for the quark mass matrices, the roles of the up-type and down-type quarks may be interchanged, one then obtaining a different viable model. The new model makes the sharp prediction $`\left|V_{cb}\right|\simeq m_s/m_b`$ and forces $`m_s`$ and $`\left|V_{ub}/V_{cb}\right|`$ to be close to the upper end of their allowed ranges. Flavour-changing neutral Yukawa interactions are absent from the charge-$`1/3`$-quark sector. The model has the distinctive advantages of having a well-defined field content and horizontal symmetry, and of not containing any poorly justified assumptions.
# Variability of the extreme z=4.72 blazar, GB 1428+4217
## 1 Introduction
At redshift $`z=4.72`$, GB 1428+4217 is the most distant X-ray source known (Hook & McMahon 1997; Fabian et al 1997; 1998). Its bright X-ray flux, of about $`3\times 10^{-12}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ in the 2–10 keV band, and radio flux, of about 100 mJy at 15 GHz, strongly suggest that the object is a blazar pointed toward us. The previous observations did not however clearly show the flux variability which is common to blazars. We have therefore observed GB 1428+4217 again in X–rays with the ROSAT HRI and have monitored it for several months with the Ryle radio telescope. We report here on our discovery of the expected X–ray and radio variability. We also now make a detailed comparison of its properties with those of nearby blazars.
## 2 ROSAT and Radio variability data
GB 1428+4217 was observed 4 times with the ROSAT HRI during 1997 December and 1998 January. The source is very clearly detected each time and shows significant variability (see Fig. 1 where we include the earlier HRI point from 1996). It varied by a factor of about two over a timescale of two weeks (or less), which corresponds to less than 2.5 days in the restframe of the source. We note that previous ASCA and serendipitous ROSAT detections of the source give a flux consistent with that of 1996 July.
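For definiteness, the quoted restframe timescale follows from cosmological time dilation:

$$\mathrm{\Delta }t_{\mathrm{rest}}=\frac{\mathrm{\Delta }t_{\mathrm{obs}}}{1+z}\approx \frac{14\mathrm{days}}{5.72}\approx 2.4\mathrm{days}.$$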
The Ryle Telescope at Cambridge was used to monitor the flux density of GB 1428+422 at 15 GHz on about 50 occasions between 1998 April and December (unfortunately much later than the HRI pointing). These observations, usually of short duration (typically $`<`$ 1 h), were made during gaps in the regular schedule of the telescope. A more detailed description of the observing technique is given in Pooley & Fender (1997). The flux–density scale of the observations was established by a nearby observation of either 3C 48 or 3C 286. Similar datasets have shown that the overall calibration in such cases has an r.m.s. scatter of less than 3 per cent.
In Fig. 2 the flux density of GB 1428+422 is plotted: significant variability can be detected, with an amplitude of $`\sim `$ 40 per cent over about three months, and $`\sim `$ 15 per cent on a timescale of ten days.
We therefore conclude that the significant variability on short timescales detected both in the X–ray and radio bands strongly confirms the identification of GB 1428+4217 with a blazar. In the following we will therefore consider the variability and spectral properties of this source, and compare them with those of nearby blazars, with the aim of gaining insight into both the physics and the evolutionary behavior of this class of AGN.
## 3 Infrared Observations
In addition to the broad band B and R magnitudes (and monochromatic continuum flux at a rest frame wavelength of 1500Å) reported by Hook & McMahon (1998), we present here $`J`$, $`H`$ and $`K`$ observations. They were carried out on 1997 March 26 with the United Kingdom Infra-Red Telescope (UKIRT) using the $`256^2`$ InSb array based camera IRCAM3 in the 0.28arcsec/pixel mode with exposure times per waveband of 900 s. The data underwent dark subtraction, flat-fielding, and mosaicing of the dithered images using the Starlink IRCAMDR software. Photometry was carried out using apertures of diameter 5 arcsec calibrated against similarly analyzed standard stars from Casali & Hawarden (1992). In Table 1, we list the observed magnitudes and derived fluxes.
The spectral index over the range covered by the optical and IR data is $`\sim `$0.0 ($`F_\nu \propto \nu ^\alpha `$). This is bluer than the canonical $`\alpha _{\mathrm{uv}}`$ of $`-0.7`$ (eg. Fall, Pei & McMahon 1989), which means that there is no evidence for reddening over the rest frame spectral range 1500–4000Å.
## 4 Variability constraints
The spectral energy distribution (SED) of this source already pointed towards its identification with a blazar (Fabian et al. 1997, 1998). In particular, Fabian et al. (1998) showed that the (poorly sampled and not simultaneous) SED of GB 1428+4217 can be accounted for as non–thermal synchrotron and inverse Compton emission from a relativistically moving source, forming two broad peaks in $`\nu F(\nu )`$, as characteristic of blazars. However, no constraints on its size were available at the time. The X–ray variability timescale inferred from our ROSAT observations sets instead a significant upper limit on the dimension. We therefore re-consider the modeling of the SED and find that a broad band energy distribution and flat X–ray spectrum consistent with the data can still be found adopting a simple homogeneous model. As an example of the results obtained, in Fig. 3 we show the SED from one of the specific models (see caption) proposed by Fabian et al. (1998), where the intrinsic dimensions and Doppler factor are of order $`R\sim 5\times 10^{16}`$ cm and $`\delta \sim 20`$, respectively. As already pointed out by Fabian et al., the parameters inferred from the modeling are globally consistent with those deduced for larger samples of blazars at lower redshifts (Ghisellini et al. 1998).
A further hint that beaming is involved is given by the rate of change of luminosity, $`\mathrm{\Delta }L/\mathrm{\Delta }t>5\times 10^{41}`$ erg s<sup>-2</sup>. The simple efficiency limit (Fabian 1979; Brandt et al. 1999) then yields a radiative efficiency for the source $`>`$ 20 per cent if only the luminosity in the ROSAT band is considered, and about ten times higher if one considers the total X–ray luminosity.
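(The quoted efficiency follows if one takes the commonly used form of this limit—an assumption on our part—namely $`\mathrm{\Delta }L/\mathrm{\Delta }t\lesssim 2\times 10^{42}\eta `$ erg s<sup>-2</sup>, so that

$$\eta \gtrsim \frac{\mathrm{\Delta }L/\mathrm{\Delta }t}{2\times 10^{42}\mathrm{erg}\mathrm{s}^{-2}}\approx \frac{5\times 10^{41}}{2\times 10^{42}}\approx 0.25,$$

consistent with the $`>`$ 20 per cent stated above for the ROSAT band.)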
## 5 Broad band spectral energy distribution
A closer comparison can now be made between the spectral properties of GB 1428+4217 and those of nearby blazars. Differences can give important hints on the evolution in the intrinsic or environmental properties of radio–loud AGN.
Here we consider as quantitative SED indicators the broad band spectral indices $`\alpha _{\mathrm{ro}}`$, $`\alpha _{\mathrm{ox}}`$ and $`\alpha _{\mathrm{rx}}`$.<sup>1</sup> (<sup>1</sup>The quasar restframe radio, optical and X–ray monochromatic fluxes are calculated at 5 GHz, 5500 Å and 1 keV.)
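A minimal sketch of how such two-point indices can be evaluated, assuming the standard convention $`F_\nu \propto \nu ^{-\alpha }`$ between the pivot frequencies; the flux values below are placeholders, not data from this paper.

```python
import numpy as np

def two_point_index(f1, nu1, f2, nu2):
    """Broad-band spectral index, with F_nu ~ nu^(-alpha):
    alpha_12 = -log10(f2/f1) / log10(nu2/nu1)."""
    return -np.log10(f2 / f1) / np.log10(nu2 / nu1)

# Rest-frame pivots: 5 GHz, 5500 A, 1 keV (see footnote 1)
nu_r = 5.0e9                     # Hz
nu_o = 3.0e10 / 5500e-8          # c / lambda for 5500 A, in Hz
nu_x = 2.418e17                  # 1 keV in Hz

f_r, f_o, f_x = 1.0e-25, 1.0e-28, 1.0e-31   # placeholder fluxes (cgs)
print(two_point_index(f_r, nu_r, f_o, nu_o),   # alpha_ro
      two_point_index(f_o, nu_o, f_x, nu_x),   # alpha_ox
      two_point_index(f_r, nu_r, f_x, nu_x))   # alpha_rx
```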
Because of the uncertainties due to the flux variability, we estimate the ranges spanned by $`\alpha _{\mathrm{ro}}`$, $`\alpha _{\mathrm{ox}}`$ and $`\alpha _{\mathrm{rx}}`$ by considering the extremes of the observed radio and X–ray flux ranges. The two areas in Fig. 4 are representative of the intervals obtained. In this same figure, the spectral indices derived for GB 1428+4217 are compared with those obtained for complete samples of BL Lac objects and flat spectrum radio quasars (FSRQ; see Fossati et al. 1998 for details). Two other $`z>4`$ blazars, GB1508+5714 at $`z=4.3`$ (Moran & Helfand 1997) and RXJ 1028.6-0844 at $`z=4.28`$ (Zickgraf et al. 1997), appear similar to GB 1428+4217, subject to the uncertainty in the necessary optical K-correction for these objects (we adopt an optical spectral index for them of 0.7).
These three sources lie apart from the nearby FSRQ reported. They do however follow the general trends which hold for the entire blazar class. It has been pointed out that if blazars are considered according to their total (and radio) power, then the presence/luminosity in (broad) emission lines correlates with the shape of the SED. In particular, the position in energy of the two broad continuum peaks and their relative intensity (power in the low energy component with respect to that in the high energy one) decrease with increasing total source power. This also translates into an increase of the $`\alpha _{\mathrm{ro}}`$ and $`\alpha _{\mathrm{rx}}`$ spectral indices, and a flattening of the latter at the highest luminosities (see Fossati et al. 1998). Note indeed that GB 1428+4217 follows these trends in the spectral index plane as predicted by its radio power. An interesting consequence is that one would expect the synchrotron peak to be located in the mm band, well below the apparent optical peak of the (sparse) SED shown in Fig. 3.
Although the behaviour of GB 1428+4217, and of the other two $`z>4`$ blazars, fits the blazar scenario just described, the X–ray luminosities of the sources still exceed those predicted by the above correlations; the values of $`\alpha _{\mathrm{ox}}`$ are significantly flatter than expected for the given source powers. Two caveats should however be remembered, namely the critical role of variability when discussing properties of blazars, and the possible influence of selection effects in detecting X–ray emission from GB 1428+4217. Indeed, a similar object with a less extreme X–ray luminosity (fully consistent with the trends discussed above) would not have been followed up so readily in the X–ray band. Similar considerations apply for the other two high redshift objects, although we stress that the uncertainties in the (radio, optical and X–ray) spectral slopes of these two sources are large.
We now consider the implication of these results. Within the most widely accepted blazar scenario the two peaks of the SED are interpreted as due to synchrotron (the low energy component) and inverse Compton (the high energy one) processes. Although the nature of the seed photons \[internal/synchrotron (SSC) vs external radiation field (EC)\] is still not fully settled, evidence has been found that the source power is directly linked to the magnitude of the external radiation field, thus implying the increasing importance of the EC mechanism over the SSC one with increasing power. This also means more effective radiative cooling of electrons (by inverse Compton) - and thus a possible interpretation of the lowering of the energy of particles which emit at the peaks - and an increase in the relative importance of the Compton spectral component (Ghisellini et al. 1998).
If so, GB 1428+4217 (and maybe the other two high redshift quasars) represents a powerful source with an intense (or even extreme) external radiation field. This does not seem to be supported by the lack of very luminous emission lines (the photon field of which could be an important contributor to the external seed field).
However, there is interesting evidence for an intense optical–UV continuum flux, which cannot be easily interpreted as non–thermal emission. It seems plausible that this component (and that responsible for the intense Compton emission) has a nuclear origin. In fact no other radiation field (e.g. any plausible star cluster, or the cosmic microwave background radiation) seems to be energetically relevant. It is possible that this component is analogous to the excess optical–UV emission in nearby quasars and might be ascribed to thermal dissipation from accreting material, which would itself contribute to the local external radiation field.
Independently of its origin it should be stressed that the detection of this optical component sets extremely tight limits to the presence of any dust along the line of sight to this high $`z`$ source.
## 6 Discussion and conclusions
Evidence for X–ray and radio variability in GB 1428+4217 confirms the blazar nature of this quasar.
Interestingly and perhaps surprisingly, the properties of such an extreme high redshift source seem to fit globally the scenario for low redshift blazars. Although no conclusive evidence of peculiar intrinsic or environmental conditions can be found, there is some indication of even more extreme Compton cooling which might be associated for example with an unusually high external radiation field.
While no conclusions can be drawn on this basis, the issue remains open. Are all high redshift blazars characterized by such high (X–ray) luminosities? In Fig. 5 we show the broad band energy distributions of GB 1428+4217 and the two other $`z>4`$ radio–loud blazars, GB 1508+5714 and RXJ 1028.6–0844. These are also characterized by a very flat X–ray spectral index and the broad band properties of luminous objects, and at least GB 1508+5714 might share with GB 1428+4217 such an extreme X–ray brightness. Broad band spectral coverage and variability studies of a significant number (possibly a complete sample) of such ‘primordial’ blazars are required.
We finally note that if indeed the detected optical–UV flux can be ascribed to a thermal component produced in the accretion process, a bolometric luminosity of $`\sim 10^{47}\mathrm{erg}\mathrm{s}^{-1}`$ requires the presence of a black hole of a billion solar masses, if accreting at the Eddington rate, which has to be formed by $`z\sim 5`$ (Efstathiou & Rees 1988). Furthermore, one can speculate that if the black hole mass is related to the mass of the galactic bulge according to the relation suggested by Magorrian et al. (1998), a $`\sim 10^{11}\mathrm{M}_{\odot }`$ bulge component also has to be present. Clearly the observational confirmation or rejection of these possibilities is of crucial importance for the study of galaxy and black hole formation and their mutual relationship.
## Acknowledgments
We thank Gabriele Ghisellini for the use of the code for the homogeneous emission model. The Royal Society (ACF, RGM), the Italian MURST (AC), PPARC (KI) and the NASA LTSA Program (WNB) are thanked for financial support. This research was supported in part by the National Science Foundation under Grant No. PHY94-07194 (AC).
## References
Bessell M.S., Brett J.M., 1988, PASP, 100, 1134
Brandt W.N., Boller Th., Fabian A.C., Ruszkowski M., 1999, MNRAS, submitted
Casali M.M., Hawarden T.G, 1992, JCMT-UKIRT Newsletter, 3, 33
Efstathiou G., Rees M.J., 1988, MNRAS, 230, 5P
Fabian A.C., 1979, Proc. Roy. Soc. A, 336, 449
Fabian A.C., Brandt W.N., McMahon R.G., Hook I., 1997, MNRAS, 291, L5
Fabian A.C., Iwasawa K., Celotti A., Brandt W.N., McMahon R.G., Hook I., 1998, MNRAS, 295, L25
Fall S.M., Pei Y.C., McMahon R.G., 1989, ApJ, 341, L5
Fossati G., Maraschi L., Celotti A., Comastri A., Ghisellini G., 1998, MNRAS, 299, 433
Ghisellini G., Celotti A., Fossati G., Maraschi L., Comastri A., 1998, MNRAS, 301, 451
Hook I.M., McMahon R.G., et al, 1994, MNRAS, 273, L63
Hook I.M., McMahon R.G., 1997, MNRAS, submitted
Magorrian J., et al., 1998, AJ, 115, 2285
Moran E.C., Helfand D.J., 1997, ApJ, 484, L95
Padovani P., Urry C.M., 1992, ApJ, 387, 449
Perlman E.S., et al., 1996, ApJS, 104, 251
Pooley G.G., Fender R.P., 1997, MNRAS, 292, 925
Stickel M., Fried J.W., Kühr H., Padovani P., Urry C.M., 1991, ApJ, 374, 431
Stickel M., Meisenheimer K., Kühr H., 1994, A&AS, 105, 211
Wall J.V., Peacock J.A., 1985, MNRAS, 216, 173
Zickgraf F.-J., Voges W., Krautter J., Thiering I., Appenzeller I., Mujica R., Serrano A., 1997, A&A, 323, L21
# Figure 1. A particular configuration of 3 random-turns walkers performing 8 steps in the sequence $`L^4R^4`$. The walk can conveniently be represented diagrammatically, as done at right, according to the rules specified in the text
## Acknowledgement
The financial support of the ARC, including funds to support the visit of G. Olshanski, whose lectures benefited the present work, is acknowledged. Also, the remarks of T.H. Baker on the original manuscript are appreciated.
# The absolute magnitudes of RR Lyrae stars from hipparcos parallaxes (based on data from the ESA Hipparcos astrometry satellite)
## 1 Introduction
RR Lyrae stars are fundamental standard candles and the accurate determination of their absolute luminosity has a wide range of applications, including the derivation of the Hubble constant and the determination of globular clusters ages. The results of the hipparcos mission allow in principle a calibration of this luminosity, based on the parallaxes and proper motions.
Fernley et al. (1998a, hereafter F98) did that by employing the method of statistical parallax on a sample of 84 RR Lyrae stars (out of the 144 they considered) with \[Fe/H\] $`\le -1.3`$. Combining the statistical parallax result with the absolute magnitude of RR Lyrae itself, computed without applying any Lutz-Kelker (LK) type correction (see Lutz & Kelker 1973, Turon Lacarrieu & Crézé 1977, Koen 1992, Oudmaijer et al. 1998), they derived a zero point of 1.05 $`\pm `$ 0.15 mag for the $`M_\mathrm{V}`$-\[Fe/H\] relation, by assuming a slope of 0.18 $`\pm `$ 0.03 (Fernley et al. 1998b). Tsujimoto et al. (1998, hereafter T98) used the statistical parallax method, a maximum likelihood technique and the derived $`M_\mathrm{V}`$ of the star RR Lyrae (with LK correction included) for deriving a combined final value $`M_\mathrm{V}`$ = 0.6-0.7 mag at \[Fe/H\] = $`-1.6`$. Luri et al. (1998, hereafter L98) applied a maximum-likelihood method that takes all available data into account, including parallaxes, proper motions and radial velocities, considering the sample of 144 RR Lyrae stars given in F98. They derived $`M_\mathrm{V}`$ = 0.65 $`\pm `$ 0.23 at an average metallicity of \[Fe/H\] = $`-1.51`$.
The results by F98, T98 and L98 imply dimmer RR Lyrae stars by about 0.3 mag with respect to the results from either the Main Sequence fitting technique using hipparcos subdwarfs (see, e.g., Gratton et al. 1997) or recent theoretical Horizontal Branch models (see, e.g., Salaris & Weiss 1998, Caloi et al. 1997). On the other hand, F98, T98 and L98 agree with the results from Baade-Wesselink analyses, which predict a zero point of about 1.00 mag for the $`M_\mathrm{V}`$-\[Fe/H\] relation (see, e.g., Clementini et al. 1995).
Turon Lacarrieu & Crézé (1977) presented two methods to derive the absolute magnitude of stars from the observed parallaxes, namely using individual LK-corrections and the method of “reduced parallaxes” (hereafter RP) on a sample of stars. The advantages of the RP method are the following: it avoids the biases due to the asymmetry of the errors when transforming the parallaxes into magnitudes, it can be applied to samples which contain negative parallaxes, it is free from LK-type bias if no selection on parallax, or error on the parallax is made (Koen & Laney 1998), and it requires no knowledge about the space distribution of stars. We will apply the RP method to the sample of 144 RR Lyrae stars used by F98, and will derive the zero points of the $`M_\mathrm{V}`$-\[Fe/H\] and $`M_\mathrm{K}`$-$`\mathrm{log}P_0`$ relations. Recently, Koen & Laney (1998) also briefly discussed the application of the RP method to RR Lyrae stars.
## 2 The “reduced parallax” method
Let us consider a relation of the form:
$$M_\mathrm{V}=\delta [\mathrm{Fe}/\mathrm{H}]+\rho .$$
(1)
If $`V`$ is the intensity-mean visual magnitude and $`V_0`$ its reddening corrected value, then one can write:
$$10^{0.2\rho }=\pi \times 0.01\times 10^{0.2(V_0-\delta [\mathrm{Fe}/\mathrm{H}])}\equiv \pi \times \mathrm{RHS},$$
(2)
which defines the quantity RHS and where $`\pi `$ is the parallax in milli-arcseconds. (Equation (2) is simply the distance-modulus relation $`M_\mathrm{V}=V_0+5\mathrm{log}_{10}\pi -10`$, valid for $`\pi `$ in milli-arcseconds, rewritten in exponential form.) A weighted-mean of the quantity 10<sup>0.2ρ</sup> is calculated, with the weight (weight = $`\frac{1}{\sigma ^2}`$) for the individual stars derived from:
$$\sigma ^2=\left(\sigma _\pi \times \mathrm{RHS}\right)^2+\left(0.2\mathrm{ln}(10)\pi \sigma _\mathrm{H}\times \mathrm{RHS}\right)^2,$$
(3)
with $`\sigma _\pi `$ the standard error in the parallax. This follows from the propagation-of-errors in Eq.(2). We have adopted the slope $`\delta =0.18`$ (see the discussion in Fernley et al. 1998b), which is the one used by F98 and which is in agreement with the results from Baade-Wesselink methods (see, e.g., Clementini et al. 1995), Main Sequence fitting (Gratton et al. 1997) and theoretical models (see, e.g., Salaris & Weiss 1998, Cassisi et al. 1999).
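A minimal sketch of this weighted-mean computation (Eqs. 2 and 3); the function and array names are ours, and the inputs would be taken, e.g., from Table 1 of F98.

```python
import numpy as np

def rp_zero_point(plx, sig_plx, V0, feh, delta=0.18, sigma_H=0.15):
    """Reduced-parallax zero point rho of M_V = delta*[Fe/H] + rho.
    plx, sig_plx in milli-arcsec; V0 = dereddened intensity-mean magnitudes."""
    rhs = 0.01 * 10.0**(0.2 * (V0 - delta * feh))
    y = plx * rhs                                # estimates of 10^(0.2*rho), Eq. (2)
    sig2 = (sig_plx * rhs)**2 + (0.2 * np.log(10.0) * plx * sigma_H * rhs)**2  # Eq. (3)
    w = 1.0 / sig2
    mean = np.sum(w * y) / np.sum(w)             # weighted mean of 10^(0.2*rho)
    err = np.sqrt(1.0 / np.sum(w))
    rho = 5.0 * np.log10(mean)                   # invert the 0.2*rho exponent
    return rho, 5.0 * err / (np.log(10.0) * mean)
```

Because the average is taken over $`\pi \times \mathrm{RHS}`$ rather than over magnitudes, stars with negative parallaxes contribute without bias; only the final logarithm requires the weighted mean itself to be positive.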
The sample we consider is identical to that of F98, that is 144 stars out of a total of 180 stars in the hipparcos catalogue. F98 discuss the reasons for discarding the 36 stars. Arguments include the fact that these stars do not have reddening determinations, are not RR Lyrae variables, or have poor quality hipparcos solutions. Table 1 of F98 (retrievable from the CDS) lists all necessary data to perform the above analysis: periods, intensity-mean $`V`$ and $`K`$ magnitudes, colour-excesses $`E(B-V)`$, and metallicities \[Fe/H\]. The extinction is calculated from $`A_\mathrm{V}=3.1E(B-V)`$ (as done by F98).
An important requirement when applying this method is that the value of $`\sigma _\mathrm{H}`$ is small compared to the errors on the parallax. If the dispersion $`\sigma _\mathrm{H}`$ of the exponent in the factor RHS is large, the distribution of errors on the right-hand term in equation 2 is asymmetrical and a bias towards brighter magnitudes is introduced (Feast & Catchpole 1997, Pont 1999). The adopted value of $`\sigma _\mathrm{H}`$ has been computed by considering four different contributions: errors on the intensity-mean $`V`$ values of the RR Lyrae stars (as given in Table 1 of F98), on the extinction (as derived from the errors on $`E(B-V)`$ given in Table 1 of F98), on \[Fe/H\] (again, from Table 1 of F98), and the intrinsic scatter due to evolutionary effects in the instability strip. This last term is the most important one, and we have adopted for it a 1$`\sigma `$ value of 0.12 mag (as in Fernley et al. 1998b), following the results of the exhaustive observational analysis by Sandage (1990). The final value is $`\sigma _\mathrm{H}`$ = 0.15, a quantity small enough in comparison with the parallax errors that no substantial bias is introduced in the right-hand term of equation 2, as we have verified by means of numerical simulations. Even a $`\sigma _\mathrm{H}`$ of 0.20 mag would lead to a bias of at most 0.02 mag.
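In our notation, the adopted value is the quadrature sum of the four contributions listed above,

$$\sigma _\mathrm{H}=\sqrt{\sigma _V^2+\sigma _{A_\mathrm{V}}^2+\left(0.18\sigma _{[\mathrm{Fe}/\mathrm{H}]}\right)^2+\sigma _{\mathrm{evol}}^2}=0.15,$$

with $`\sigma _{\mathrm{evol}}=0.12`$ mag dominating; the remaining three terms together contribute only $`\sqrt{0.15^2-0.12^2}\approx 0.09`$ mag.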
Table 1 lists the values of the zero point with error we obtain with different sample selections for the $`M_\mathrm{V}`$-\[Fe/H\] relation. Solution 1 corresponds to the case of the whole sample; the zero point of 0.67 $`\pm `$ 0.24 mag is about 0.4 mag brighter than the value derived by F98, and consistent with the value listed in Koen & Laney (1998) using the same method with slightly different values for $`\sigma _\mathrm{H}`$. The sample with \[Fe/H\] $`\le -1.3`$ (Solution 2) corresponds to a sample constituted entirely (according to the discussion in F98) by Halo RR Lyrae stars, with a negligible contamination from the Disk population. In this case the zero point is equal to 0.77 $`\pm `$ 0.26 mag; it is slightly fainter than Solution 1, but well in agreement within the statistical errors. We also re-derived the zero point for Solution 2 in the case of $`\sigma _\mathrm{H}`$ = 0.0, and we found a change by only 0.04 mag. A systematic change in the metallicity scale (Solution 4) by 0.15 dex does not affect appreciably the zero point determination, while the result is more sensitive to a systematic variation of the adopted reddenings (Solution 5).
The RP method has also been used to derive the zero point of the $`M_\mathrm{K}`$-$`\mathrm{log}P_0`$ relation. This relation appears to be insensitive to the metallicity (Fernley et al. 1987, Carney et al. 1995) and is also very weakly affected by reddening uncertainties, since $`A_\mathrm{K}=0.112A_\mathrm{V}`$ (Rieke & Lebofsky 1985). Moreover, the intrinsic scatter around this relation is smaller than in the case of the $`M_\mathrm{V}`$-\[Fe/H\] relation (Fernley et al. 1987). In the sample considered here there are 108 RR Lyrae stars with an observed intensity-mean $`K`$ magnitude. The procedure is the same as described before; the only difference is that now, instead of Eq. 1, we use the expression $`M_\mathrm{K}=\delta \mathrm{log}P_0+\rho `$ where $`P_0`$ is the fundamental pulsation period. For the first-overtone RRc variables we have derived the fundamental periods using the relation $`\mathrm{log}(P_0/P_1)`$ = +0.120 (Carney et al. 1995). We adopt a slope $`\delta =-2.33`$ following Carney et al. (1995); for the value of $`\sigma _\mathrm{H}`$ we have considered the same contributions previously described (with the exception, of course, of the contribution due to the error on \[Fe/H\]). In this case the observational estimate of the intrinsic scatter due to the width of the instability strip comes from Carney et al. (1995), and the final value turns out to be $`\sigma _\mathrm{H}`$=0.10.
In Tab. 1 the values of the zero point for the $`M_\mathrm{K}`$-$`\mathrm{log}P_0`$ relation are listed. When considering the entire sample we obtain a zero point of $`-1.28\pm 0.25`$ mag, $`\sim `$ 0.4 mag brighter than the value from the Baade-Wesselink method (see, e.g., Carney et al. 1995). In the case of a pure Halo RR Lyrae sample (\[Fe/H\] $`\le -1.3`$) we obtain $`-1.16\pm 0.27`$ mag, slightly dimmer but again in agreement with the value derived for the whole sample. The influence of $`\sigma _\mathrm{H}`$ is even less than for the $`M_\mathrm{V}`$-\[Fe/H\] relation.
As the sample of the RR Lyrae stars is not volume complete it may be subject to Malmquist type bias. If the space distribution of RR Lyrae is spherical it implies that the true zero points of the $`M_\mathrm{V}`$-\[Fe/H\] and $`M_\mathrm{K}`$-$`\mathrm{log}P_0`$ relations may be fainter by up to 0.03 and 0.01 mag, respectively, for the adopted values of $`\sigma _\mathrm{H}`$. This applies when average absolute magnitudes of a volume and brightness limited sample are compared. Oudmaijer et al. (1999) showed empirically that when the averaging is done over 10$`^{0.2M_\mathrm{V}}`$ the effect of Malmquist bias is less.
In Fig. 1 we compare, for the 62 hipparcos RR Lyrae stars with \[Fe/H\] $`\le -1.3`$ and both observed K and V magnitudes, the true distance moduli derived from the $`M_\mathrm{V}`$-\[Fe/H\] and $`M_\mathrm{K}`$-$`\mathrm{log}P_0`$ relations, using zero points of $`0.77`$ and $`-1.16`$ mag, respectively. Each data point has an error bar of 0.26 mag in the $`x`$ and 0.27 mag in the $`y`$ direction. The comparison of the two photometric distances can in principle give us an independent indication of possible biases in the determination of the zero points of the two relations with the RP method. As is evident from the figure, the distance moduli from both relations agree very well. A linear fit to the data is consistent with a slope of unity, and the dispersion around the 1:1 relation is equal to 0.098 mag. A dispersion of this order is what is expected from the dispersions in the observed $`\mathrm{log}P`$-\[Fe/H\] and $`(V-K)_0`$-$`\mathrm{log}P`$ relations for the RR Lyrae sample.
## 3 Discussion
For their preferred sample of 84 stars with \[Fe/H\] $`\le -1.3`$ F98 obtain a zero point of 1.05 $`\pm `$ 0.15 mag for the $`M_\mathrm{V}`$-\[Fe/H\] relation (assuming a slope of 0.18), in agreement with results from Baade-Wesselink methods. When applying the RP method to the same sample of stars, we find a zero point 0.28 mag brighter. An analogous result, which means a zero point $`\sim `$0.30 mag brighter than the Baade-Wesselink one, is derived for the $`M_\mathrm{K}`$-$`\mathrm{log}P_0`$ relation.
Even if, within the error bars, the results derived with the different methods formally agree, there appears to exist a systematic difference between zero points obtained using the parallaxes directly and zero points obtained by employing methods which are sensitive to proper motions and radial velocities (F98, T98, L98), especially if one also takes into account the results for the hipparcos Cepheids. Also with the Cepheids one finds that methods where the results are mostly sensitive to the proper motions and radial velocities find dimmer zero points for the Cepheids PL-relation compared to methods which directly use the parallax. In particular, using the RP method Feast & Catchpole (1997) derived a zero point of $`-1.43\pm 0.10`$ mag, and Lanoix et al. (1999), using a slightly bigger sample, find $`-1.44\pm 0.05`$ mag. Oudmaijer et al. (1998), using only the positive parallaxes but then correcting for the LK-bias, find $`-1.29\pm 0.08`$ mag. On the other hand, L98 find a zero point of $`-1.05\pm 0.17`$ mag using a maximum likelihood method that takes into account parallaxes, proper motions and velocity information. As discussed by Pont (1999), in this technique the parallaxes do not influence the result to first order, and the method is similar to a statistical parallax analysis. A careful check of all assumptions implicit in the kinematical methods could be the key to understanding the nature of this puzzling disagreement. In the case of the RP method, as discussed extensively in the previous section, the condition for deriving the zero point without introducing a bias is to have $`\sigma _\mathrm{H}`$ small with respect to the errors on the parallaxes; this condition appears to be fulfilled in the sample considered.
Our zero point for the $`M_\mathrm{V}`$-\[Fe/H\] relation is in agreement with results from the Main Sequence fitting technique (Gratton et al. 1997), and from theoretical Horizontal Branch models. In particular, the Horizontal Branch models by Salaris & Weiss (1998) and Cassisi et al. (1999) give a zero point for the Zero Age Horizontal Branch (ZAHB) at the RR Lyrae instability strip in the range 0.74-0.77 mag. To compare the results for the ZAHB brightness with the $`M_\mathrm{V}`$-\[Fe/H\] relations mentioned in this paper, which consider the mean absolute brightness of the RR Lyrae star population at a certain metallicity, one has to apply a correction by $`-0.1`$ mag (see, e.g., Caloi et al. 1997 and references therein) to the ZAHB result; this takes into account the evolution off the ZAHB of the observed RR Lyrae stars. Even after applying this correction the theoretical results are in good agreement with the results from the RP method. Moreover, the zero point derived with the RP method is also in agreement with the recent results by Kovacs & Walker (1999), who derive, by employing linear pulsation models, RR Lyrae luminosities that are brighter by 0.2-0.3 mag with respect to Baade-Wesselink results.
Finally, we want to derive the LMC distance implied by our zero point of the RR Lyrae distance scale. Table 2 collects the available data on RR Lyrae stars in LMC clusters: the name of the cluster, the observed mean $`V`$-magnitude, reddening, metallicity and the difference in distance modulus ($`\mathrm{\Delta }`$) between the cluster and the main body of the LMC. All these data are taken from the references listed. From them the dereddened magnitude at the centre of the LMC (Col. 7), and this value minus the quantity (0.18 \[Fe/H\]) (Col. 8), have been calculated for those clusters with $`\mathrm{\Delta }<`$0.1 mag. At this point we have taken into account the difference in metallicity between the clusters before deriving the LMC distance. More specifically, we have derived the weighted mean of the values in Col. 8 to find an average of 19.38 with an rms dispersion of 0.10 mag, which can be compared directly to the zero point of the $`M_\mathrm{V}`$-\[Fe/H\] relation to find a distance modulus of 18.61 $`\pm `$ 0.28. This result turns out to be consistent with the Cepheid distance to the LMC as derived by Feast & Catchpole (1997) or Oudmaijer et al. (1998).
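For definiteness, the arithmetic behind the quoted modulus is simply

$$\mu _{\mathrm{LMC}}=\langle V_0-0.18\,[\mathrm{Fe}/\mathrm{H}]\rangle -\rho =19.38-0.77=18.61,$$

where the quoted 0.28 error is, we assume, the quadrature sum $`\sqrt{0.26^2+0.10^2}\approx 0.28`$ of the zero-point error (Solution 2) and the rms dispersion of Col. 8.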
### Acknowledgements
René Oudmaijer, Phil James and the referee, Xavier Luri, are warmly thanked for valuable comments and suggestions which improved the presentation of the paper. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
# Atomic Layering at the Liquid Silicon Surface: a First-Principles Simulation
## Abstract
We simulate the liquid silicon surface with first-principles molecular dynamics in a slab geometry. We find that the atom-density profile presents a pronounced layering, similar to those observed in low-temperature liquid metals like Ga and Hg. The depth-dependent pair correlation function shows that the effect originates from directional bonding of Si atoms at the surface, and propagates into the bulk. The layering has no major effects in the electronic and dynamical properties of the system, that are very similar to those of bulk liquid Si. To our knowledge, this is the first study of a liquid surface by first-principles molecular dynamics.
Liquid metal surfaces have attracted much attention during the last years . Their particular properties, very different from those of non-metals, have been investigated by Ångström-resolution experiments, and simulated by different approaches. One of their most interesting features is the atomic layering, a density oscillation that originates at the sharp liquid-vapor interface, and extends several atomic diameters into the bulk. Although some experiments supported an increased surface density at $`l`$-Hg , and some kind of atomic layering was predicted theoretically , its existence has been demonstrated unambiguously only recently by X-Ray reflectivity in Hg , Ga and Ga-In alloys . Most of the experiments have been done close to the low melting temperatures of these metals, but Regan et al. studied the Ga surface up to 170 °C. They found that capillarity waves strongly decrease the reflectivity peak heights, but not the peak widths, suggesting that the decay length of local layering is temperature-independent, and that surface layering is still present quite above the melting point.
Different explanations have been proposed to account for this effect . Rice et al. have argued that the abrupt decay of the delocalized electron density forms a flat potential barrier against which the ions lie orderly, like hard spheres against a hard wall. Tosatti et al. have used the glue model of metallic cohesion to argue that surface atoms, trying to effectively recover their optimal coordination, alternatively increase and decrease their density. Surface layering effects, like surface-enhanced smectic ordering, have also been observed in liquid crystals . In this case, the origin is the tendency of the highly nonspherical molecules to present a particular orientation towards the surface.
Liquid silicon ($`l`$-Si) is a rather peculiar system. Silicon transforms, at 1684 K, from a covalent semiconductor solid, with diamond structure and coordination 4, to a liquid metal. Experiments and MD simulations show that its coordination ($`6-7`$) is lower than that of typical liquids ($`\sim 12`$), due to the persistence of directional bonding in the liquid phase. In spite of the enormous literature on its solid surfaces, very little is known about the structure of the liquid silicon surface. Measurements are very difficult because of its high reactivity and melting temperature. Model calculations, and computer simulations with semiempirical potentials, are also difficult because of the mentioned coexistence of covalent and metallic bonding, and its unknown interplay at the surface.
In this letter we present a study of the $`l`$-Si surface by first-principles molecular dynamics (MD) simulation . This approach deals equally well with covalent and metallic bonding, and it is therefore very well suited for this problem. Electrons are treated by solving the Kohn-Sham equations selfconsistently for each ionic configuration, using the local density approximation for exchange and correlation. The quantum mechanically obtained forces are then used to generate the classical trajectories of the ion cores. The calculations were performed with the SIESTA program using a linear combination of numerical atomic orbitals as the basis set, and norm-conserving pseudopotentials . A uniform mesh with a planewave cutoff of 40 Ry is used to represent the electron density, the local part of the pseudopotential, and the Hartree and exchange-correlation potentials. Only the $`\mathrm{\Gamma }`$ $`k`$-point was used in the simulations, since previous work found cell-size effects to be small. For the present calculation we used a minimal basis set of four orbitals (1 $`s`$ and 3 $`p`$) for each Si atom, with a cutoff radius of 2.65 Å. We have extensively checked the basis with static calculations of different crystalline Si phases and solid surfaces, and MD simulations of the bulk liquid . The energy differences between solid phases are described within 0.1 eV of other ab-initio calculations. The diamond structure has the lowest energy, with a lattice parameter of 5.46 Å (0.5 $`\%`$ larger than the experimental value). Adatom- and dimer-based (111) and (100) surface reconstructions found in other ab-initio calculations are well reproduced, with geometries and relative energies changing less than $`0.1`$ Å and $`\sim `$0.15 eV when moving from a $`\mathrm{\Gamma }`$-point calculation with a minimal basis set to a converged $`k`$-sampling with double-$`\zeta `$ and polarization orbitals. The structural, electronic and dynamical properties of $`l`$-Si are in good agreement with other ab-initio calculations at the same density and temperature . The calculated diffusion constant ($`1.5\times 10^{-4}`$ cm<sup>2</sup>/s) is somewhat smaller than that obtained with a double-$`\zeta `$ or polarized basis ($`1.7-2.0\times 10^{-4}`$ cm<sup>2</sup>/s), which is in agreement with other ab-initio simulations. We interpret that the minimal basis overestimates the energies of the saddle point configurations occurring during diffusion, but we consider that this is not critical for the present application. Also, we leave for a future work the inclusion of spin fluctuations, which affect significantly the diffusion constant but not the structural properties of the liquid .
We first perform a long simulation of bulk $`l`$-Si at $`T`$=1800 K, using a cubic 64-atom cell with periodic boundary conditions. The fixed cell size (10.58 Å) was adjusted to obtain zero mean pressure, and corresponds to a density 3$`\%`$ smaller than the experimental density near the melting point. We then construct our initial 96-atom slab by repeating one bulk unit cell in the $`x`$ and $`y`$ directions, and one and a half cells in the $`z`$ direction, plus 10 Å of vacuum. No particles leave the slab during the 30 ps simulation. After a relaxation of 10 ps, the system reaches equilibrium and the averaged quantities are essentially the same for the next or the last 10 ps, and for both sides of the slab. These long relaxation and observation times are required because the calculated density autocorrelation time at the surface ($`\sim 1`$ ps) is considerably longer than the typical bulk-liquid correlation times ($`\sim `$ 0.1 ps) . The average surface energy (836 $`\pm `$ 40 dyn/cm) is in good agreement with the experimental surface tension (850 dyn/cm at 1800 K) , suggesting a small entropic contribution.
In Fig. 1 (solid line) we present the ionic density profile $`\rho (z)`$. It shows a pronounced atomic layering, with similar features as those reported for the Ga surface . Like in that case, $`\rho (z)`$ can be fitted accurately by a sharp error function at the surface, and a sinusoidal wave with an exponential decay towards the bulk: by superimposing two such functions (not shown), for both surfaces, we obtain similar values of the parameters and an oscillation period of 2.5 Å. To check that the observed layering is not a sign of incipient crystallization, we have computed several quantities in the central region of the slab ($`|z|<3.7`$ Å). The radial and angular distribution functions, electronic and vibrational densities of states, and the diffusion constant, are all very similar to those of bulk $`l`$-Si, and bear no resemblance to those in the solid phases. As an example, we compare in Fig. 2 the bond-angle distribution function for the bulk and slab simulations, using a bond cutoff distance of $`r_m`$=3.10 Å. For a better understanding of the origin of the layering, we compute the normalized density-density correlation function:
$$c_\rho (z_0,z)=\frac{\langle \delta \rho (z_0,t)\delta \rho (z,t)\rangle }{\langle \delta \rho ^2(z_0,t)\rangle ^{\frac{1}{2}}\langle \delta \rho ^2(z,t)\rangle ^{\frac{1}{2}}},$$
(1)
where $`\langle \cdots \rangle `$ denotes time average and $`\delta \rho `$ is the difference between the instantaneous density at time $`t`$, $`\rho (𝐫,t)=\sum _{i=1}^N\delta (𝐫-𝐫_i(t))`$, and the average density $`\rho (𝐫)=\langle \rho (𝐫,t)\rangle `$, where $`\delta (𝐫)`$ is Dirac’s function. Fig. 1 also shows $`c_\rho (z_0,z)`$ for $`z_0`$ at the positions of the outermost peaks of each side. Its decaying oscillation is clearer than that of the density profile, all of whose relevant features match very well with the superposition of the two $`c_\rho `$’s. The apparent lack of decay of $`\rho (z)`$ towards the interior is peculiar to our particular slab thickness, because the superposition is positive at the center of the slab, and negative at the two surfaces. Most important is, however, that the two surface-induced oscillations are clearly independent of each other ($`c_\rho `$’s out of phase), and incommensurate to the slab thickness. The density layering is thus an intrinsic surface feature and not a result of finite size effects.
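A minimal sketch of how $`c_\rho (z_0,z)`$ can be accumulated from a stored MD trajectory; the array layout and names are our assumptions.

```python
import numpy as np

def density_correlation(z_traj, z_edges, i0):
    """c_rho(z0, z) of Eq. (1). z_traj: (n_frames, n_atoms) z coordinates;
    z_edges: bin edges of the z grid; i0: index of the reference bin z0."""
    # instantaneous binned density rho(z, t), one histogram per frame
    rho_t = np.array([np.histogram(f, bins=z_edges)[0] for f in z_traj])
    drho = rho_t - rho_t.mean(axis=0)            # delta rho(z, t)
    cov = (drho * drho[:, [i0]]).mean(axis=0)    # <delta rho(z0,t) delta rho(z,t)>
    return cov / np.sqrt(drho.var(axis=0) * drho[:, i0].var())
```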
In order to obtain information about the bond orientations at the surface we calculate the two-particle density:
$$\rho _2(𝐫_0;𝐫)=\frac{N/(N-1)}{\rho (𝐫_0)}\left\langle \sum _{i=1}^{N}\sum _{j\ne i}\delta (𝐫_0-𝐫_i)\delta (𝐫-𝐫_j)\right\rangle .$$
(2)
To represent $`\rho _2(𝐫_0;𝐫)`$, we first average over the directions parallel to the surface:
$$\rho _2(z_0;z,x)=\frac{1}{2\pi xA}\int d^3𝐫_0^{\prime }\int d^3𝐫^{\prime }\rho _2(𝐫_0^{\prime };𝐫^{\prime })\delta (z_0-z_0^{\prime })\delta (z-z^{\prime })\delta \left(x-\sqrt{(x^{\prime }-x_0^{\prime })^2+(y^{\prime }-y_0^{\prime })^2}\right),$$
(3)
where $`A`$ is the area of the simulation cell. In Fig. 3 we show $`\rho _2(z_0;z,x)`$ for $`z_0`$ located at the three peaks of $`\rho (z)`$. Fig. 3(a) shows a clear tendency of surface atoms to form bonds parallel and normal to the surface. The heights of the correlation peaks go well beyond those of the density, which can also be seen in the figure (notice that $`\rho _2(𝐫_0;𝐫)\to \rho (𝐫)`$ for $`|𝐫-𝐫_0|\to \mathrm{\infty }`$). This shows that the bond-induced correlations are responsible for the layering of the density, and not the other way around. Fig. 3(b) shows a similar, but attenuated tendency, that disappears in the third layer (Fig. 3(c)), which already has a very symmetric, bulk-like pair correlation function.
Further insight can be obtained from the $`z`$-dependent coordination $`n(z)`$, defined as the average number of neighbors within a distance $`r_m`$. In the bulk, we obtain $`n`$=6.4, in agreement with the experimental value . The distribution of local coordinations (DLC) is also very close to those of other ab-initio calculations , showing a maximum at coordination 6. We can also use the bulk simulation to construct an ideally terminated surface, cutting the system abruptly at, say, $`z`$=0. We then find $`n(z)=4.3`$ at $`z=-1.0`$ Å, which is the distance between the outermost peak and the inflection point in the slab density profile. At the outermost peaks of the actual slab, we obtain $`n(z)`$=5.3, and a DLC peaked at 5. These values show that surface structural rearrangements increase the coordination of the ideally terminated surface, reaching a value of only one neighbor less than in the bulk. If we associate coordination 6, in the bulk liquid, with an octahedral arrangement, a simple picture can be drawn, in which the surface atoms try to preserve their bulk environment while minimizing the number of broken bonds. As a consequence, the octahedra get oriented at the surface so that only one broken ‘bond’ points towards vacuum, with another bond towards the interior and four bonds lying in the surface plane. This picture is consistent with figures 3(a) and 3(d). In the latter, we have restricted the sum over $`i`$ in eq. (2) to particles having coordination 5, which results in even more pronounced peaks in the $`x`$ and $`z`$ directions. Also, we note that the maximum of $`\rho _2(z_0;z,0)`$ occurs at $`|z-z_0|=2.5`$ Å, which explains the same period observed in the density profile. The same distance is found for the in-plane surface bonds (i.e. for $`\rho _2(z_0;z_0,x)`$), and for the bulk bonds. Thus, contrary to other metals, we do not find a shortening of the surface bonds, and the silicon surface layering seems to be related only to the bond orientations. However, it must be emphasized that the bond angle distribution (Fig. 2) is very wide, indicating a large variety of fluctuating atomic environments , so that our ‘oriented octahedra’ should be considered only as a very rough and qualitative picture.
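The $`n(z)`$ statistic itself reduces to a neighbor count per atom; a simplified sketch (assuming an orthorhombic cell periodic in $`x`$ and $`y`$ only, as in the slab geometry; names are ours) is:

```python
import numpy as np

def coordination_numbers(pos, box_xy, r_m=3.10):
    """Number of neighbours within r_m (angstrom) for each atom.
    pos: (N, 3) coordinates; box_xy: (2,) cell lengths along x and y."""
    d = pos[:, None, :] - pos[None, :, :]
    d[..., :2] -= box_xy * np.round(d[..., :2] / box_xy)  # minimum image in x, y
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)                           # exclude self-distance
    return (r < r_m).sum(axis=1)
```

Averaging these counts in bins of the atomic $`z`$ coordinates then yields $`n(z)`$.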
An interesting question is whether the surface structural rearrangements produce a noticeable signature in the electronic structure. In Fig. 4, we compare the local density of states (LDOS) at the outermost peaks of $`\rho (z)`$ and at the center of the slab. Although $`k`$-sampling is important for a converged LDOS , we use here only the $`\mathrm{\Gamma }`$-point eigenvalues, to facilitate the comparison with previous work and because we focus on its spatial variation. It can be seen that, apart from a slight narrowing, due to the diminished surface coordination, there are no major differences, suggesting that the surface and bulk atomic environments are rather similar, and again pointing towards bond orientations as responsible for surface layering.
In conclusion, we have performed the first study of a liquid surface by first-principles MD simulation. In spite of the high melting temperature of Si, we find a marked layering of the density near the surface, similar to those observed in other metals, like Ga and Hg, with low melting temperatures. However, the surface layering of Si seems to have an origin at least partially different from that in other metals, with remanent directional covalent bonding playing an essential role. In spite of the rather slow decay of the layering towards the bulk, the average structural, dynamical, and electronic properties converge very rapidly to their bulk liquid values. Although more converged simulations would be highly desirable in the future, we consider that this work provides a new qualitative understanding of the complex structure of liquid surfaces.
We acknowledge useful discussions with E. Chacón and M. Weissmann. This work was supported by Argentina’s CONICET and by Spain’s DGES grant PB-0202.
|
no-problem/9907/astro-ph9907107.html
|
ar5iv
|
text
|
# 1 Partial lightcurve of the RR Lyr star AD UMa, measured in yellow light on 1996 Apr 29/30 with the Jacobus Kapteyn Telescope at the Roque de los Muchachos Observatory, La Palma TF, Spain by R.F. Peletier, H. van Woerden and D. Sprayberry. Minimum magnitude: 16.17±0.01, maximum: 15.21±0.01, average 15.69±0.01 mag. Assuming [Fe/H] = –1.7 by analogy with32 HD 161817, recent calibrations33,34 of RR Lyr variables based on Hipparcos and other data yield an absolute magnitude 𝑀_V = 0.58±0.18. Taking an extinction35 of 0.1 mag, the distance of AD UMa implied is 10.1±0.9 kpc. The coordinates (𝛼 = 09ʰ23ᵐ38.7ˢ, 𝛿 = +55ᵒ46'33'' (J2000); 𝑙 = 160.40o, 𝑏 = +43.28o) were measured from the Palomar Sky Survey, with reference to the chart in the discovery paper36; they differ considerably from those in the Moscow General Catalogue of Variable Stars.
A confirmed location in the Galactic halo for the high-velocity cloud ’chain A’
Hugo van Woerden<sup>a</sup>, Ulrich J. Schwarz<sup>a</sup>, Reynier F. Peletier<sup>a.b</sup>, Bart P. Wakker<sup>c</sup>, Peter M.W. Kalberla<sup>d</sup>
a Kapteyn Institute, Postbus 800, 9700 AV Groningen, The Netherlands b Dept. of Physics, University of Durham, South Road, Durham DH1 3LE, UK c Department of Astronomy, University of Wisconsin, Madison WI 53706, USA d Radio-astronomisches Institut, Universität Bonn, 53121 Bonn, Germany
The high-velocity clouds of atomic hydrogen, discovered about 35 years ago<sup>1,2</sup>, have velocities inconsistent with simple Galactic rotation models that generally fit the stars and gas in the Milky Way disk. Their origins and role in Galactic evolution remain poorly understood<sup>3</sup>, largely for lack of information on their distances. The high-velocity clouds might result from gas blown from the Milky Way disk into the halo by supernovae<sup>4,5</sup>, in which case they would enrich the Galaxy with heavy elements as they fall back onto the disk. Alternatively, they may consist of metal-poor gas – remnants of the era of galaxy formation<sup>2,6-8</sup>, accreted by the Galaxy and reducing its metal abundance. Or they might be truly extragalactic objects in the Local Group of galaxies<sup>7-9</sup>. Here we report a firm distance bracket for a large high-velocity cloud, Chain A, which places it in the Milky Way halo (2.5 to 7 kiloparsecs above the Galactic plane), rather than at an extragalactic distance, and constrains its gas mass to between 10<sup>5</sup> and 2 $`\times `$ 10<sup>6</sup> solar masses.
Distance estimates of HVCs have long been based on models or indirect arguments<sup>2,10</sup>. The only direct method uses the presence or absence of interstellar absorption lines at the HVC’s velocity in spectra of stars at different distances. Presence of absorption shows the HVC to lie in front of the star; absence places it beyond, provided the expected absorption is well above the detection limit<sup>11</sup>. Blue stars are best, since their spectra contain few confusing stellar lines. The method requires that metal ions are present in HVCs. Indeed, since suitable spectrographs have become available, CaII and other metal-ion absorption lines have been found for many HVCs<sup>3</sup> in the spectra of background quasars or Seyfert galaxies. Using HI column densities measured at high angular resolution, the metal-ion/HI ratios thus derived provide estimates of expected absorption-line strengths towards stars probing the same HVC.
The HVCs MII-MIII, for which an upper limit to the distance, $`d`$ $`<`$ 4 kpc, is known<sup>12,13</sup>, but no lower limit<sup>13</sup>, were thus found to be Galactic, and may even lie in the Disk, as does the tiny object<sup>14,15</sup> HVC 100-7+100, with $`d`$ $`<`$ 1.2 kpc, and distance from the plane $`|z|`$ $`<`$ 0.14 kpc. Lower distance limits are known for Complex C<sup>16,17</sup> ($`d`$ $`>`$ 2.5 kpc, and probably<sup>18</sup> $`d>`$ 5 kpc), Cloud 211 = HVC267+20+215<sup>16</sup> ($`d>`$ 6 kpc), parts of the AntiCenter Complexes<sup>19</sup> ($`d`$ $`>`$ 0.6 kpc), and Complex H<sup>20</sup> ($`d`$ $`>`$ 5 kpc). Chain A is the first HVC for which both a significant upper and a non-zero lower limit to its distance are known, constraining its location relative to the Galaxy’s major components.
HVC-Complex A, also called ”Chain A”, was the first HVC discovered and has been studied in detail<sup>3,21</sup>. It is a 30<sup>o</sup> long filament, containing several well-aligned concentrations with velocities (relative to the local standard of rest, LSR) between – 210 and – 140 km/s. HST spectra<sup>22</sup> of the Seyfert galaxy Mark 106 show strong MgII absorption by this HVC. The fact that such absorption is not detected (it is less than 0.03 times the expected strength) in the star PG0859+593, although the HVC’s HI emission (as measured at 1 arcmin resolution) is similar in both directions, sets a firm lower distance limit, $`d>4\pm 1`$ kpc ($`z>2.5\pm 0.6`$ kpc), for Chain A.
We have now also measured an upper limit, $`d`$ $`<`$ 10 kpc, for the upper end of Chain A, using the RR Lyrae star AD Ursae Maioris, which lies at a distance of $`10.1\pm 0.9`$ kpc (Fig. 1). Detection of interstellar lines in the spectra of RR Lyr stars is generally hampered by the presence of many stellar lines; but these are fewer during maximum phase, when the star is hotter. Figure 2 shows portions of the spectrum of AD UMa around the CaII-H and K lines, observed during maximum, and a 21-cm profile taken in the same direction. The latter has components at velocities $`v`$ (relative to the LSR) of –4, –40 and –158 km/s. The CaII-K line shows the same interstellar components as the 21-cm profile, plus a strong stellar absorption around +70 km/s. The weaker CaII-H line, though blended with a broad stellar H-$`ϵ`$ absorption, shows similar structure. In particular, absorption by the HVC at $`v`$ $``$ –160 km/s is present at both K and H.
Could these HVC absorptions be affected by blending with stellar lines? Figure 3 shows three FeI lines of multiplet number 4, indicating a stellar radial velocity $`v`$ = + 77 $`\pm `$ 2 km/s. Comparison of their strengths and widths with those<sup>23</sup> in the blue field-horizontal-branch star HD 161817 allows calculation (Fig. 4) of the profile of a fourth line (laboratory wavelength $`\lambda _0`$ = 3930.3 $`\mathrm{\AA }`$), predicted to be present at 3931.3 $`\mathrm{\AA }`$. Subtraction of this predicted line from the observed spectrum (Fig. 4) leaves a narrow absorption at $`v`$ = – 158.2 $`\pm `$ 1.2 km/s, in close agreement with the HVC velocity $`v`$ = – 157.6 $`\pm `$ 0.2 km/s, found at 21 cm (Fig. 2). If this absorption were due to the FeI 3930.3 $`\mathrm{\AA }`$ line, it would have $`v`$ = + 96 $`\pm `$ 1 km/s in that frame, which is clearly incompatible with the stellar velocity of + 77 $`\pm `$ 2 km/s found from the other lines in the same multiplet. The velocity difference of 19 $`\pm `$ 3 km/s, the lack of any other suitable identification, and the agreement with the 21-cm velocity, convincingly show that the deep line at 3931.5 $`\mathrm{\AA }`$ in Figures 3 and 4 must be due to CaII-K absorption at –158 km/s by the HVC, while the shortward wing is due to the stellar FeI line.
The agreement in velocity of the high-velocity absorptions at K and H, and the fact that the ratio of line depths is about 2 : 1, as expected, further strengthens the identification of the HVC absorption. Thus, it is certain that Chain A lies in front of AD UMa. The CaII and HI line strengths indicate a Ca<sup>+</sup> abundance of order 0.01 times the total solar Ca abundance, confirming our earlier<sup>11</sup> tentative result. (Note that interstellar calcium is generally strongly depleted by inclusion into dust grains, and Ca<sup>+</sup> is not the dominant ion in the interstellar gas phase.)
The absorption seen in AD UMa sets an upper limit of 10 $`\pm `$ 1 kpc to the distance of Chain A. Combining this with the lower limit<sup>22</sup> of 4 $`\pm `$ 1 kpc, we conclude that the high-latitude end of Chain A lies at 4 $`<`$ $`d`$ $`<`$ 10 kpc, or 2.5 $`<`$ $`z`$ $`<`$ 7 kpc above the Galactic plane. Using the HI flux<sup>21</sup>, we derive an HI mass of 9800 $`d^2`$ M, i.e. between 1.5 and 10 $`\times `$ 10<sup>5</sup> M. With the standard helium abundance, the (HI \+ He) gas mass would be a factor 1.4 higher. The nondetection<sup>24</sup> of CO emission from bright HI cores in Chain A implies that the H<sub>2</sub> mass must be one or more orders of magnitude less. Recent H$`\alpha `$ observations<sup>25</sup> suggest that Chain A is mostly neutral. Assuming the ionized contribution to be minor, the total gas mass in Chain A lies between 2 $`\times `$ 10<sup>5</sup> and 2 $`\times `$ 10<sup>6</sup> M. The kinetic energy of the complex then is of order (0.3 - 3) $`\times `$ 10<sup>53</sup> erg, if we assume<sup>21</sup> a peculiar velocity, $`v_{\mathrm{dev}}`$, of – 130 km/s.
Our distance bracket places Chain A definitely in the Galactic Halo, rather than in intergalactic space. It excludes models for its nature and origin requiring a distance of order 1 kpc or less, such as relationships to local molecular clouds<sup>26</sup>, or collision of an intergalactic cloud with the Galactic Disk<sup>27</sup>. It also rules out that Chain A would be a Galactic satellite at about 50 kpc distance<sup>9</sup>, or a protogalactic gas cloud at $``$ 500 kpc distance<sup>28</sup>, or ”a member of the Local Group of galaxies”, as proposed recently<sup>7</sup> for HVCs in general. Other HVCs may well be at such great distances and fit the latter model; and some, e.g. the tiny, nearby cloud HVC100-7+100 (see above), may have a local origin<sup>15</sup>.
The location of Chain A in the Halo still allows several models for its origin. For its height $`2.5<z<7`$ kpc to be consistent with a Galactic-Fountain model<sup>4,5</sup>, a sufficiently hot halo ($`T>5\times 10^5`$ K) would be required. The small-scale structure observed<sup>29,30</sup> in Chain A would then be due to instabilities formed in the downward flow of cooling clouds. Alternatively, Chain A may represent gas captured from intergalactic space<sup>2,6,8</sup>. In that case, collision with an ionized halo extending to high $`z`$ may have served to decelerate the gas to its present velocity<sup>2,6,31</sup>, and to form the small-scale structure, which has typical time-scales<sup>3,29</sup> of order 10<sup>7</sup> years, and therefore probably formed within a few kpc of its present location. In this accretion model, the question whether the origin of Chain A lies in the Magellanic System (as debris from encounters between Milky Way and Magellanic Clouds), or far away in the Local Group (as ”remnant of Local Group formation”<sup>7,8</sup>), remains open: location in the Galactic Halo does not preclude such a distant origin.
A clue to the origin of Chain A might be found in its metallicity. In a Galactic Fountain, near-solar metallicities would be expected; accretion might bring in HVCs with low metallicities. For Chain A, current information is limited to the observed column-density ratios $`N`$(Mg<sup>+</sup>)/$`N`$(HI) $`>`$ 0.035 solar<sup>22</sup> and $`N`$(Ca<sup>+</sup>)/$`N`$(HI) $``$ 0.01 solar (see above); in view of possible depletion by inclusion into dust grains and uncertain ionization conditions, these ratios only give lower limits to total Mg and Ca abundances. The best chance for a more significant value lies in measurement of the ultraviolet SII lines in Mark 106, since sulphur is not depleted onto grains, and S<sup>+</sup> is the dominant ionization stage in neutral gas. Reliable metallicity values and further direct distance measurements of HVCs will hold the key to their understanding. Since the HVC phenomenon is only loosely defined, and may well include objects of very different origins, distance and metallicity measurements of various HVCs will be required.
1. Muller C.A., Oort J.H., Raimond E., Comptes-rendus Acad. Sci. Paris 257, 1661-1664 (1963).
2. Oort J.H., Possible interpretations of the high-velocity clouds, Bull. Astron. Inst. Netherlands 18, 421-438 (1966).
3. Wakker B.P., van Woerden H., High-velocity clouds, Annual Rev. Astron. Astrophys. 35, 217-266 (1997).
4. Bregman J.N., The Galactic Fountain of high-velocity clouds, Astrophys. J. 236, 577-591 (1980).
5. Houck J.C., Bregman J.N., Low-temperature galactic fountains, Astrophys. J. 352, 506-521 (1990).
6. Oort J.H., The formation of galaxies and the origin of the high-velocity hydrogen clouds, Astron. Astrophys. 7, 381-404 (1970).
7. Blitz L., Spergel D.N., Teuben P.J., Hartmann L., Burton W.B., High-velocity clouds: Remnants of Local Group formation, Bull. Amer. Astron. Soc. 28, 1349 (1996).
8. Blitz L., Spergel D.N., Teuben P.J., Hartmann L., Burton W.B., High-velocity clouds: Building blocks of the Local Group, Astrophys. J. 514, 818-843 (1999).
9. Kerr F.J., Sullivan W.T., The high-velocity hydrogen clouds considered as satellites of the Galaxy, Astrophys. J. 158, 115-122 (1969).
10. Verschuur G.L., High-velocity neutral hydrogen, Annual Rev. Astron. Astrophys. 13, 257-293 (1975).
11. Schwarz U.J., Wakker B.P., van Woerden H., Distance and metallicity limits of high-velocity clouds, Astron. Astrophys. 302, 364-381 (1995).
12. Danly L., Albert C.E., Kuntz K.D., A determination of the distance to the high-velocity cloud Complex M, Astrophys. J. 416, L29-31 (1993).
13. Ryans R.S.I., Keenan F.P., Sembach K.R., Davies R.D., The distance to Complex M and the Intermediate Velocity Arch, Mon. Not. R. Astron. Soc. 289, 83-96 (1997).
14. Bates B., Catney M.G., Keenan F.P., High-velocity gas components towards 4 Lac, Mon. Not. R. Astron. Soc. 242, 267-270 (1990).
15. Stoppelenburg P.S., Schwarz U.J., van Woerden H., Westerbork HI observations of two high-velocity clouds, Astron. Astrophys. 338, 200-208 (1998).
16. Danly L., Lockman F.J., Meade M.R., Savage B.D., Ultraviolet and radio observations of Milky Way halo gas, Astrophys. J. Suppl. Ser. 81, 125-161 (1992).
17. de Boer K.S. et al. , The distance to the Complex C of high-velocity halo clouds, Astron. Astrophys. 286, 925-934 (1994).
18. Van Woerden H., Peletier R.F., Schwarz U.J., Wakker B.P., Kalberla P.M.W., Distances and metallicities of high-velocity clouds, in Stromlo Workshop on High-Velocity Clouds (eds. Gibson B.K., Putman M.E.), ASP Conf. Ser. 166, 1-25 (1999).
19. Tamanaha C.M., Distance constraints to the AntiCenter high-velocity clouds, Astrophys. J. Suppl. Ser. 104, 81-100 (1996).
20. Wakker B.P., van Woerden H., de Boer K.S., Kalberla P.M.W., A lower limit to the distance of HVC complex H, Astrophys. J. 493, 762-774 (1998).
21. Wakker B.P., van Woerden H., Distribution and origin of high-velocity clouds. III. Clouds, complexes and populations, Astron. Astrophys. 250, 509-532 (1991).
22. Wakker B.P., et al. , The distance to two hydrogen clouds: the high-velocity complex A and the low-latitude Intermediate-Velocity Arch, Astrophys. J. 473, 834-848 (1996).
23. Adelman S.J., Fisher W.A., Hill G., An atlas of the field horizontal branch stars HD 64488, HD 109995 and HD 161817 in the photographic region, Publ. Dominion Astrophys. Obs. Victoria 16, 203-280 (1987).
24. Wakker B.P., Murphy E.M., van Woerden H., Dame T.M., A sensitive search for molecular gas in high-velocity clouds, Astrophys. J. 488, 216-223 (1997).
25. Tufte S.L., Reynolds R.J., Haffner L.M., WHAM observations of H$`\alpha `$ emission from high-velocity clouds in the M, A, and C complexes, Astrophys. J. 504, 773-784 (1998).
26. Verschuur G.L., An association between HI concentrations within high-velocity clouds A and C and nearby molecular clouds, Astrophys. J. 361. 497-510 (1990).
27. Meyerdierks H., A cloud-Galaxy collision: observation and theory, Astron. Astrophys. 251, 269-275 (1991).
28. Verschuur G.L., The high-velocity cloud complexes as extragalactic objects in the Local Group, Astrophys. J. 156, 771-777 (1969).
29. Oort J.H., Speculations on the origin of the Chain A of high-velocity clouds, in ”Problems of Physics and the Evolution of the Universe” (ed. Mirzoyan L.), 259-280. (Yerevan: Academy of Sciences of Armenian SSR, 1978).
30. Wakker B.P., Schwarz U.J., Westerbork observations of high-velocity clouds. Discussion, Astron. Astrophys. 250, 484-498 (1991).
31. Benjamin R.A., Danly L., High-velocity rain: The terminal velocity model of Galactic infall, Astrophys. J. 481, 764-774 (1997).
32. Adelman S.J., Elemental abundance analyses with coadded DAO spectrograms - IV. Revision of previous analyses, Mon. Not. R. Astron. Soc. 235, 749-762 (1988).
33. Chaboyer B., Demarque P., Kernan P.J., Krauss L.M., The age of globular clusters in light of Hipparcos: resolving the age problem?, Astrophys. J. 494, 96-110 (1998).
34. Fernley J., et al. , The absolute magnitudes of RR Lyraes from HIPPARCOS parallaxes and proper motions, Astron. Astrophys. 330, 515-520 (1998).
35. Lucke P.B., The distribution of color excesses and interstellar reddening material in the solar neighborhood, Astron. Astrophys. 64, 367-372 (1978).
36. Hoffmeister C., Neuer RR Lyrae-Stern S5218 Ursae Majoris, Astron. Nachrichten 284, 165-166 (1958).
Acknowledgments. The William Herschel Telescope (WHT) is operated by the Royal Greenwich Observatory, in the Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias, with financial support from PPARC (UK) and NWO (NL). We thank the NFRA La Palma Programme Committee, and in particular Huib Henrichs, for their support of our program. The Effelsberg Telescope belongs to the Max Planck Institute for Radio Astronomy in Bonn. Wakker was partly supported by NASA through STScI, which is operated by AURA, Inc. Wakker also thanks Blair Savage for financial support and useful discussions. We finally thank the referee who drew our attention to the spectrum of HD 161817.
Address for correspondence: [email protected]
|
no-problem/9907/hep-ph9907479.html
|
ar5iv
|
text
|
# LAB observables for the muon polarization in 𝐾⁺→𝜋⁺𝜇⁺𝜇⁻
## I Introduction
As has been pointed out some years ago by Savage and Wise , the muon polarization in the $`K^+\pi ^+\mu ^+\mu ^{}`$ decay can provide important information on the structure of weak interactions and the flavour mixing. The process is dominated by a parity–conserving contribution, arising from the exchange of one photon. Nowadays the theoretical analysis of the $`K\pi \gamma ^{}`$ form factor is being revisited —including unitarity corrections from $`K\pi \pi \pi `$ and chiral perturbation expansion up to $`𝒪(p^6)`$ — in view of the recent measurement of the ratio $`R=\mathrm{\Gamma }(K^+\pi ^+\mu ^+\mu ^{})/\mathrm{\Gamma }(K^+\pi ^+e^+e^{})`$, which appears to be lower than the prediction obtained at leading order in the chiral expansion .
Parity violating observables, such as the asymmetry in the polarization of the outgoing $`\mu ^+`$ and $`\mu ^{}`$, are sensitive to short-distance dynamics. In the Standard Model (SM), the effect arises from the interference between the one–photon amplitude and one–loop $`Z`$-penguin and $`W`$-box Feynman diagrams . It has been shown that the muon polarizations can be predicted in terms of the well–known $`K_{l3}`$ semileptonic decay form factors, and a parameter $`\xi `$ that carries “clean” information (that means, relatively free from nonperturbative effects) on the quark masses and mixing angles. The explicit expression of $`\xi `$ in terms of the quark mixing parameters has been calculated by Buchalla and Buras up to next–to–leading order in QCD, where the dependence on the renormalization scale is shown to be significantly reduced.
The muon polarization asymmetry receives also potentially significant contributions from the interference of the one–photon amplitude with parity–violating Feynman diagrams in which the muon pair is produced by two–photon exchange. Though these contributions are difficult to evaluate —they arise from nonperturbative QCD—, a detailed analysis performed in Ref. seems to indicate that they are smaller than the short–distance contributions mentioned above. Here we will take this as an assumption, focusing our attention on the effects on the muon polarization arising from the short-distance part.
The theoretical analyses usually concentrate on the case of longitudinal muon polarizations. This is convenient for an obvious reason, which is the fact that the polarization direction is defined in each case by the muon momentum, and no external axes have to be introduced. However, the price one has to pay is that the so–defined longitudinal polarization asymmetry $`\mathrm{\Delta }_{long}`$ is not a Lorentz–invariant magnitude, hence its value depends on the chosen reference frame. In the literature, it is usual to define $`\mathrm{\Delta }_{long}`$ in the rest frame of the $`\mu ^+\mu ^{}`$ pair, and to present the theoretical results in terms of the muon pair invariant mass, $`q^2`$, and $`\theta `$, the angle between the three-momenta of the kaon and the $`\mu ^{}`$ in this reference frame. Then, to compare the measurements in the LAB system with the theoretical predictions, it is necessary not only to measure the muon polarization, but also to reconstruct the full kinematics of each event in order to perform the corresponding boost to the $`\mu ^+\mu ^{}`$ rest frame. In addition, cuts on the variable $`\theta `$, which can improve the sensitivity to the short–distance parameter $`\xi `$ mentioned above , do not translate, in general, into cuts on the pion and muon directions in the LAB. All these facts produce additional sources of uncertainties in the analysis. The aim of this work is to point out these difficulties, and propose the longitudinal polarization asymmetry defined in the LAB system, $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$, as the best observable to be contrasted with experiment. We analyse here the kinematics for the process in the LAB frame, and calculate the expected sensitivity of $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$ to the parameter $`\xi `$, for both stopped and in–flight decaying kaons. For a fixed energy of the $`K^+`$, we show that the sensitivity of the observable can be improved by a convenient cut on the LAB muon energy. In addition, we perform a similar analysis for the decay $`K_L\mu ^+\mu ^{}`$. The study of the muon polarization is also important in this case, since it can provide a new signal of CP violation . From our analysis, it arises that the asymmetry $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$ is partially diluted when the decaying kaons are in flight.
The paper is organized as follows: in Section II we study the sensitivity of $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$ to the SM parameters for the process $`K^+\pi ^+\mu ^+\mu ^{}`$, and calculate the dependence of the observable with the $`K^+`$ energy. Then, in Section III, we perform a similar analysis for the process $`K_L\mu ^+\mu ^{}`$, in which the kinematics is simpler. In section IV we present our conclusions. Details on the LAB frame kinematics and phase space integrations are given in the Appendix.
## II Muon polarization and kinematics for $`K^+\pi ^+\mu ^+\mu ^{}`$
As stated, the decay rate for $`K^+\pi ^+\mu ^+\mu ^{}`$ is dominated by the one-photon exchange contribution, which is parity-conserving. The corresponding amplitude can be parametrized as
$$^{(PC)}=\frac{\alpha G_F\mathrm{sin}\theta _C}{\sqrt{2}}f(q^2)(p_K+p_\pi )^\mu \overline{u}(p_{},s_{})\gamma _\mu v(p_+,s_+),$$
(1)
where $`p_K`$, $`p_\pi `$ and $`p_\pm `$ are the four–momenta of the kaon, pion and $`\mu ^\pm `$ respectively, and $`q^2=(p_++p_{})^2`$ stands for the squared $`\mu ^+\mu ^{}`$ invariant mass. We consider the general case of polarized muons, being $`s_\pm `$ the corresponding polarization vectors.
In the Standard Model, in addition to the dominant term (1), the decay amplitude contains a parity–violating piece. This can be written in general as
$$^{(PV)}=\frac{\alpha G_F\mathrm{sin}\theta _C}{\sqrt{2}}\left[B(p_K+p_\pi )^\mu +C(p_Kp_\pi )^\mu \right]\overline{u}(p_{},s_{})\gamma _\mu \gamma _5v(p_+,s_+),$$
(2)
where the parameters $`B`$ and $`C`$ get contributions from both short– and long–distance physics. The short–distance contributions arise mainly from $`Z`$-penguin and $`W`$-box Feynman diagrams and carry clean information on the flavour structure of the SM. Hence, the experimental determination of $`B`$ and $`C`$ would be very interesting from the theoretical point of view, provided that the long–distance effects are under control. Since the total decay amplitude is dominated by the parity–conserving piece, to get this information one is lead to search for a parity–violating observable. The muon polarizations are immediate candidates in this sense.
It can be seen that, from the experimental point of view, the measurement of the $`\mu ^+`$ polarization is strongly favoured in comparison with that of the $`\mu ^{}`$. The reason is that the $`\mu ^{}`$ give rise to the formation of muonic atoms when they are stopped in materials, and this makes it difficult to measure the polarization . We will concentrate then in the polarization of the outgoing $`\mu ^+`$, summing over the final $`\mu ^{}`$ states. In an arbitrary reference frame, the decay rate for polarized $`\mu ^+`$ is given by
$$\mathrm{\Gamma }(s_+)=\frac{1}{2E_K}𝑑\mathrm{\Phi }\underset{s_{}}{}|(s_+)|^2,$$
(3)
where $`d\mathrm{\Phi }`$ is the Lorentz–invariant differential phase space,
$$d\mathrm{\Phi }=(2\pi )^4\delta ^{(4)}(p_Kp_\pi p_+p_{})\underset{a=\pi ,+,}{}\frac{d^3p_a}{(2\pi )^32E_a}.$$
(4)
The polarization asymmetry of the outgoing $`\mu ^+`$, in the direction given by $`s_+`$, is defined now as
$$\mathrm{\Delta }\frac{\mathrm{\Gamma }(s_+)\mathrm{\Gamma }(s_+)}{\mathrm{\Gamma }(s_+)+\mathrm{\Gamma }(s_+)}.$$
(5)
As long as $`s_+`$ transforms as a four–vector, this quantity is clearly Lorentz–invariant. However, one has to take some care when referring to the longitudinal or transverse muon polarizations. A polarization vector that is parallel to the muon momentum in a particular frame, acquires in general a transverse component when one moves to a boosted system. Hence, in principle, the value of the longitudinal polarization asymmetry $`\mathrm{\Delta }_{long}`$ obtained in an arbitrary reference frame may change, or even vanish, when the longitudinal polarization is measured in the LAB frame. The same can be applied to transverse polarizations, which also require the introduction of an additional reference plane (e.g. the plane of the decay, in the rest frame of the $`K^+`$).
It is usual (see for instance Refs. ) to define $`\mathrm{\Delta }_{long}`$ in the rest frame of the $`\mu ^+\mu ^{}`$ pair. As commented in the introduction, this represents a problem from the experimental point of view, since the theoretical prediction obtained for the asymmetry cannot be contrasted by measuring only the longitudinal muon polarizations in the LAB frame. One should instead fully reconstruct each observed event (not just look at the final $`\mu ^+`$), in order to boost all decay products to the particular frame proposed, and then perform the comparison with the theoretical value. Or, alternatively, one can boost the polarization vector $`s_+`$, defined to be parallel to the $`\mu ^+`$ momentum in the rest frame of the $`\mu ^+\mu ^{}`$ pair, to the LAB frame. Then, to compare with the theoretical prediction, one would have to measure the LAB $`\mu ^+`$ polarization along a different axis for each individual event, this axis being determined by the boost. Once again the analysis turns out to be quite involved.
Our proposal is simple: it just consists in considering an observable equivalent to (5), but defined directly in the LAB system. The advantage is that, once the process has been identified, the theoretical prediction can be contrasted just by analysing the final $`\mu ^+`$ polarization, without taking care about the energy and angular distribution of the remaining $`\mu ^{}`$ and $`\pi ^+`$. In particular, we can take $`s_+`$ to be longitudinal in the LAB system, and calculate the value for $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$. As we show below, the result is in general different from that obtained in the $`\mu ^+\mu ^{}`$ rest frame, and depends on the energy of the decaying $`K^+`$.
The detailed calculation of the decay rate for polarized $`\mu ^+`$ in the LAB reference frame is presented in the Appendix. We end up with the following expression:
$$\mathrm{\Gamma }(s_+)=\frac{(\alpha G_F\mathrm{sin}\theta _C)^2}{16\pi ^2E_K}_{E_{min}}^{E_{max}}𝑑E_+_{h(E_+)}^1d(\mathrm{cos}\theta )|\stackrel{}{p}_+|\left[g_0(z)+(s_+p_K)g_1(z)\right],$$
(6)
where $`z=(p_Kp_+)=E_KE_+|\stackrel{}{p}_K||\stackrel{}{p}_+|\mathrm{cos}\theta `$, and the integration limits $`E_{min}`$, $`E_{max}`$ and $`h(E_+)`$ are functions of the Lorentz factor $`\gamma `$ characterising the boost from the $`K^+`$ rest frame to the LAB system ($`\gamma =E_K/m_K`$). The functions $`g_{0,1}(z)`$, given in the Appendix, carry the information on the form factors $`f`$, $`B`$ and $`C`$ introduced in Eqs. (1) and (2).
We concentrate on the longitudinal $`\mu ^+`$ polarization, $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$, which can be trivially obtained from (5) and (6) by taking
$$s_+^\alpha =\frac{1}{m_\mu }(|\stackrel{}{p}_+|,\frac{E_+}{|\stackrel{}{p}_+|}\stackrel{}{p}_+),$$
(7)
with $`E_+`$, $`\stackrel{}{p}_+`$ in the LAB frame. Now, in order to determine the sensitivity of this observable to the parameters of interest, we need some theoretical input for the form factors $`f`$, $`B`$ and $`C`$. In the case of $`f(q^2)`$, which corresponds to the effective vertex $`K\pi \gamma ^{}`$, one can use the experimental information from the decay $`K^+\pi ^+e^+e^{}`$. It is seen that the absolute value of this form factor can be approximated by
$$|f(q^2)|=|f(0)|\left(1+\lambda \frac{q^2}{m_\pi ^2}\right),$$
(8)
with $`|f(0)|=0.294`$ and $`\lambda =0.105`$. On the other hand, from existing analyses within Chiral Perturbation Theory , one expects the imaginary part of $`f(q^2)`$ to be negligibly small compared with the real part.
In the case of the parity–violating amplitude, the situation is more complicated due to the interference between short– and long–distance contributions. As stated above, the long–distance effects arise from nonperturbative QCD and are very difficult to estimate . To get definite numerical results, we will concentrate here only on the effect produced by the short–distance part (in fact, the estimates in Ref. indicate that this should be the dominant one), in which the $`q^2`$ dependence of $`B`$ and $`C`$ can be obtained from semileptonic kaon decays. We have thus
$$B=f_+(q^2)\xi ,C=\frac{1}{2}f_{}(q^2)\xi ,$$
(9)
where $`f_+(q^2)`$ and $`f_{}(q^2)`$ are the well–known form factors for $`K_{l3}`$ decays. We will use here a standard parametrization , taking
$$f_\pm (q^2)=f_\pm (0)\left(1+\lambda _\pm \frac{q^2}{m_\pi ^2}\right),$$
(10)
with $`f_+(0)=0.99`$, $`\lambda _+=0.03`$, $`f_{}(0)=0.33`$ and $`\lambda _{}=0`$. The novel information is contained in $`\xi `$, which can be calculated in the SM in terms of the quark masses and mixing angles. One has
$$\xi \stackrel{~}{\xi }_c+\left[\frac{V_{ts}^{}V_{td}}{V_{us}^{}V_{ud}}\right]\stackrel{~}{\xi }_t,$$
(11)
where $`V`$ stands for the Cabibbo–Kobayashi–Maskawa matrix, and $`\stackrel{~}{\xi }_c`$ and $`\stackrel{~}{\xi }_t`$ arise from the contributions of $`Z`$-penguins and $`W`$-boxes. QCD corrections introduce some dependence on the renormalization scale, though this can be reduced with the inclusion of next–to–leading order contributions .
Here we just keep $`\xi `$ as a parameter, and refer the reader to Ref. for the detailed analysis of its explicit dependence on the quark masses and mixing angles. Since both $`B`$ and $`C`$ are linear in $`\xi `$, the muon longitudinal polarization asymmetry can be written as
$$\mathrm{\Delta }_{long}=\pm \mathrm{Re}\xi R,$$
(12)
where the $`\pm `$ signs correspond to $`|f(0)|=\pm f(0)`$ respectively. To see the sensitivity of $`\mathrm{\Delta }_{long}`$ to $`\xi `$, we concentrate on the value of the “kinematic” factor $`R`$, which can be computed numerically using the above inputs for the form factors.
We recall that $`\mathrm{\Delta }_{long}`$, and thus $`R`$, depend in general on the reference frame in which the polarization vectors are defined to be parallel to the $`\mu ^+`$ momenta. In the LAB system, $`R`$ can be calculated by means of Eqs. (6) and (7) in terms of the energy of the decaying $`K^+`$. The resulting curve is shown in Fig. 1. It can be seen that the asymmetry is maximized when the kaons are at rest, with $`R2.9`$, while the effect turns out to be diluted for in–flight $`K^+`$. For a dilation factor $`\gamma \mathrm{}`$ we end up with $`R1.6`$.
In the case of high–energy kaons the sensitivity can be improved by performing a convenient cut in the $`\mu ^+`$ energy. We have analysed the situation for a dilation factor $`\gamma =12`$, this means, a kaon energy of about 6 GeV. This is the energy of the $`K^+`$ beam in the experiment E865 at the BNL AGS, used to study $`K^+`$ decays involving three charged final particles, and suggested as one of the best candidates to perform the measurement of the $`\mu ^+`$ polarization in $`K^+\pi ^+\mu ^+\mu ^{}`$ . The dependence of the asymmetry with the chosen range of the outgoing $`\mu ^+`$ energy for $`\gamma =12`$ can be seen from Fig. 2, where we plot the differential rates
$`{\displaystyle \frac{d\mathrm{\Gamma }(s_+)}{dE_+}}+{\displaystyle \frac{d\mathrm{\Gamma }(s_+)}{dE_+}},{\displaystyle \frac{1}{\mathrm{Re}\xi }}\left({\displaystyle \frac{d\mathrm{\Gamma }(s_+)}{dE_+}}{\displaystyle \frac{d\mathrm{\Gamma }(s_+)}{dE_+}}\right)`$
—the latter, up to a global sign— in terms of the $`\mu ^+`$ energy $`E_+`$. As it is shown in the figure, there is a change of sign in the $`\mu ^+`$ polarization for $`E_+1`$ GeV. This leads to a reduction in the value of $`|R|`$ when integrating over the whole range of $`\mu ^+`$ energies. By taking a lower cut at $`E_+=1`$ GeV, the asymmetry increases from $`R1.6`$ to $`2.1`$, while the number of events gets reduced only by a factor 0.82.
Finally we notice that, by working in the $`\mu ^+\mu ^{}`$ rest frame, one obtains $`|R|=2.3`$ . Thus the best sensitivity for $`\mathrm{\Delta }_{long}`$, with no phase–space cuts, would be obtained from the decay of stopped kaons.
## III Muon polarization and kinematics for $`K_L\mu ^+\mu ^{}`$
The above discussion about the fermion polarizations and the dependence on the reference frame can be also applied to the decay $`K_L\mu ^+\mu ^{}`$. In this case, the longitudinal polarizations of the outgoing muons have also a considerable theoretical interest, since the measurement of nonzero polarizations would represent a new signal of CP violation . It is clear that, being $`K_L\mu ^+\mu ^{}`$ a two–body decay, the kinematics is now much simpler than in the $`K^+\pi ^+\mu ^+\mu ^{}`$ case.
Using a similar notation as in the previous section, the decay amplitude for $`K_L\mu ^+\mu ^{}`$ can be written in terms of two parameters $`A`$ and $`B`$,
$$=\overline{u}(p_{},s_{})(iB+A\gamma _5)v(p_+,s_+).$$
(13)
To study this process, it is usual to work in the kaon rest frame, where the analysis is simpler. One can define the longitudinal $`\mu ^+`$ polarization asymmetry $`\mathrm{\Delta }_{long}^{(\mathrm{rest})}`$ by using an expression similar to (5), and taking the polarization vectors $`\stackrel{}{s}_+`$ to be parallel to the $`\mu ^+`$ three–momenta in the kaon rest frame. As we have discussed below, $`\mathrm{\Delta }_{long}^{(\mathrm{rest})}`$ will in general be different from $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$, defined by taking $`\stackrel{}{s}_+`$ parallel to $`\stackrel{}{p}_+`$ in the LAB system, if the decaying $`K_L`$ are in flight.
Let us analyse the dependence of $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$ with the energy of the decaying $`K_L`$, $`E_K=\gamma m_K`$. As before, we sum over the final $`\mu ^{}`$ polarizations, obtaining
$$\underset{s_{}}{}||^2=m_K^2\left(|A|^2+\beta _0^2|B|^2\right)+4m_\mu \mathrm{Im}(BA^{})(s_+p_K),$$
(14)
where $`\beta _0=(14m_\mu ^2/m_K^2)^{1/2}`$, and the factor Im$`(BA^{})`$ carries the CP violation effects. For this process the integration over the phase space is straightforward, and we can work directly in the LAB system. The decay rate for polarized $`\mu ^+`$ is found to be
$$\mathrm{\Gamma }(s_+)=\frac{m_K^2}{16\pi E_K|\stackrel{}{p}_K|}_{E_{min}}^{E_{max}}𝑑E_+\left[|A|^2+\beta _0^2|B|^2+\frac{4m_\mu }{m_K^2}(s_+p_K)\mathrm{Im}(BA^{})\right],$$
(15)
where the limits of integration are
$$E_{min}=\frac{E_K\beta _0|\stackrel{}{p}_K|}{2},E_{max}=\frac{E_K+\beta _0|\stackrel{}{p}_K|}{2}.$$
(16)
In the case of longitudinal polarization vectors, the scalar product in (15) is given by
$$(s_+p_K)=\frac{1}{m_\mu |\stackrel{}{p}_+|}\left(\frac{E_+m_K^2}{2}E_Km_\mu ^2\right),$$
(17)
and we obtain for the total $`\mu ^+`$ polarization
$$\mathrm{\Delta }_{long}^{(\mathrm{LAB})}=\frac{\mathrm{Im}(BA^{})}{|A|^2+\beta _0^2|B|^2}\left[\frac{2\beta ^{}}{\beta \beta _0}\frac{(1\beta _0^2)}{\beta \beta _0}\mathrm{log}\left(\frac{1+\beta ^{}}{1\beta ^{}}\right)\right],$$
(18)
where
$$\beta =\sqrt{1\gamma ^2},\beta ^{}=\{\begin{array}{ccc}\beta & ,\hfill & \gamma <\frac{m_K}{2m_\mu }\\ \text{}\beta _0& ,\hfill & \gamma \frac{m_K}{2m_\mu }\end{array}.$$
(19)
Notice that the dependence of the observable in Eq. (18) with the $`K_L`$ energy is contained into the factor in square brackets, hence it does not depend on the dynamics. As in the case of $`K^+\pi ^+\mu ^+\mu ^{}`$, it is seen that the asymmetry is reduced when the kaons are more energetic, though the effect is rather small. In the limit $`\gamma \mathrm{}`$, the longitudinal polarization is reduced by a factor
$$r\frac{\mathrm{\Delta }_{long}^{(\gamma \mathrm{})}}{\mathrm{\Delta }_{long}^{(\mathrm{rest})}}=\frac{1}{\beta _0}\frac{(1\beta _0^2)}{2\beta _0^2}\mathrm{log}\left(\frac{1+\beta _0}{1\beta _0}\right)0.77.$$
(20)
Still this ratio can be slightly increased by taking into account the energy distribution of the muons in the LAB system. Since the CP–conserving terms in Eq. (14) are independent of the kinematic variables, the dependence of the $`\mu ^+`$ polarization with $`E_+`$ is basically given by the scalar product (17). For $`E_K>m_K^2/(2m_\mu )`$, it is seen that the polarization changes sign at $`E_+=E_02E_Km_\mu ^2/m_K^2`$, thus the sensitivity can be improved by making a lower cut on $`E_+`$. Taking e.g. $`E_+2E_0`$, one gets $`r=0.89`$, while the number of events is reduced by about 15%.
## IV Conclusions
We have analysed the decays $`K^+\pi ^+\mu ^+\mu ^{}`$ and $`K_L\mu ^+\mu ^{}`$, focusing our attention on the longitudinal polarization of the outgoing $`\mu ^+`$. For these processes, the asymmetry in the production of muons with opposite helicities has a significant theoretical interest in connection with the flavour mixing and the structure of the Standard Model.
The longitudinal polarization asymmetry $`\mathrm{\Delta }_{long}`$ depends in general on the chosen reference frame, since the helicity of a massive particle can change after a Lorentz transformation. Here we have considered $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$, that means, the longitudinal polarization asymmetry defined in the laboratory system. For the decay $`K^+\pi ^+\mu ^+\mu ^{}`$, the advantage of choosing this frame is that the theoretical predictions can be contrasted with experiment just by measuring the polarization of the outgoing $`\mu ^+`$, summing over all energies and angular distributions of the remaining $`\mu ^{}`$ and $`\pi ^+`$.
For both processes, we analyse the dependence of $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$ with the energy of the decaying kaons, showing that the asymmetry is partially diluted when the kaons are in flight. In the case of the decay $`K^+\pi ^+\mu ^+\mu ^{}`$, this is illustrated by the curve in Fig. 1 (we have neglected here long–distance contributions arising from two–photon exchange). We have considered in particular the case of in–flight kaons with energy of 6 GeV. For this energy, it is shown that there is a change of sign in the $`\mu ^+`$ polarization for $`\mu ^+`$ energies of about 1 GeV, thus a lower energy cut at this point allows to improve the asymmetry. On the other hand, in the case of the decay $`K_L\mu ^+\mu ^{}`$ it is shown that the dilution is purely kinematic, i.e. it does not depend at all on the dynamics of the process. In the limit of large $`K_L`$ energies, the asymmetry $`\mathrm{\Delta }_{long}^{(\mathrm{LAB})}`$ is found to be reduced by about 23% with respect to the value obtained when the decaying kaons are at rest.
###### Acknowledgements.
We thank J. Bernabéu for useful discussions and A. Pich and J. Portolés for the critical reading of the manuscript. D. G. D. has been supported by a grant from the Commission of the European Communities, under the TMR programme (Contract N ERBFMBICT961548). This work has been funded by CICYT (Spain) under the Grant AEN-96-1718 and by DGEUI (Generalitat Valenciana, Spain) under the Grant GV98-01-80.
##
We calculate here the decay rate $`\mathrm{\Gamma }(s_+)`$ for the process $`K^+\pi ^+\mu ^+\mu ^{}`$ in the LAB reference frame. From Eqs. (1), (2) and (3), we have
$$\mathrm{\Gamma }(s_+)=\frac{(\alpha G_F\mathrm{sin}\theta _C)^2}{2E_K}𝑑\mathrm{\Phi }\left[F_0+(s_+T)\right],$$
(21)
where
$`F_0`$ $`=`$ $`|f(q^2)|^2[2(2zq^2)(m_K^22z)4zm_\pi ^2],`$ (22)
$`T^\mu `$ $`=`$ $`\text{Re}(f(q^2)B^{})\left[(2m_K^22m_\pi ^2+q^24z)p_K^\mu +2(zm_K^2)p_{}^\mu \right]`$ (24)
$`+\text{Re}(f(q^2)C^{})(q^2p_K^\mu 2zp_{}^\mu ),`$
with $`z(p_Kp_+)`$.
Let us first perform the integration over the $`\mu ^{}`$ and $`\pi ^+`$ phase space variables. To do this, we write the differential phase space as
$`d\mathrm{\Phi }={\displaystyle \frac{d^3p_+}{(2\pi )^32E_+}}d\mathrm{\Phi }^{},`$
with
$`d\mathrm{\Phi }^{}=(2\pi )^4\delta ^{(4)}(p_Kp_\pi p_+p_{}){\displaystyle \frac{d^3p_{}}{(2\pi )^32E_{}}}{\displaystyle \frac{d^3p_\pi }{(2\pi )^32E_\pi }}.`$
Notice that $`F_0`$ is a function of the invariants $`q^2`$ and $`z`$. Then the integral over the $`\mu ^{}`$ and $`\pi ^+`$ momenta must be a function of $`z`$ only,
$$g_0(z)=F_0(z,q^2)𝑑\mathrm{\Phi }^{}.$$
(25)
In the same way, for the second term in the integrand of (21) we can write
$$T^\mu 𝑑\mathrm{\Phi }^{}=g_1(z)p_K^\mu +g_2(z)p_+^\mu .$$
(26)
Since $`s_+`$ is by definition orthogonal to $`p_+`$, the function $`g_2(z)`$ does not contribute to the decay rate (21) and we only need to compute $`g_1(z)`$. The latter can be written as
$$g_1(z)=F_1(z,q^2)𝑑\mathrm{\Phi }^{},$$
(27)
where the integrand is given by
$$F_1(z,q^2)=\frac{\left[z(p_+T)m_\mu ^2(p_KT)\right]}{z^2m_\mu ^2m_K^2}.$$
(28)
To perform the integrals in (25) and (27) explicitly, one can choose a convenient reference frame. Let us consider the system in which the kaon and the $`\mu ^+`$ three–momenta have equal magnitude and direction:
$$\stackrel{}{p}_K\stackrel{}{p}_+=\stackrel{}{p}_{}+\stackrel{}{p}_\pi =0.$$
(29)
Denoting by $`y`$ the cosine of the angle between the $`K^+`$ and $`\pi ^+`$ directions in this frame, the functions $`g_{0,1}(z)`$ can be obtained from
$$g_i(z)=F_i(z,q^2)𝑑\mathrm{\Phi }^{}=\frac{1}{16\pi }\frac{\left[(m_K^2m_\pi ^22z)^24m_\mu ^2m_\pi ^2\right]^{1/2}}{m_K^2+m_\mu ^22z}_1^1F_i(z,q^2)𝑑y,$$
(30)
where the $`\mu ^+\mu ^{}`$ invariant mass $`q^2`$ is given in terms of $`z`$ and $`y`$ by
$`q^2`$ $`=`$ $`m_K^2+m_\pi ^2{\displaystyle \frac{(m_K^2z)(m_K^2+m_\pi ^22z)}{m_K^2+m_\mu ^22z}}`$ (32)
$`+{\displaystyle \frac{(z^2m_K^2m_\mu ^2)^{1/2}[(m_K^2m_\pi ^22z)^24m_\pi ^2m_\mu ^2]^{1/2}}{m_K^2+m_\mu ^22z}}y.`$
The advantage of choosing this particular reference frame is that the integration limits for $`y`$ do not depend on the $`K^+`$ or $`\mu ^+`$ momenta. These only enter through the Lorentz invariant product $`z=(p_Kp_+)`$, which does not depend on $`y`$.
Being $`g_0(z)`$ and $`g_1(z)`$ Lorentz–invariant functions, we can move now easily to the LAB reference frame. The total $`K^+\pi ^+\mu ^+\mu ^{}`$ decay rate for polarized $`\mu ^+`$ will be given by
$`\mathrm{\Gamma }(s_+)`$ $`=`$ $`{\displaystyle \frac{1}{2E_K}}(\alpha G_F\mathrm{sin}\theta _C)^2{\displaystyle \frac{d^3p_+}{(2\pi )^32E_+}\left[g_0(z)+(s_+p_K)g_1(z)\right]}`$ (33)
$`=`$ $`{\displaystyle \frac{(\alpha G_F\mathrm{sin}\theta _C)^2}{16\pi ^2E_K}}{\displaystyle _{E_{min}}^{E_{max}}}𝑑E_+{\displaystyle _{h(E_+)}^1}d(\mathrm{cos}\theta )|\stackrel{}{p}_+|\left[g_0(z)+(s_+p_K)g_1(z)\right]`$ (34)
where $`\theta `$ stands for the angle between the $`\mu ^+`$ and the kaon in the LAB system, and $`z`$ is given by
$$z=E_KE_+|\stackrel{}{p}_K||\stackrel{}{p}_+|\mathrm{cos}\theta .$$
(35)
The limits of integration in (34) are found to be
$`\text{}E_{min}`$ $`=`$ $`\{\begin{array}{ccc}m_\mu & ,\hfill & \gamma \frac{E_0}{m_\mu }\\ \text{}\gamma E_0\gamma \beta |\stackrel{}{p}_0|& ,\hfill & \gamma >\frac{E_0}{m_\mu }\end{array}`$ (38)
$`\text{}E_{max}`$ $`=`$ $`\gamma E_0+\gamma \beta |\stackrel{}{p}_0|`$ (39)
$`\text{}h(E_+)`$ $`=`$ $`\{\begin{array}{ccc}1& ,\hfill & \hfill m_\mu E_+\gamma E_0\gamma \beta |\stackrel{}{p}_0|\\ \text{}\frac{\gamma E_+E_0}{\gamma \beta \sqrt{E_+^2m_\mu ^2}}& ,\hfill & \hfill \gamma E_0\gamma \beta |\stackrel{}{p}_0|<E_+\gamma E_0+\gamma \beta |\stackrel{}{p}_0|\end{array}`$ (42)
where $`\gamma `$ and $`\beta `$ are the Lorentz dilation factor and the velocity of the decaying kaon respectively,
$$\gamma =\frac{E_K}{m_K},\beta =\sqrt{1\gamma ^2}=\frac{|\stackrel{}{p}_K|}{E_K},$$
(43)
and $`E_0`$, $`|\stackrel{}{p}_0|`$ are defined as
$$E_0=\frac{m_K}{2}\left(1\frac{m_K^2}{m_\pi ^2}\right)\frac{m_\mu m_K}{m_\pi },|\stackrel{}{p}_0|=\sqrt{E_0^2m_\mu ^2}.$$
(44)
|
no-problem/9907/quant-ph9907085.html
|
ar5iv
|
text
|
# Output Spectrum of Single-Atom Lasers
## I Introduction
In recent years it has become possible to explore the dynamics of a single atom as it passes through a microcavity at low velocity. By this we mean that the transit time for the atom across the cavity mode is on the order of a hundred spontaneous emission lifetimes. This slow atomic beam is generated by dumping cold atoms from a magneto-optical trap into a microcavity . There has been much work in the past on the micromaser/microlaser, where two-level atoms in the excited state are flown through a microcavity of high-Q, and interact with the field mode of the cavity . These systems are quite interesting and exhibit a variety of nonclassical effects, but we wish to examine here the case where the atom starts in the ground state, lives in the cavity mode for many lifetimes with an essentially constant atom-field coupling strength. Of course this coupling strength will change with time as the atom moves through the Gaussian profile of the mode, but we will assume here that the atom-field dynamics are such that the atom is essentially stationary. Hence we have the one-atom limit of a laser pointer; we have a fixed gain medium in a cavity, pumped by an external source, but the gain medium is composed of a single three- or four-level atom. Some previous work has been done on this system. Smith and Gardiner were the first to consider such a system, but did not explore large enough atom-field couplings to obtain interesting results. The key result of their paper was a way to treat single atom systems in a Fokker-Planck approach. Mu and Savage considered several classes of single atom lasers and focused on the photon statistics. They showed that lasing was possible for such a system, and that for large atom-field coupling the single atom laser emitted amplitude squeezed, or antibunched light. For parameters for which the output light was amplitude squeezed, the linewidth of the laser was shown to increase with pump strength rather than decrease as in the usual Schawlow-Townes type fashion. Ritsch and Pellazari also examined the photon statistics of a single atom laser, and they too predict regions of parameter space where the output is amplitude squeezed. Ginzel et. al. considered various aspects of single-atom laser systems, including the observation of a vacuum-Rabi doublet in the output spectrum, but in their work the emphasis was on the application of a new computational approach, that of the damping basis. In that approach, the basis states were eigenstates of the dissipative part of the master equation. Loffler et. al. considered the spectrum of a three-level single atom laser, and predicted vacuum-Rabi structures in the output spectrum. In this work we show that the two-peaked vacuum-Rabi structure vanishes quickly as the pump strength is increased. More recently Jones et. al. have examined the photon statistics of single-atom laser systems, with particular emphasis on how the systems behaved as $`\beta `$ was changed. The parameter $`\beta `$ is the fraction of spontaneous emission into the lasing cavity. It is a parameter of much interest in the microlaser community. As $`\beta `$ tends towards unity, the laser output rises linearly with pump strength; this has been referred to as a “thresholdless” laser. For more on this subject, we refer the reader to recent work on macroscopic laser systems and their dependence on $`\beta `$.
In section 2, we discuss the output spectrum of a three-level incoherently pumped single-atom laser. Section 3 deals with the four-level incoherently pumped single-atom laser. In section 4 we examine the four-level model, but with coherent pumping, to determine if changing the pumping mechanism alters the results. Finally in section 5 we conclude.
A schematic of the system is shown in Figure 1. We adiabatically eliminate the fast transition from the topmost state to the upper lasing level, and use a two-level model, where the incoherent pump is modeled via the $`\mathrm{\Gamma }`$ term in the following master equation,
$`\dot{\rho }`$ $`=`$ $`{\displaystyle \frac{i}{\mathrm{}}}[H_s,\rho ]+\kappa (2a\rho a^{}a^{}a\rho \rho a^{})`$ (3)
$`+{\displaystyle \frac{\gamma }{2}}(2\sigma _{}\rho \sigma _+\sigma _+\sigma _{}\rho \rho \sigma _+\sigma _{})`$
$`+{\displaystyle \frac{\mathrm{\Gamma }}{2}}(2\sigma _+\rho \sigma _{}\sigma _{}\sigma _+\rho \rho \sigma _{}\sigma _+)`$
with the system Hamiltonian given by
$$H_S=i\mathrm{}g(a^{}\sigma _{}a\sigma _+)$$
(4)
The only nonzero density matrix elements are the diagonal elements (populations of the various levels) and coherences between states $`n,1`$ and $`n1,2`$. Here $`n`$ denotes the photon occupation number. The equations for these density matrix elements are
$`{\displaystyle \frac{d\rho _{n,1;n,1}}{dt}}`$ $`=`$ $`2\kappa (n+1)\rho _{n+1,1;n+1,1}\left(2\kappa n+\mathrm{\Gamma }\right)\rho _{n,1;n,1}`$ (6)
$`+\gamma \rho _{n,2;n,2}2\sqrt{n+1}g\rho _{n,1;n1,2}`$
$`{\displaystyle \frac{d\rho _{n,2;n,2}}{dt}}`$ $`=`$ $`2\kappa (n+1)\rho _{n+1,2;n+1,2}\left\{2\kappa n+\gamma \right\}\rho _{n,2;n,2}`$ (8)
$`+\mathrm{\Gamma }\rho _{n,1;n,1}+2\sqrt{n}g\rho _{n,1;n1,2}`$
$`{\displaystyle \frac{d\rho _{n,1;n1,2}}{dt}}`$ $`=`$ $`2\kappa \sqrt{n(n1)}\rho _{n+1,2;n,1}`$ (11)
$`\left\{\kappa (2n1)+{\displaystyle \frac{\mathrm{\Gamma }+\gamma }{2}}\right\}\rho _{n,2;n1,1}`$
$`+\sqrt{n}g\left(\rho _{n1,2;n1,2}\rho _{n,1;n,1}\right)`$
We solve these equations in the steady state, to obtain needed initial conditions for spectrum calculations. These matrix elements can also be used to calculate photon statistics, such as the mean photon number and Fano factor (variance over the mean). Typically we start by truncating the photon basis at some small number (3-10). Calculations are checked at the end to check that the population of the highest photon number states is less than $`10^4`$. If that were not the case, the program that does our calculations repeats the process, but increments the maximal photon number, and rechecks that we are keeping enough photon states. The calculation uses $`\underset{i,n}{}\rho _{i,n;i,n}=1`$ to solve for $`\rho _{0,;0,}`$ in terms of the other diagonal density matrix elements. In much of what follows, we will refer to $`\beta `$, the fraction of spontaneous emission into the cavity mode.
$$\beta =\frac{2g^2/(\gamma +\mathrm{\Gamma }+2\kappa )}{2g^2/(\gamma +\mathrm{\Gamma }+2\kappa )+\gamma /2}$$
(12)
We are interested in calculating the output spectrum of the laser,
$$S(\omega )=\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑\tau e^{i\omega \tau }a^{}(0)a(\tau )=2\mathrm{}\underset{0}{\overset{\mathrm{}}{}}𝑑\tau e^{i\omega \tau }a^{}(0)a(\tau ).$$
(13)
We calculate this spectrum using the quantum regression theorem,
$$a^+(0)a(\tau )=tr\left\{a(0)A(\tau )\right\}=\underset{i,n}{}\sqrt{n+1}i,n+1|A(\tau )|i,n$$
(14)
where $`A(0)=\rho _{SS}a^{}`$ and $`\dot{A}=A`$. The resulting equations can be written in the form
$$\frac{d\stackrel{}{A}}{dt}=\stackrel{}{M}\stackrel{}{A}$$
(15)
The relevant equations for the matrix elements of $`\stackrel{}{A}`$ are
$`{\displaystyle \frac{dA_{n+1,1;n,1}}{dt}}`$ $`=`$ $`2\kappa \sqrt{(n+2)(n+1)}A_{n+2,1;n+1,1}`$ (18)
$`\left(\kappa (2n+1)+\mathrm{\Gamma }\right)A_{n+1,1;n,1}+\gamma A_{n+1,2;n,2}`$
$`+g\sqrt{n+1}A_{n,2;n,1}+g\sqrt{n}A_{n+1,1;n1,2}`$
$`{\displaystyle \frac{dA_{n+1,2;n,2}}{dt}}`$ $`=`$ $`2\kappa \sqrt{(n+2)(n+1)}A_{n+2,2;n+1,1}`$ (21)
$`\left(\kappa (2n+1)+\gamma \right)A_{n+1,2;n,2}+\mathrm{\Gamma }A_{n+1,1;n,1}`$
$`g\sqrt{n+2}A_{n+2,1;n,2}g\sqrt{n+1}A_{n+1,2,n+1,1}`$
$`{\displaystyle \frac{dA_{n+2,1;n,2}}{dt}}`$ $`=`$ $`2\kappa \sqrt{(n+3)(n+1)}A_{n+3,1;n+1,2}`$ (24)
$`\left(\kappa (2n+1)+\gamma /2+\mathrm{\Gamma }/2\right)A_{n+2,1;1,2}`$
$`+g\sqrt{n+2}A_{n+1,2;n,2}g\sqrt{n+1}A_{n+2,1;n+1,1}`$
$`{\displaystyle \frac{dA_{n,2;n,1}}{dt}}`$ $`=`$ $`2\kappa (n+1)A_{n+1,2;n,1}\left(2\kappa +\gamma /2+\kappa /2\right)`$ (26)
$`+g\sqrt{n}A_{n,2;n1,2}g\sqrt{n+1}A_{n+1,1;n,1}`$
After taking the Fourier transform of the above differential equations we have
$$\stackrel{}{\stackrel{~}{A}}(\omega )=\left\{\stackrel{}{M}i\omega \stackrel{}{I}\right\}^1\stackrel{}{A}(0)$$
(27)
with $`\stackrel{}{\stackrel{~}{A}}(\omega )`$ composed of the Fourier transform of $`\stackrel{}{A}(\tau )`$ and then we can easily form the spectrum
$$S(\omega )=\underset{i,n}{}\sqrt{n+1}i,n+1|\mathrm{}\stackrel{~}{A}(\omega )|i,n.$$
(28)
In solving these equations, we truncate the photon basis at the same value of $`n`$ as in the density matrix element equations.
In Figure 2, we plot the output spectrum of the laser for $`g/\gamma =0.1`$ and $`\kappa /\gamma =0.1`$ as a function of pumping strength, $`\mathrm{\Gamma }`$. We see that the lineshape is approximately Lorentzian, and the width decreases initially as the pump strength is increased. All spectra in this paper are normalized so that the integrated spectrum is unity. At larger pumps we see that the linewidth begins to broaden. This is apparent most readily from examining the peak of the spectrum. It rises rapidly with pump rate, reaches a maximum, and slowly goes back down. As the area is the same (by our normalization) this behavior is indicative of the initial narrowing, which reaches a minimum before beginning to broaden. In Figure 3, we plot the linewidth, obtained by fitting a Lorentzian curve, versus pump strength. For comparison, we also plot $`\mathrm{\Delta }\omega _{ST}=\kappa /2n`$, the Schawlow-Townes result. Here the mean photon number $`n`$ has been directly calculated from the steady state density matrix elements via $`n=\underset{n,j}{}nn,j\rho _{SS}n,j`$ Here $`n`$ of course is the photon number index, and $`j=1,2`$ are the atomic level indices. We see that while the linewidth initially decreases with photon number, it is always broader than the Schawlow-Townes result for small pump rates. We see that as the laser turns off with increasing pump strength, the linewidth increases, and goes below the Schawlow-Townes result, but qualitatively it follows that trend. The mean intracavity photon number is plotted as a function of driving field strength, for these same parameters, in Figure 4. We see that the laser does turn off with pump strength. This is a feature of the incoherently pumped three-level laser, both for single atom and macroscopic systems, as shown by Mu and Savage , Jones et. al. , and Koganov and Shuker . This is due to the incoherent nature of the pumping process, which causes the atom to uncouple from the field for high pump rates. This is due to the incoherent pump process decohering the induced dipole on the lasing transition. This can also be viewed as $`\beta 0`$ as $`\mathrm{\Gamma }\mathrm{}`$, that the fraction of spontaneous emission into the cavity mode is pump dependent, and there is no spontaneous emission into the cavity for high pump rates. From the usual arguments of quantum electrodynamics, there is also no stimulated emission. From the viewpoint of quantum trajectory theory, the pump mechanism is a jump process, and the atom becomes trapped in the upper state of the lasing transition. As one increases $`g/\gamma `$ to 0.6, we have found that the single atom laser emits amplitude squeezed light , and that the linewidth indeed increases with pump strength, even for small pumps, as first predicted by Mu and Savage . Further increasing $`g/\gamma `$ to $`1.414`$, as in Figure 5, we see that the spectrum exhibits a double-peaked structure. This has been predicted by Loffler et. al. , and they identified the source of this structure vacuum-Rabi oscillations on the lasing transition. However we see that this structure only exists for very small pump strengths, when there are not very many photons in the cavity. This is not unexpected, as the vacuum-Rabi oscillations are associated with coherent oscillations of the one-photon states $`0,2`$ and $`1,1`$. When states with higher photon number are occupied, the vacuum-Rabi structure vanishes, as many transition frequencies between various dressed states start to appear. 
In Figure 6, we show the spectrum for larger pump values, where one can observe the vacuum-Rabi doublet disappear. Essentially the doublet is coming from spontaneous emission of the strongly coupled atom-cavity system, and in no sense from a laser. The mean photon number versus pump is exhibited in Figure 7. There is of course no well defined threshold for such a microscopic case. We see that the vacuum-Rabi oscillations vanish well before the mean photon number nears unity. Returning our attention to Figure 6 for a moment, we notice that again in this case, at high pump rates when the mean intracavity photon number begins to decrease, the linewidth begins to increase. Again this is easily apparent from the drop in the value of the normalized spectrum at line center, and is consistent with the turn off of the laser.
We gain further insight into this behavior by examining results of quantum trajectory simulations. The conditioned wave function is taken to be
$$|\psi _c(t)\rangle =\sum_{n=0}^{\infty }C_{1,n}(t)e^{-iE_{1,n}t}|1,n\rangle +C_{2,n}(t)e^{-iE_{2,n}t}|2,n\rangle$$
(29)
The coherent evolution of the conditioned wave function obeys the Schrödinger equation with the following non-Hermitian Hamiltonian,
$`H_D`$ $`=`$ $`\hbar (\omega -i\kappa )a^{\dagger }a+i\hbar g\left(a^{\dagger }\sigma _{-}-a\sigma _{+}\right)-i\hbar \frac{\gamma }{2}\sigma _{+}\sigma _{-}-i\hbar \frac{\Gamma }{2}\sigma _{-}\sigma _{+}.`$ (31)
At each time step, the system is subject to collapses of the wavefunction according to the collapse operators
$`\widehat{F}_1`$ $`=`$ $`\sqrt{\gamma }\,\sigma _{-}`$ (32)
$`\widehat{F}_2`$ $`=`$ $`\sqrt{\Gamma }\,\sigma _{+}`$ (33)
$`\widehat{F}_3`$ $`=`$ $`\sqrt{2\kappa }\,a.`$ (34)
The probability of a collapse is proportional to the size of the time step multiplied by $`\langle \psi _c|\widehat{F}^{\dagger }\widehat{F}|\psi _c\rangle `$. A separate random number is used to determine at each time step whether a particular collapse occurs. In the unlikely case that two collapses are to occur in a given time step, another random number is used to determine which actually occurs. Of course, we must choose our time step to be small compared to the fastest rates in the problem to minimize such occurrences. In Figures 8-9 we plot the induced dipole for the lasing transition, and the population of the upper lasing state, for two values of $`\Gamma /\gamma `$ and $`g/\gamma =1.414`$. For smaller pump rates, as in Figure 8, an obvious vacuum-Rabi oscillation is apparent. For larger pump strengths, as in Figure 9, there are effectively no coherent oscillations in the induced dipole or population; hence there is no vacuum-Rabi structure for large pumps. The incoherent pump process interrupts the coherent oscillations. In the quantum trajectory view, the incoherent pump is modeled as an upward jump. A pump event places the atom in the upper state of the lasing transition and kills off the coherence between the two lasing states. As the pump rate increases, the atom becomes trapped in the upper level of the lasing transition and decouples from the field. This eventually results in the mean photon number dropping to zero as the pump rate is increased.
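A single time step of this unraveling can be written schematically as below. This is a sketch under our own naming conventions: `H_eff` and the list `collapse_ops` are assumed to be dense matrix representations of Eq. (31) and Eqs. (32)-(34) built elsewhere, and we set $`\hbar =1`$.

```python
# One Monte Carlo step of the quantum trajectory evolution: either a
# collapse (chosen by a second random number) or no-jump evolution under
# the non-Hermitian Hamiltonian, followed by renormalization.
import numpy as np

def trajectory_step(psi, H_eff, collapse_ops, dt, rng):
    # Collapse probabilities dp_i = dt * <psi|F_i^dag F_i|psi>
    dps = np.array([dt * np.real(np.vdot(F @ psi, F @ psi))
                    for F in collapse_ops])
    if rng.random() < dps.sum():
        # A second random number selects which collapse actually occurs
        i = rng.choice(len(collapse_ops), p=dps / dps.sum())
        psi = collapse_ops[i] @ psi
    else:
        # No-jump evolution, first order in dt
        psi = psi - 1j * dt * (H_eff @ psi)
    return psi / np.linalg.norm(psi)  # renormalize the conditioned state
```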
## II Incoherently Pumped Four-Level Laser
This system is shown schematically in Figure 10. Again, we adiabatically eliminate the level above the upper lasing level. We are left with an effective three-level system, with the incoherent pump modeled as in the above work on the incoherently pumped three-level laser. The master equation for the incoherently pumped four-level laser is then
$`\dot{\rho }`$ $`=`$ $`-\frac{i}{\hbar }[H_S,\rho ]+\kappa (2a\rho a^{\dagger }-a^{\dagger }a\rho -\rho a^{\dagger }a)`$ (38)
$`+\frac{\Gamma }{2}(2\sigma _{13}\rho \sigma _{31}-\sigma _{31}\sigma _{13}\rho -\rho \sigma _{31}\sigma _{13})`$
$`+\frac{\gamma }{2}(2\sigma _{32}\rho \sigma _{23}-\sigma _{23}\sigma _{32}\rho -\rho \sigma _{23}\sigma _{32})`$
$`+\frac{\gamma _f}{2}(2\sigma _{21}\rho \sigma _{12}-\sigma _{12}\sigma _{21}\rho -\rho \sigma _{12}\sigma _{21})`$
with
$$H_S=i\hbar g(a^{\dagger }\sigma _{32}-a\sigma _{23})$$
(39)
and the following definition for atomic raising and lowering operators
$$\sigma _{ij}=|j\rangle \langle i|.$$
(40)
The equations for the nonzero density matrix elements are
$`\frac{d\rho _{n,1;n,1}}{dt}`$ $`=`$ $`2\kappa (n+1)\rho _{n+1,1;n+1,1}-\left\{2\kappa n+\Gamma \right\}\rho _{n,1;n,1}+\gamma _f\rho _{n,2;n,2}`$ (42)
$`\frac{d\rho _{n,2;n,2}}{dt}`$ $`=`$ $`2\kappa (n+1)\rho _{n+1,2;n+1,2}-\left\{2\kappa n+\gamma _f\right\}\rho _{n,2;n,2}+\gamma \rho _{n,3;n,3}+2\sqrt{n}\,g\,\rho _{n,2;n-1,3}`$ (44)
$`\frac{d\rho _{n,3;n,3}}{dt}`$ $`=`$ $`2\kappa (n+1)\rho _{n+1,3;n+1,3}-\left\{2\kappa n+\gamma \right\}\rho _{n,3;n,3}+\Gamma \rho _{n,1;n,1}-2\sqrt{n+1}\,g\,\rho _{n,3;n+1,2}`$ (46)
$`\frac{d\rho _{n,2;n-1,3}}{dt}`$ $`=`$ $`2\kappa \sqrt{n(n+1)}\,\rho _{n+1,2;n,3}-\left\{\kappa (2n-1)+\frac{\gamma }{2}\right\}\rho _{n,2;n-1,3}+\sqrt{n}\,g\left\{\rho _{n-1,3;n-1,3}-\rho _{n,2;n,2}\right\}.`$ (48)
We again calculate the spectrum using the quantum regression theorem. The relevant equations for the matrix elements of $`A`$ in this case are
$`\frac{dA_{n+1,1;n,1}}{dt}`$ $`=`$ $`2\kappa \sqrt{(n+2)(n+1)}A_{n+2,1;n+1,1}-\left(\kappa (2n+1)+\Gamma \right)A_{n+1,1;n,1}+\gamma _fA_{n+1,2;n,2},`$ (50)
$`\frac{dA_{n+1,2;n,2}}{dt}`$ $`=`$ $`2\kappa \sqrt{(n+2)(n+1)}A_{n+2,2;n+1,2}-\left(\kappa (2n+1)+\gamma _f\right)A_{n+1,2;n,2}+\gamma A_{n+1,3;n,3}+g\sqrt{n+1}A_{n,3;n,2}+g\sqrt{n}A_{n+1,2;n-1,3},`$ (52)
$`\frac{dA_{n+1,3;n,3}}{dt}`$ $`=`$ $`2\kappa \sqrt{(n+2)(n+1)}A_{n+2,3;n+1,3}-\left(\kappa (2n+1)+\gamma \right)A_{n+1,3;n,3}+\Gamma A_{n+1,1;n,1}-g\sqrt{n+2}A_{n+2,2;n,3}-g\sqrt{n+1}A_{n+1,3;n+1,2},`$ (54)
$`\frac{dA_{n+2,2;n,3}}{dt}`$ $`=`$ $`2\kappa \sqrt{(n+3)(n+1)}A_{n+3,2;n+1,3}-\left(\kappa (2n+2)+\gamma /2+\gamma _f/2\right)A_{n+2,2;n,3}+g\sqrt{n+2}A_{n+1,3;n,3}-g\sqrt{n+1}A_{n+2,2;n+1,2},`$ (57)
$`\frac{dA_{n,3;n,2}}{dt}`$ $`=`$ $`2\kappa (n+1)A_{n+1,3;n+1,2}-\left(2\kappa n+\gamma /2+\gamma _f/2\right)A_{n,3;n,2}+g\sqrt{n}A_{n,3;n-1,3}-g\sqrt{n+1}A_{n+1,2;n,3}.`$ (59)
After taking the Fourier transform of the above equations, we have
$$\tilde{\vec{A}}(\omega )=\left\{\mathbf{M}-i\omega \mathbf{I}\right\}^{-1}\vec{A}(0)$$
(60)
where $`\tilde{\vec{A}}(\omega )`$ is the Fourier transform of $`\vec{A}(\tau )`$; we can then form the spectrum
$$S(\omega )=\sum_{i,n}\sqrt{n+1}\,\langle i,n+1|\tilde{A}(\omega )|i,n\rangle$$
(61)
In solving these equations, we truncate the photon basis at the same value of $`n`$ as in the density matrix element equations.
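A direct numerical route through Eqs. (60)-(61) is to solve the linear system frequency by frequency. The sketch below is ours: the names `M`, `A0`, and `weights` are assumptions standing in for the regression matrix, the initial-value vector, and the $`\sqrt{n+1}`$ factors mapped onto the vector indexing, all built from the truncated equations above.

```python
# Evaluate Eqs. (60)-(61): invert {M - i*omega*I} at each frequency and
# sum the weighted matrix elements; normalize the integrated spectrum.
import numpy as np

def spectrum(M, A0, weights, omegas):
    I = np.eye(M.shape[0])
    S = np.array([np.real(weights @ np.linalg.solve(M - 1j * w * I, A0))
                  for w in omegas])
    return S / np.trapz(S, omegas)  # integrated spectrum equals unity
```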
In Figure 11, we plot the spectrum of the laser for $`g/\gamma =1.4`$, $`\gamma _f/\gamma =10.0`$, and $`\kappa /\gamma =0.1`$. The spectrum is a single-peaked structure whose linewidth initially decreases with pump strength and asymptotically approaches a limiting value. In Figure 12, we plot the linewidth of the laser spectrum versus pump strength, for various values of $`\beta `$, for $`\kappa /\gamma =0.1`$ and $`\gamma _f/\gamma =10.0`$. Recall that the fraction of spontaneous emission into the lasing cavity, $`\beta `$, is then determined by $`g`$, with $`\kappa `$, $`\gamma _f`$, and $`\gamma `$ fixed. This linewidth is again obtained by fitting a Lorentzian to the output spectrum. For comparison, we also plot the Schawlow-Townes result, $`\kappa /2\langle n\rangle `$, with the mean photon number again calculated from the steady state density matrix. We see that the linewidth is always broader than the Schawlow-Townes limit, but that it decreases with the inverse of the photon number. The photon number pins at large pump rates, as it does no good to pump the single atom laser faster than the fastest decay rate (usually $`\gamma _f`$), and the linewidth also pins at an asymptotic value.

As one increases $`\beta `$ to the range $`0.6`$–$`0.9`$ (i.e. increases $`g`$, with $`\kappa /\gamma `$ and $`\gamma _f/\gamma `$ fixed), the asymptotic value of the linewidth for large pumps begins to increase, as shown in Figure 13. For a value of $`\beta =0.998`$, with $`\kappa /\gamma =0.1`$ and $`\gamma _f/\gamma =100.0`$, the linewidth increases with pump strength. As in the case of the three-level system, this is concurrent with the emitted light being amplitude squeezed; in fact, for all values of $`\beta `$ above $`0.5`$ or so, the light is amplitude squeezed . For the uppermost curve in Figure 13, $`g/\gamma =100.0`$, but there is no chance of vacuum-Rabi oscillations because $`\gamma _f/\gamma =100.0`$, so that the decoherence rate $`\gamma _f`$ equals $`g`$. We can obtain vacuum-Rabi structure in the output spectrum in many cases, for example $`g/\gamma =10.0`$, $`\gamma _f/\gamma =2.0`$, and $`\kappa /\gamma =0.1`$, as shown in Figure 14. There is a single-peaked structure at small pump rates, which evolves into a double-peaked, vacuum-Rabi structure and reaches an asymptotic spectrum at large pumps. The mean intracavity photon number reaches $`\langle n\rangle =2.6`$ for those parameters and large pump, larger than one, as shown in Figure 15. The locations of the two peaks are not well approximated by the imaginary parts of the single-photon eigenvalues for this system, nor are the widths well approximated by the real parts. This is due to the fact that more than the one-photon state is involved in this process.
To understand this behavior, it is again instructive to look at quantum trajectory simulations. We take the conditioned wave function to be
$$|\psi _c(t)\rangle =\sum_{n=0}^{\infty }C_{1,n}(t)e^{-iE_{1,n}t}|1,n\rangle +C_{2,n}(t)e^{-iE_{2,n}t}|2,n\rangle +C_{3,n}(t)e^{-iE_{3,n}t}|3,n\rangle$$
(62)
where again the coherent evolution of this wave function is governed by a Schrödinger equation with the following non-Hermitian Hamiltonian,
$`H_D`$ $`=`$ $`\hbar (\omega -i\kappa )a^{\dagger }a+i\hbar g(a^{\dagger }\sigma _{32}-a\sigma _{23})-i\hbar \frac{\gamma }{2}\sigma _{23}\sigma _{32}-i\hbar \frac{\gamma _f}{2}\sigma _{12}\sigma _{21}-i\hbar \frac{\Gamma }{2}\sigma _{31}\sigma _{13}.`$ (64)
Here there are four associated collapse processes, governed by the following four collapse operators,
$`\widehat{F}_1`$ $`=`$ $`\sqrt{\gamma }\sigma _{32}`$ (65)
$`\widehat{F}_2`$ $`=`$ $`\sqrt{\gamma _f}\sigma _{21}`$ (66)
$`\widehat{F}_3`$ $`=`$ $`\sqrt{\mathrm{\Gamma }}\sigma _{13}`$ (67)
$`\widehat{F}_4`$ $`=`$ $`\sqrt{2\kappa }a.`$ (68)
These trajectories are generated in the same manner as those of the three-level system in the preceding section. The derivation of $`\beta `$ is particularly transparent in the quantum trajectory formalism, using the equations for the probability amplitudes of the various states,
$`\dot{C}_{1,n}`$ $`=`$ $`-\left(\frac{\Gamma }{2}+n\kappa \right)C_{1,n}`$ (69)
$`\dot{C}_{2,n+1}`$ $`=`$ $`-\left(\frac{\gamma _f}{2}+(n+1)\kappa \right)C_{2,n+1}+g\sqrt{n+1}C_{3,n}`$ (70)
$`\dot{C}_{3,n}`$ $`=`$ $`-\left(\frac{\gamma }{2}+n\kappa \right)C_{3,n}-g\sqrt{n+1}C_{2,n+1}.`$ (71)
If $`\gamma _f\gg \gamma ,g,\kappa ,\Gamma `$, then, adiabatically eliminating $`C_{2,n+1}`$, we have
$$\dot{C}_{3,n}=-\left(\frac{\gamma }{2}+n\kappa \right)C_{3,n}-\frac{g^2(n+1)}{\kappa (n+1)+\gamma _f/2}\,C_{3,n}.$$
(72)
In the case of $`n=0`$, we may read off
$$\beta =\frac{2g^2/(\gamma _f+2\kappa )}{2g^2/(\gamma _f+2\kappa )+\gamma /2}$$
(73)
as the fraction of spontaneous emission into the cavity mode.
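Equation (73) is easy to evaluate for the parameter sets used in the figures. The sketch below (all rates in units of $`\gamma `$) is only a numerical check; the function name is ours.

```python
# Eq. (73): fraction of spontaneous emission into the cavity mode.
def beta(g, gamma, gamma_f, kappa):
    r = 2.0 * g**2 / (gamma_f + 2.0 * kappa)
    return r / (r + gamma / 2.0)

print(beta(g=1.4, gamma=1.0, gamma_f=10.0, kappa=0.1))  # ~0.43
print(beta(g=10.0, gamma=1.0, gamma_f=2.0, kappa=0.1))  # ~0.99
```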
Why is there no double-peaked structure in the spectrum for small pumps? Examining the temporal evolution of the induced dipole on the lasing transition in Figure 16, we see that for small pump strengths the dipole is usually zero; an essentially random interval passes before the next oscillation begins, which then lasts for some variable length of time. We see similar behavior in the population of the upper lasing level. For larger pump strengths, as in Figure 17, the dipole is often interrupted by a jump to the ground state of the system (a $`\gamma _f`$ event), which is then swiftly followed by a pump event. This sequence most often occurs when the atom has vacuum-Rabi flopped to the lower lasing level. The coherent vacuum-Rabi oscillations then begin again. The interruptions due to the collapses also broaden the vacuum-Rabi peaks, but the dipole is still mainly periodic, if not purely sinusoidal. Again, we see similar behavior in the population of the upper lasing level. (We also note that the two-peaked structure remains even when the mean intracavity photon number is well above one.)
## III Four-Level Coherently Pumped Laser
In this section we examine a coherently pumped four-level single atom laser, which is shown schematically in Figure 18. The master equation for this system is given by
$`\dot{\rho }`$ $`=`$ $`-\frac{i}{\hbar }[H_S,\rho ]+\kappa (2a\rho a^{\dagger }-a^{\dagger }a\rho -\rho a^{\dagger }a)`$ (76)
$`+\frac{\gamma }{2}(2\sigma _{32}\rho \sigma _{23}-\sigma _{23}\sigma _{32}\rho -\rho \sigma _{23}\sigma _{32})`$
$`+\frac{\gamma _f}{2}(2\sigma _{21}\rho \sigma _{12}-\sigma _{12}\sigma _{21}\rho -\rho \sigma _{12}\sigma _{21}),`$
with
$$H_S=i\hbar g(a^{\dagger }\sigma _{32}-a\sigma _{23})+i\hbar E_{pump}(\sigma _{41}-\sigma _{14}),$$
(77)
where again $`\sigma _{ij}=|j\rangle \langle i|`$.
The nonzero density matrix elements satisfy the following equations,
$`\frac{d\rho _{n,1;n,1}}{dt}`$ $`=`$ $`2\kappa (n+1)\rho _{n+1,1;n+1,1}-2\kappa n\rho _{n,1;n,1}+\gamma _{21}\rho _{n,2;n,2}+2\Gamma \rho _{n,4;n,1}`$ (79)
$`\frac{d\rho _{n,2;n,2}}{dt}`$ $`=`$ $`2\kappa (n+1)\rho _{n+1,2;n+1,2}-\left\{2\kappa n+\gamma _{21}\right\}\rho _{n,2;n,2}+\gamma \rho _{n,3;n,3}+2\sqrt{n}\,g\,\rho _{n,2;n-1,3}`$ (81)
$`\frac{d\rho _{n,3;n,3}}{dt}`$ $`=`$ $`2\kappa (n+1)\rho _{n+1,3;n+1,3}-\left\{2\kappa n+\gamma \right\}\rho _{n,3;n,3}+\gamma _{43}\rho _{n,4;n,4}-2\sqrt{n+1}\,g\,\rho _{n,3;n+1,2}`$ (83)
$`\frac{d\rho _{n,4;n,4}}{dt}`$ $`=`$ $`2\kappa (n+1)\rho _{n+1,4;n+1,4}-\left\{2\kappa n+\gamma _{43}\right\}\rho _{n,4;n,4}-2\Gamma \rho _{n,4;n,1}`$ (85)
$`\frac{d\rho _{n,2;n-1,3}}{dt}`$ $`=`$ $`2\kappa \sqrt{n(n+1)}\,\rho _{n+1,2;n,3}-\left\{\kappa (2n-1)+\frac{\gamma }{2}\right\}\rho _{n,2;n-1,3}+\sqrt{n}\,g\left\{\rho _{n-1,3;n-1,3}-\rho _{n,2;n,2}\right\}`$ (87)
$`\frac{d\rho _{n,1;n,4}}{dt}`$ $`=`$ $`2\kappa (n+1)\rho _{n+1,1;n+1,4}-\left\{2\kappa n+\frac{\gamma _{43}}{2}\right\}\rho _{n,1;n,4}+\Gamma \left\{\rho _{n,4;n,4}-\rho _{n,1;n,1}\right\}.`$ (89)
We again calculate the spectrum using the quantum regression theorem. The relevant equations for the matrix elements of $`\vec{A}`$ in this case are
$`\frac{dA_{n+1,1;n,1}}{dt}`$ $`=`$ $`2\kappa \sqrt{(n+2)(n+1)}A_{n+2,1;n+1,1}-\kappa (2n+1)A_{n+1,1;n,1}+\gamma _fA_{n+1,2;n,2}+E\left(A_{n+1,4;n,1}+A_{n+1,1;n,4}\right)`$ (91)
$`\frac{dA_{n+1,2;n,2}}{dt}`$ $`=`$ $`2\kappa \sqrt{(n+2)(n+1)}A_{n+2,2;n+1,2}-\left(\kappa (2n+1)+\gamma _f\right)A_{n+1,2;n,2}+\gamma A_{n+1,3;n,3}+g\sqrt{n+1}A_{n,3;n,2}+g\sqrt{n}A_{n+1,2;n-1,3}`$ (93)
$`\frac{dA_{n+1,3;n,3}}{dt}`$ $`=`$ $`2\kappa \sqrt{(n+2)(n+1)}A_{n+2,3;n+1,3}-\left(\kappa (2n+1)+\gamma _4\right)A_{n+1,3;n,3}-g\sqrt{n+2}A_{n+2,2;n,3}-g\sqrt{n+1}A_{n+1,3;n+1,2}`$ (95)
$`\frac{dA_{n+1,4;n,4}}{dt}`$ $`=`$ $`2\kappa \sqrt{(n+2)(n+1)}A_{n+2,4;n+1,4}-\left(\kappa (2n+1)+\gamma _4\right)A_{n+1,4;n,4}-E\left(A_{n+1,1;n,4}+A_{n+1,4;n,1}\right)`$ (97)
$`\frac{dA_{n+2,2;n,3}}{dt}`$ $`=`$ $`2\kappa \sqrt{(n+3)(n+1)}A_{n+3,2;n+1,3}-\left(\kappa (2n+2)+\gamma /2+\gamma _f/2\right)A_{n+2,2;n,3}+g\sqrt{n+2}A_{n+1,3;n,3}-g\sqrt{n+1}A_{n+2,2;n+1,2}`$ (99)
$`\frac{dA_{n,3;n,2}}{dt}`$ $`=`$ $`2\kappa (n+1)A_{n+1,3;n+1,2}-\left(2\kappa n+\gamma /2+\gamma _f/2\right)A_{n,3;n,2}+g\sqrt{n}A_{n,3;n-1,3}-g\sqrt{n+1}A_{n+1,2;n,3}`$ (101)
$`\frac{dA_{n+1,4;n,1}}{dt}`$ $`=`$ $`2\kappa \sqrt{(n+2)(n+1)}A_{n+2,4;n+1,1}-\left(\kappa (2n+1)+\gamma _4\right)A_{n+1,4;n,1}+E\left(A_{n+1,4;n,4}-A_{n+1,1;n,1}\right).`$ (103)
As in the case of the incoherently pumped four-level laser,
$$\beta =\frac{2g^2/(\gamma _f+2\kappa )}{2g^2/(\gamma _f+2\kappa )+\gamma /2}.$$
(104)
Figure 19 presents the linewidth of the output spectrum for various values of $`\beta `$, with $`\gamma _4/\gamma =10.0`$, $`\kappa /\gamma =0.1`$, and $`\gamma _f/\gamma =10.0`$. The results are in qualitative agreement with those of the incoherently pumped four-level laser, although there is a notable difference in the rate of initial increase/decrease. In Figure 20, we plot the output spectrum in the regime where vacuum-Rabi structures are present. Again, the results are qualitatively the same as for the incoherently pumped model, with the persistence of vacuum-Rabi structures for large pumps and mean intracavity photon numbers above unity.
## IV Conclusions
In this chapter we have examined the output spectrum of several types of single atom laser systems. For the incoherently pumped three-level model, we find that for atom-field couplings at the lower end of the range needed to produce photons in the cavity, the spectrum is approximately a Lorentzian with a width broader than the Schawlow-Townes width. The width initially decreases with increasing pump strength as the photon number increases. The laser linewidth then broadens with further increases in pump strength, and the photon number decreases with increasing pump strength. If the atom-field coupling is increased so that $`\beta =0.5`$, the laser emits amplitude squeezed light. Since the output field of a laser is not a minimum uncertainty state, owing to phase diffusion, a decrease in amplitude noise does not require the phase noise (linewidth) to increase, but it does so here. If we increase the atom-field coupling to a value larger than all the other rates in the system, we find a vacuum-Rabi structure in the output spectrum, as predicted by Loffler et al. This structure persists only for very small pump rates, as the incoherent pump rapidly decoheres the induced dipole. At moderate to large pump rates, the spectrum is single-peaked.
The incoherently pumped four-level laser also has a single-peaked, approximately Lorentzian spectrum for smaller atom-field couplings. The linewidth decreases with the inverse of the photon number, but is always broader than the Schawlow-Townes limit. The photon number and linewidth both pin at asymptotic values as the pump is increased. This is due to the fact that it does no good to pump the system at a rate faster than the ground state of the atom is replenished by decay from the lower lasing level. As $`\beta `$ is increased to $`0.5`$ or so, the system emits amplitude squeezed light, and the linewidth increases with pump strength as in the case of the three-level system. Finally, as $`g`$ is made larger than all the other rates in the system, the output spectrum has a vacuum-Rabi structure that persists for large pumps, even when the mean intracavity photon number is greater than unity. In this regime the spectrum is nevertheless single-peaked for small pump rates, even though $`g`$ is the largest rate, which might suggest a double-peaked spectrum. This has been explained using quantum trajectory simulations.
We have further considered a coherently pumped four-level single atom laser; the results are very similar to those of the incoherently pumped four-level laser. It is hoped that, with recent advances in experimental techniques, these types of systems will be examined in the laboratory soon.
# A mesoscopic Tera-hertz pulse detector
It is well known that the characteristic frequencies of electronic processes in mesoscopic systems are in the terahertz range. This follows from transition energies that are typically on the meV scale. For example, instabilities in transport through asymmetric double barrier systems (ADBS) may give rise to terahertz oscillations. Other time dependent processes, such as the charging and discharging of the well in these systems, are also expected to take place in the range of picoseconds. This suggests that an ADBS may react as a fast switch to the passage of a terahertz pulse, a possibility that we explore in this work.
ADBS are characterized by a bistable region of the bias, produced by charge accumulation in the space between the barriers (the well). The collector barrier is made wider with the purpose of increasing the lifetime of the resonance in the well, thus enhancing the amount of charge that is retained when current goes through. At a critical bias $`V_c`$ the current drops abruptly due to a sudden emptying of the well, driven by an instability that may be understood using a nonlinear model. The dynamics is dominated by the accumulated charge which, in effect, lifts the bottom of the potential well, thus retaining the resonance condition beyond $`V_c`$ for ballistic transmission of an incoming electron. There is a point at which this charge is so large that it becomes favourable to spill it out; the well is emptied and the resonance condition is lost, followed by a large current drop. Similarly, when the well is uncharged it will remain so, even as the bias is decreased to values smaller than $`V_c`$. At a second critical value $`V_c^{*}`$ the resonance condition is fulfilled again and current flows once more. This completes the bistable cycle, a signature of which is the fact that $`V_c^{*}<V_c`$.
In this work we propose that an ADBS device biased slightly below $`V_c`$ will still undergo a transition, triggered by the passage of radiation in the terahertz region. The external field introduces an additional oscillating field that may in effect bring the bias to criticality. We also contend that a radiation field may trigger the onset of resonant transport if the system is biased slightly above $`V_c^{*}`$. Optical radiation will have no effect in either case, since the field then oscillates so fast that the electrons have no time to respond. Only at terahertz and lower frequencies would one expect the system to switch from a state of high current to one of low electron flow in the presence of an incident radiation pulse, or vice versa.
Consider an ADBS under bias and in the presence of an electromagnetic field polarized along the z-axis, the growth direction. In order to study the time evolution of the device we adopt a first-neighbors tight-binding model for the hamiltonian. The radiation field enters as a space and time dependent voltage. To a good approximation the longitudinal degrees of freedom are decoupled from the transverse motion and may be treated independently. The probability amplitude $`b_j^\alpha `$ for an electron in a time dependent state $`|\alpha \rangle `$, to be at plane $`j`$ along $`z`$, is determined by the equation of motion
$`i\hbar \frac{db_j^\alpha }{dt}`$ $`=`$ $`\left(ϵ_j(t)+U\sum_{\beta }|b_j^\beta |^2\right)b_j^\alpha +v\left(b_{j-1}^\alpha +b_{j+1}^\alpha -2b_j^\alpha \right).`$ (1)
In this expression $`ϵ_j(t)`$ includes the fixed band contour, the external radiation-induced voltage $`\delta E\,\mathrm{sin}(2\pi \nu t)`$, and the applied dc bias, the latter represented by a term linear in the spatial coordinate $`j`$. The sum over $`\beta `$ covers all occupied electron states and $`v`$ is the hopping matrix element between nearest neighbor planes. In writing Eq. (1) we have adopted a Hartree model for the electron-electron interaction, keeping just the intra-atomic terms as measured by the effective coupling constant $`U`$. As we will show in what follows, this nonlinear term is of key importance in the behavior of the system.
The time dependent Eq. (1) is solved using a half-implicit numerical method which is second-order accurate and unitary . Boundary conditions must be specified at the left ($`z=-L`$) and right ($`z=L`$) edges of the structure. The approach taken here assumes that the wave function at time $`t`$ is given outside the structure by
$$b_j^\alpha (t)=\left(Ie^{ik_\alpha z_j}+R_j(t)e^{-ik_\alpha z_j}\right)e^{-iϵ^\alpha t/\hbar },\qquad z_j\le -L$$
(3)
$$b_j^\alpha (t)=T_j(t)e^{ik_\alpha ^{\prime }z_j}e^{-iϵ^\alpha t/\hbar }e^{-i\delta E\mathrm{cos}(2\pi \nu t)/h\nu },\qquad z_j\ge L.$$
(4)
Here $`k_\alpha `$ and $`k_\alpha ^{\prime }=\sqrt{2m^{*}[ϵ^\alpha -ϵ_L]}/\hbar `$ are the wavenumbers of the incoming and outgoing states, respectively, with $`ϵ^\alpha =4v\,\mathrm{sin}^2(k_\alpha a/2)`$ the energy of the incoming particle. To model the interaction with the particle reservoir outside the structure, the incident amplitude $`I`$ is assumed to be a constant independent of the coordinates. The envelope functions of the reflected and transmitted waves, $`R_j`$ and $`T_j`$, are allowed to vary with $`j`$, however. Since far from the barriers these quantities are a weak function of the coordinate $`z_j`$, we restrict ourselves to the linear corrections only. This approximation is appropriate provided the time step $`\delta t`$ does not exceed a certain limiting value. For the results presented here, a value of $`\delta t=3\times 10^{-17}`$ s was found sufficient to eliminate spurious reflections at the boundary while maintaining numerical stability up to $`40\times 10^{-12}`$ s. In our numerical procedure the coefficients obtained without electromagnetic field for a given dc bias are used as initial condition when the THz radiation is turned on. With $`b_j^\alpha (t)`$ known, the time dependent current at site $`j`$ is obtained numerically from
$$J_j(t)=\frac{e}{\hbar }\int _0^{k_f}\mathrm{Im}\{b_j^{\alpha *}(b_{j+1}^\alpha -b_j^\alpha )\}(k_f^2-k_\alpha ^2)\,dk_\alpha ,$$
(5)
where $`k_f=\sqrt{2m^{*}ϵ_f}/\hbar `$, with $`ϵ_f`$ the Fermi energy.
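The half-implicit scheme referred to above can be sketched as a Crank-Nicolson step for Eq. (1). This is a minimal sketch with $`\hbar =1`$; the open-boundary terms of Eqs. (3)-(4) are omitted for brevity, and all names (`eps`, `hartree`, etc.) are our assumptions.

```python
# One half-implicit (Crank-Nicolson) step of Eq. (1) for a single state
# alpha. `eps` holds the instantaneous site energies (band contour + dc
# bias + radiation term) and `hartree` the term U*sum_beta |b_j^beta|^2;
# both are assumed to be updated outside this routine.
import numpy as np

def cn_step(b, eps, hartree, v, dt):
    n = len(b)
    H = np.diag(eps + hartree - 2.0 * v) + v * (np.eye(n, k=1) + np.eye(n, k=-1))
    A = np.eye(n) + 0.5j * dt * H      # implicit half step
    B = np.eye(n) - 0.5j * dt * H      # explicit half step
    return np.linalg.solve(A, B @ b)   # second-order accurate and unitary

def site_current(b, j):
    # Integrand of Eq. (5) for one k state (up to the e/hbar prefactor)
    return np.imag(np.conj(b[j]) * (b[j + 1] - b[j]))
```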
We next apply our model to an asymmetric GaAs/AlGaAs double barrier structure, with emitter and collector barrier thicknesses of 1.12 nm (2 sites) and 3.36 nm (6 sites), respectively, and a well thickness of 11.2 nm (20 sites). The second barrier is made wider than the first in order to enhance the trapping of charge in the well. For this geometry the first resonance at zero bias occurs at 30 meV. The conduction band offset is set at 300 meV. The buffer layers are uniformly doped up to 3 nm from either barrier, so as to give a neutralizing free carrier concentration of $`2\times 10^{17}\ \mathrm{cm}^{-3}`$ at the contacts. In equilibrium, the Fermi level lies 19.2 meV above the asymptotic conduction band edge, so that the zero bias resonance lies well above the Fermi sea. The contribution to the potential due to the applied bias is taken into account through a term linear in $`j`$, which is assumed to arise from fixed charges. The parameter values in Eq. (1) are set at $`v`$ = -2.16 eV and $`U`$ = 100 meV. The latter was chosen phenomenologically so as to fit the experimental J-V characteristic for GaAs devices. The sample has 400 sites and the normalization of the wave functions is chosen so that charge from the electrons filling up to the Fermi energy exactly cancels the positive charge at the contacts. We solved Eq. (1) using the procedure described above, for an energy mesh appropriate to compute the integral in Eq. (5). Good convergence was found for a mesh of 100 points.
Figure 1 shows the current-voltage characteristic in the absence (solid line) and presence (dashed line) of radiation of amplitude $`\delta E`$ = 10 meV at $`\nu =`$ 1 THz. In the latter case we exhibit an average of the current over time. Note that at the chosen values of the parameters, $`V_c=0.320`$ V and $`V_c^{*}=0.282`$ V. It is clear from the figure that the radiation field narrows down the region of bistability, in agreement with previous results by Iñarrea and Platero . This effect is to be expected, since the time dependent field added to the bias brings the system periodically to criticality when the applied dc bias has not reached this condition yet, thus triggering the charging or discharging of the well.
In Fig. 2 we show the time evolution of the charge density at the center of the well for different frequencies of the radiation field at $`V`$ = 0.310 V (empty circle in Fig. 1), a bias slightly below $`V_c`$. The condition for resonant tunneling is still met, conduction is allowed, and the well is initially charged. A THz field of the same amplitude as for Fig. 1 is turned on at $`t=0`$, and as the radiation passes through the system the well empties, doing so in a few picoseconds. Two characteristic times are involved in the data: the external radiation period $`T_r=1/\nu `$ and the time $`T_w\approx 3`$ ps it would take for our well to empty if charge is initially in it. When $`T_r\gg T_w`$ the radiation field essentially acts as an added dc bias and the well empties within the time $`T_w`$, while in the other extreme, $`T_r\ll T_w`$, the oscillation is so fast that the electrons cannot respond and the system remains charged and conducting.
Figure 3 shows a situation in which the well is initially empty at a bias of $`V`$ = 0.285 V, slightly above the critical value $`V_c^{*}`$ (empty square in Fig. 1). It is physically reached by lowering the bias after it has gone beyond $`V_c`$. Once again the THz field is switched on at $`t=0`$. We observe that in all cases exhibited the well begins to charge, and after a transient time the system enters full resonance and current flows.
The above results assumed a radiation field of fixed amplitude $`\delta E`$ = 10 meV. One may ask how close to the critical value $`V_c`$ the system must be biased in order to act as a switch for weaker radiation fields. This is shown in Fig. 4 for $`\nu `$ = 0.33 THz, assuming the switching to take place within a time $`\tau `$ = 17 ps. The bias offset is defined as $`\delta V=V_c-V`$. The region above the curve (labeled YES) is where the potential drop takes place within the time $`\tau `$, while the region below (labeled NO) is where the switching does not take place in that time interval. Within the range of our calculations we found the shape of the curve in Fig. 4 to be generic, shifting upwards as the frequency increases. Close to the origin the dependence is approximately linear and for the chosen frequency follows the relation $`\delta E\approx \frac{1}{2}(1+\delta V)`$, with $`\delta E`$ and $`\delta V`$ in meV. Using this expression we find that at a bias offset $`\delta V`$ = 1 meV our device would switch under radiation of about $`50\ \mathrm{W/cm^2}`$ and stronger. The sensitivity could be improved using a wider collector barrier, thus having a narrower resonance (longer lifetime $`T_w`$). Because of limitations due to numerical instabilities, this ansatz would be best tested experimentally.
In summary, we have shown that an asymmetric double barrier heterostructure may act as a switch triggered by the passage of electromagnetic radiation at frequencies in the terahertz region and below. The frequency threshold for this switching action depends on the barrier and enclosed well widths. Depending on the applied external bias, the passage of current is turned on or off by the radiation pulse. Our results rely on the current drop as the resonance in the well falls below the emitter conduction band edge, a feature also present in symmetric double barrier heterostructures. In the latter case however, the drop is not an instability of the system and does not take place abruptly, a desirable feature for a switching device.
Work supported by FONDECYT grants No. 1990425 and No. 1990443.
Entropy Bounds and String Cosmology
G. Veneziano
Theoretical Physics Division, CERN
CH-1211 Geneva 23, Switzerland
Abstract
> After discussing some old (and not-so-old) entropy bounds both for isolated systems and in cosmology, I will argue in favour of a “Hubble entropy bound” holding in the latter context. I will then apply this bound to recent developments in string cosmology, show that it is naturally saturated throughout pre-big bang inflation, and claim that its fulfilment at later times has interesting implications for the exit problem of string cosmology.
Why is the second law of thermodynamics valid even when the microscopic evolution equations are invariant under time-reversal? The standard answer to this old question (see e.g. ) is simple: it is because the Universe started in a low-entropy state and has not yet reached its maximal attainable entropy. But then, what is this maximal possible value of entropy, and why has it not already been reached after so many billion years of cosmic evolution? In this talk I will argue that, perhaps, there is a simple answer to these last two questions, at least in the context of string cosmology. But let us proceed step by step.
In 1981 J. Bekenstein proposed what he called a “universal” entropy bound for isolated objects. We will refer to it as the Bekenstein entropy bound (BEB) , which states that, for any isolated physical system of energy $`E`$ and size $`R`$, the usual thermodynamic entropy is bounded by (throughout this paper we stress functional dependences while ignoring numerical factors, and set $`c=1`$):
$$S\le S_{BB}=ER/\hbar .$$
(1)
I will skip the arguments that led Bekenstein to formulate his bound and just stress that, in 18 years, no counterexample to it has been found.
The so-called holographic principle of ’t Hooft, Susskind and others , suggests an apparently unrelated holographic bound on entropy (HOEB) according to which entropy cannot exceed one unit per Planckian area of its boundary’s surface. In formulae:
$$S\le S_{HOB}=A/l_P^2.$$
(2)
I will now argue that the BEB actually implies the HOEB. Indeed:
$$S_{BB}=\frac{GER}{G\hbar }=\frac{R_sR}{l_P^2}\le S_{HOB}=\frac{R_{eff}^2}{l_P^2},\qquad R_s\equiv GE,$$
(3)
where $`R_{eff}`$ appearing in the holography bound is $`R`$ if $`R>R_s`$ (a non-collapsed object), but has to be identified with $`R_s`$ if the object is inside its own Schwarzschild radius (is itself a black hole). In the latter case the two bounds coincide and are saturated.
Incidentally, the BEB has an amusing application to (weakly coupled) string theory. Since string entropy is $`O(\alpha ^{\prime }E/l_s)`$ (one unit per string length $`l_s=\sqrt{\alpha ^{\prime }\hbar }`$), it satisfies the BEB iff $`R>l_s`$. Thus, in string theory, one cannot have black holes with Schwarzschild radius smaller than $`l_s`$ (with a Hawking temperature larger than the string’s Hagedorn temperature) .
The situation for isolated systems in flat space-time looks uncontroversial. How can we try to extend these considerations to a cosmological set up? Let us first pretend that we can use the naive BEB or holography bounds to an arbitrary sphere of radius $`R`$, cut out of a homogeneous cosmological space. Entropy in cosmology is extensive, i.e. it grows like $`R^3`$. But the boundary’s area grows like $`R^2`$: therefore, at sufficiently large $`R`$, the (naive) holography bound must be violated! On the other hand, $`S_{BB}ERR^4`$ appears to be safer at large $`R`$. How can this be, since we just argued that the BEB implies the HOEB? The explanation is simple: when $`R`$ becomes very large, the corresponding $`R_s`$ exceeds $`R`$; nevertheless, we kept using $`R`$ in the HOEB since we no longer had a black-hole interpretation for the sphere. Obviously, we have to rethink everything within a cosmological setting!
In order to show how inadequate the naive bounds are in cosmology, let us apply them at $`t\sim t_P\sim 10^{-43}\ \mathrm{s}`$, within standard FRW cosmology, to the region of space that has become our visible Universe today. The size of that region at $`t\sim t_P`$ was about $`10^{30}l_P`$ and the entropy density was of Planckian order. Thus:
$`S\sim (10^{30})^3=10^{90},`$ (4)
$`S_{BB}\sim \rho R^4/\hbar \sim R^4/l_P^4\sim 10^{120},\qquad S_{HOB}\sim R^2/l_P^2\sim 10^{60}.`$
Clearly the actual entropy lies at the geometric mean between the two naive bounds, making one false and the other quite useless!
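The orders of magnitude quoted here are easy to reproduce; the following sketch, in Planck units and to order of magnitude only, restates Eq. (4) together with the geometric-mean relation that will reappear below as Eq. (9). The last line anticipates that result and is our addition.

```python
# Naive entropy bounds for the region that became today's visible Universe,
# evaluated at t ~ t_P in Planck units (l_P = hbar = 1, entropy density ~ 1).
R = 1e30                       # comoving size of the region at t ~ t_P
S = R**3                       # actual entropy: ~1e90
S_BB = R**4                    # naive Bekenstein bound: ~1e120
S_HOB = R**2                   # naive holographic bound: ~1e60
S_HB = (S_BB * S_HOB) ** 0.5   # geometric mean, cf. Eq. (9): ~1e90
print(f"{S:.0e} {S_BB:.0e} {S_HOB:.0e} {S_HB:.0e}")
```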
It was indeed realized by their respective proponents that both the BEB and the HOEB need revision in a cosmological context. In 1989 Bekenstein proposed that the BEB applies to a region as large as the particle horizon $`d_p`$:
$$d_p(t)=a(t)_{t_{beg}}^t𝑑t^{}/a(t^{}).$$
(5)
The same conclusion (with an important caveat, see below) was reached by Fischler and Susskind (FS) in their cosmological generalization of the HOEB.
There is one very welcome property of both the Cosmological BEB and the FS bound: they appear to be saturated around the Planck time (when they can be shown to be equivalent) and could thus justify the initially “low” entropy value. Actually, one finds that the bound is saturated at $`tN_{eff}^{1/2}t_p`$ and is violated at earlier times if one trusts General Relativity so far inside the strong-curvature region. This result was used by Bekenstein to argue that the Big Bang singularity must be spurious.
It is interesting to compare the two bounds again, now in their cosmological variants. They are related as follows:
$$S_{CBB}\sim M(r<d_p)\,d_p/\hbar =\rho d_p^4/\hbar =(Hd_p)^2d_p^2/l_P^2=(Hd_p)^2S_{CHOB},$$
(6)
where, with increasingly baroque notation, we have added a $`C`$ to distinguish the cosmological versions of the two bounds and we have used Friedmann’s equation $`G\rho =H^2`$ to relate energy density to the Hubble parameter $`H=\dot{a}/a`$.
We note that the two bounds differ by a factor $`(Hd_p)^2`$. While such a factor is $`O(1)`$ in FRW-type cosmologies, it can be huge after a long period of inflation, i.e. $`O\left((a_{end}/a_{beg})^2\right)`$, the square of the total amount of red-shift suffered during inflation, which has to be at least as large as $`10^{60}`$. For this reason the CHOEB (FSB hereafter) appears to be stronger than the CBEB, just the opposite of what we argued to be the case for isolated systems.
The tight nature of the FSB led some authors to derive constraints from it on inflationary parameters. This, however, came from a misinterpretation of the FSB (this point was clarified after my talk through several discussions with Fischler and Susskind; see also Ref. ). Correctly interpreted, the FSB does not apply to entropy produced by non-adiabatic processes occurring in the bulk. In any inflationary scenario, most of the present entropy is the result of processes of this type (reheating due to dissipation of the inflaton’s potential energy at the end of inflation ) and should therefore be excluded. As a result, the FSB puts no constraints on inflation, but also becomes phenomenologically uninteresting in recent epochs, since it ignores most of the present entropy. On the contrary, the FSB appears to exclude closed, recollapsing universes , or those driven by a small negative cosmological constant .
Two groups tried to apply the FSB to pre-big bang (PBB) cosmology. A problem arises, however, since the particle horizon $`d_p`$ is infinite in PBB (the integral in Eq. (5) diverges at its lower limit, $`t_{beg}\to -\infty `$). One of the groups insisted on using $`d_p`$ nonetheless, and concluded that the PBB initial state has to be empty. The second group replaced the particle horizon with the event horizon (which is finite in PBB and infinite in FRW) and found very mild constraints. Very recently, Bousso proposed to change the FS prescription by replacing $`d_p`$ with yet another scale, and thus managed to avoid the above-mentioned problems with a recollapsing universe. In the rest of this talk I will argue in favour of a different cosmological entropy bound, which is unambiguous and appears to give sensible results. I will then apply it to the PBB scenario.
Consider a sufficiently homogeneous Universe with its (local, time-dependent) Hubble expansion rate defined, in the synchronous gauge, by:
$$6H=2K=\partial _t(\mathrm{log}\,g),\qquad g\equiv \mathrm{det}(g_{ij}),$$
(7)
where, as usual, $`K`$ denotes the trace of the second fundamental form on constant $`t`$ hypersurfaces. We assume $`H`$ to vary little (percentage-wise) over distances $`O(H^{-1})`$. In this case $`H^{-1}`$, the so-called Hubble radius, corresponds to the scale of causal connection, i.e. to the scale within which microphysics can act.
As long as we consider, on top of this homogeneous background, isolated lumps of size much smaller than $`H^{-1}`$, the expansion of the Universe is irrelevant, and we should fall back on the non-cosmological, asymptotically flat case. In particular, we can imagine putting several black holes in a single Hubble patch and computing their entropy. We can make them coalesce and watch the consequent entropy increase (mass adds up, but entropy is proportional to its square). However, this way of increasing entropy has some limit. It is hard to imagine that a black hole larger than $`H^{-1}`$ can form, since different parts of its horizon would be unable to hold together. Actually, strong arguments in this direction were given long ago in the literature (see also ). Thus, the largest entropy we may conceive for a region of space larger than $`H^{-1}`$ is the one corresponding to one black hole per Hubble volume $`H^{-3}`$. Using the Bekenstein–Hawking formula for the entropy of a black hole of size $`H^{-1}`$ leads to the proposal , of a “Hubble entropy bound” (HEB):
$$S(V)<S_{HB}\equiv n_HS_H=(VH^3)(l_P^{-2}H^{-2})=VHl_P^{-2},$$
(8)
where $`n_H`$ is the number of Hubble-size regions within the volume $`V`$, each one carrying maximal entropy $`S_H=l_P^{-2}H^{-2}`$. A possible relation between the HEB and a generalized second law of thermodynamics has also been discussed .
Note that the bound (8) is partly holographic (the $`S_H`$ part goes like an area) and partly extensive (the $`n_H`$ part goes like the volume). If we apply the HEB to a region of size $`d_p`$ we find, amusingly:
$$S_{HB}=d_p^3Hl_P^{-2}=S_{CBB}^{1/2}S_{FS}^{1/2}.$$
(9)
It is easy to show that the above relation is sufficient to avoid any problem with entropy produced at reheating after inflation. Also, the HEB coincides with the CBEB and FSB at Planckian times in FRW cosmology and it is thus as saturated as they are. In the rest of this talk I will concentrate on applying the HEB to the PBB scenario, showing that, in that context, the above saturation is not accidental.
In order to discuss various forms of entropy in the PBB scenario, let us recall some results that have emerged from recent studies of the question of initial conditions in string cosmology (see for a recent review). It has been argued that very natural initial conditions, corresponding to generic gravitational and dilatonic waves superimposed on the trivial, perturbative vacuum of critical superstring theory (flat space-time and constant dilaton), lead to a form of stochastic PBB. In the Einstein-frame metric, this can be seen as a sort of chaotic gravitational collapse which is bound to occur, owing to gravitational instability through the Hawking–Penrose theorems , provided a (scale and dilaton shift invariant) collapse criterion is met. Black holes of different sizes will thus form but, for an observer measuring distances inside each black hole with a stringy meter, this is experienced as a PBB inflationary cosmology in which the (hopefully fake) $`t=0`$ big bang singularity is identified with the (hopefully equally fake) black hole singularity at $`r=0`$ . Since the duration (and efficiency) of the inflationary phase is controlled by the size of the black hole, we are led to identify our observable Universe with what became of a portion of space that was originally inside a sufficiently large black hole.
It is helpful to follow the evolution of various contributions to (and bounds on) entropy with the help of Fig. 1. At time $`t=t_i`$, corresponding to the first appearance of a horizon, we can use the Bekenstein–Hawking formula to argue that
$$S_{coll}\sim (R_{in}/l_{P,in})^2\sim (H_{in}l_{P,in})^{-2}=S_{HB},$$
(10)
where we have used the fact that the initial size of the black-hole horizon determines also the initial value of the Hubble parameter. Thus, at the onset of collapse/inflation, entropy, without any fine-tuning, is as large as allowed by the HEB. As a confirmation of this, note that $`S_{coll}`$ is also on the order of the number of quanta needed for collapse to occur . We have implicitly assumed the initial state of the system to be at small string coupling: consequently, quantum fluctuations are very small, and contribute, initially, a negligible amount $`S_{qf}`$ to the total entropy.
After a short transient phase, dilaton-driven inflation (DDI) should follow and last until $`t_s`$, the time at which a string-scale curvature is reached. We expect this classical process not to generate further entropy (unless more energy flows into the black hole, but this would only increase its total comoving volume), but what happens to the HEB? Well, it stays constant, thus keeping the bound saturated, as the result of a well known “conservation law” of string cosmology , which reads $`(l_P^2=e^\varphi l_s^2)`$
$$\partial _t\left(e^{-\varphi }\sqrt{g}H\right)=\partial _t\left((\sqrt{g}H^3)(e^{-\varphi }H^{-2})\right)=\partial _t\left(n_HS_H\right)=0.$$
(11)
Comparing this with (8), we recognize that (11) simply expresses the time independence of the HEB during the DDI phase. At the beginning of the DDI phase $`n_H=1`$, and the whole entropy is in a single Hubble volume; however, as DDI proceeds, the same total amount of entropy becomes equally shared between very many Hubble volumes until, eventually, each one of them contributes a small number. Also, if we assume that the string coupling is still small at the end of DDI, we can easily argue that the entropy in quantum fluctuations remains at a negligible level during that phase.
Is this going to continue indefinitely? Hopefully not: we want to exit from the DDI phase and enter, eventually, some kind of FRW cosmology! This is the well-known exit problem of string cosmology . Two diagrams can be helpful when discussing this problem. In Fig. 2 we plot, on a linear scale, the Hubble parameter against a (duality-invariant) combination of the rate of growth of the dilaton and $`H`$. DDI lies in the first quadrant of this plane, FRW in the second. If exit occurs, the two branches should smoothly connect (dotted line). In Fig. 3, we show instead, on a log-log plot, $`H`$ as a function of the string coupling. DDI solutions now correspond to the parallel straight lines going upwards to the right. Different straight lines correspond to different initial conditions (different classical moduli). The horizontal boundary corresponds to the reach of string-scale curvatures, where $`\alpha ^{}`$ corrections should become essential in order to prevent the singularity.
Let us assume for the moment initial conditions such that we hit this boundary while the coupling is still small and ask whether the HEB may come to our help. In fact, since the HEB is saturated all the time during DDI, it cannot decrease after this phase ends. This condition reads:
$$\partial _t(e^{-\varphi }\sqrt{g}H)\ge 0\quad \Rightarrow \quad (\dot{\varphi }-3H)\le \dot{H}/H.$$
(12)
This constraint is very welcome. As $`\alpha ^{\prime }`$ corrections intervene to stop the growth of $`H`$, the HEB forces $`\dot{\varphi }-3H`$ to decrease, and even to change sign if $`H`$ stops growing. But this is just what is needed to make the DDI branch flow into the FRW branch in Fig. 2!
Consider now the second possibility , the case in which strong coupling is reached first, i.e. while the curvature is still small in string units. In this case we can neglect $`\alpha ^{}`$ corrections but not loop corrections, particle production, and back-reaction effects. When will exit occur? It has been assumed that it does when the energy in the quantum fluctuations (which can be easily estimated ) becomes critical, i.e. when
$$\rho _{qf}\sim N_{eff}H_{max}^4=\rho _{cr}=e^{-\varphi _{exit}}M_s^2H_{max}^2,$$
(13)
where $`N_{eff}`$ is the effective number of particle species produced. This gives the exit condition $`N_{eff}e^{\varphi _{exit}}l_s^2H_{max}^2=1`$, i.e. the rightmost boundary in Fig. 3. Let us show that this is also the boundary where the HEB is saturated. Using known results on entropy production due to the cosmological squeezing of vacuum fluctuations , and the previous constraint, we find:
$$S_{qf}^{(\mathrm{ex})}\sim N_{eff}H_{max}^3V\sim e^{-\varphi _{exit}}l_s^{-2}H_{max}V=l_P^{-2}H_{max}V\sim S_{HB}^{(\mathrm{ex})}.$$
(14)
Note that the existence of this boundary can also be argued for on the basis of $`M`$-theory: Kaluza–Klein modes living in the 11th dimension become tachyonic when this critical line is reached.
In conclusion, the entropy and arrow-of-time problems are neatly solved, in PBB cosmology, by the identification of our observable Universe with the interior of a large, primordial black hole. The entropy of the black hole is large because of its size ($`>10^{20}l_s`$), and therefore, as with other features of PBB cosmology, this can be objected to as huge fine-tuning . My answer to this objection, as to the others, is simple: the classical collapse/inflation process is scale-free; it should therefore lead to a flattish distribution of horizon sizes, extending from a minimal stringy size to very large “macroscopic” scales. Given such a size, no other ratio is tuned to a particularly large or small value. Next, there is a built-in mechanism to provide saturation of the HEB till the end of the DDI phase, and for the HEB to force an exit to the radiation-dominated FRW phase. From there on, the entropy budget story is simple: our entropy remains, to date, roughly constant and around $`10^{90}`$, while $`S_{HB}`$ keeps increasing (at somewhat different rates) during the radiation- and matter-dominated epochs, reaching about $`10^{120}`$ today. Thus our entropy has still a long way to go while it keeps fixing our arrow-of-time!
It is a pleasure to thank the organizers of this meeting for their kind invitation and to wish François many more years of highly rewarding research.
## Figure Captions
FIGURE 1. Comparison of our meson cloud model with data for $`(\overline{d}-\overline{u})`$. The solid line is for $`\mathrm{\Lambda }_\pi =0.83`$ GeV, for which $`D=\int _0^1(\overline{d}-\overline{u})\,dx=1.0`$. The dashed lines are for $`\mathrm{\Lambda }_\pi =0.78`$ GeV and $`\mathrm{\Lambda }_\pi =0.88`$ GeV, the range of values constrained by the experimental error of $`\pm 0.18`$ in $`D`$.
FIGURE 2. Comparison of our meson cloud model with data for $`\overline{d}/\overline{u}`$. The solid line ($`\mathrm{\Lambda }_\pi =0.83`$) is for $`\frac{g_\omega ^2}{4\pi }=8.1`$ and $`\mathrm{\Lambda }_\omega =1.5`$ GeV. The dashed line shows our result if the $`\omega `$ cloud contribution is omitted.
# An Ultraviolet Fe II Image of SN 1885 in M31
## 1. Introduction
SN 1885 (S Andromedae) in the Andromeda galaxy M31 was the first supernova recorded in another galaxy (Zwicky (1958)). The supernova, which occurred just $`15\stackrel{\prime \prime }{.}6\pm 0\stackrel{\prime \prime }{.}1`$ from the central nucleus of M31, reached a peak $`V`$ magnitude of 5.85 in August 1885 (de Vaucouleurs & Corwin (1985)). At the $`725\pm 70`$ kpc distance of M31, and allowing for 0.23 mag of extinction, this corresponds to an absolute magnitude of $`M_V=-18.7`$ (van den Bergh (1994)), some 0.8 mag fainter than the peak magnitude $`M_V=-19.48\pm 0.07`$ of normal SN Ia (Tammann & Reindl (1999)). This, combined with SN 1885’s unusually fast light curve and reddish color near maximum light (de Vaucouleurs & Corwin (1985)), points to a subluminous Type Ia event similar to SN 1991bg (Filippenko (1997)).
A little over a century later, the remnant of SN 1885 (SNR 1885) was detected through a near-UV filter ($`3900\pm 100\mathrm{\AA }`$) as a spot of absorption silhouetted against the starlight of M31’s central bulge (Fesen, Hamilton, & Saken (1989)). Fesen et al. attributed the absorption to the resonance line of Fe I $`3860\mathrm{\AA }`$, consistent with the expected presence of a large mass of iron in a Type Ia supernova.
Subsequent near-UV WFPC2 imaging and FOS spectroscopy with the Hubble Space Telescope (HST) revealed that the principal source of absorption is not Fe I, but rather Ca II H & K, freely expanding at velocities up to $`13\,100\pm 1500\ \mathrm{km}\ \mathrm{s}^{-1}`$ (Fesen et al. (1999), hereafter Paper 1). In addition to strong Ca II H & K absorption, the FOS spectrum showed similarly broad but weaker absorption from Ca I $`4227\mathrm{\AA }`$ and Fe I $`3720\mathrm{\AA }`$ (uv5), and possibly $`3441\mathrm{\AA }`$ (uv6) and $`3860\mathrm{\AA }`$ (uv4).
The Fe I absorption observed in the FOS spectrum implies a mass of $`M_{\mathrm{FeI}}=0.013_{-0.005}^{+0.010}\,\mathrm{M}_{\odot }`$ in the ejecta of SNR 1885. The observed relative strengths of the Ca II and Ca I lines indicate that calcium is mostly singly ionized, with $`M_{\mathrm{CaII}}/M_{\mathrm{CaI}}=16_{-5}^{+42}`$, the large upward uncertainty reflecting the near saturation of the Ca II H & K feature. If the ionization state of iron is similar to that of calcium, with $`M_{\mathrm{FeII}}/M_{\mathrm{FeI}}\sim 10`$–$`50`$, then the corresponding Fe II mass is $`M_{\mathrm{FeII}}\sim 0.1`$–$`0.7\,\mathrm{M}_{\odot }`$. Such a large mass of iron is consistent with what is expected in normal or subluminous Type Ia supernovae (Höflich & Khokhlov (1996); Höflich, Wheeler & Thielemann (1998); Nomoto, Thielemann, & Yokoi (1984); Nomoto et al. (1997); Woosley & Weaver (1994); Woosley (1997)). Thus on both observational and theoretical grounds there is good reason to expect that the remnant of SN 1885 should show strong Fe II resonance line absorption.
In this paper we report the detection of Fe II absorption in a UV image of SNR 1885 obtained with the WFPC2 on HST.
## 2. Observations
### 2.1. Images
The strongest resonance lines of Fe II are the 2600, $`2587\mathrm{\AA }`$ (uv1) and 2382, 2344, $`2374\mathrm{\AA }`$ (uv2, uv3) complexes. A model fit to the near-UV spectrum reported in Paper 1 predicts that these Fe II resonance lines should form a broad, deep, blended profile that is fortuitously well matched to the WFPC2 F255W ($`2597\pm 200\mathrm{\AA }`$) filter on HST.
Three UV exposures were taken with the WFPC2 and F255W filter over 3 orbits on 16 Feb 1999. The bulge of M31, though bright enough to see with the naked eye in the visible, is quite faint in the UV, and special measures were taken to ensure detection of SNR 1885. While sky brightness was negligible (about 0.01 of the signal), both readout noise and dark counts were significant. Dark counts were reduced by centering SNR 1885 in the WF2 chip, which has the lowest dark count of the WF and PC chips. Readout noise was reduced by minimizing the number of readouts, which was accomplished by $`2\times 2`$ on-chip binning (AREA mode), and by extending each exposure over a full orbit, 2700 s each. Finally, the effect of hot and cold pixels was mitigated by dithering the three images along a diagonal line, by two binned pixels (4 unbinned pixels) in each of the horizontal and vertical directions.
The long exposures increased the risk of contamination by cosmic rays, but this risk was considered acceptable in the interest of reducing noise. Cosmic rays were removed by applying the crrej routine in STSDAS to the three exposures. Approximately 10% of the binned pixels in each 2700 s exposure were affected by cosmic rays. Cold pixels were removed by applying the cosmicray routine in IRAF to the negative of the cosmic-ray-removed WF2 image.
Figure 1 shows the resulting cleaned, coadded WF2 image. SNR 1885 shows up as a dark spot of Fe II absorption. Since the absorbing region at SNR 1885’s position was partially contaminated by cosmic rays in both the first and third images, the pattern of absorption visible in the UV close-up in Figure 1 is determined to a considerable degree by the second image. We estimate the diameter of the dark Fe II spot to be $`0\stackrel{\prime \prime }{.}55\pm 0\stackrel{\prime \prime }{.}15`$, slightly smaller than, but consistent with, the diameter $`0\stackrel{\prime \prime }{.}70\pm 0\stackrel{\prime \prime }{.}05`$ of the Ca II spot measured in Paper 1. The position of the dark spot is consistent with (within one binned pixel of) that measured from the higher quality Ca II WFPC2 image of Paper 1, which was $`15\stackrel{\prime \prime }{.}04\pm 0\stackrel{\prime \prime }{.}1`$ west and $`4\stackrel{\prime \prime }{.}1\pm 0\stackrel{\prime \prime }{.}1`$ south of the nucleus of M31.
The cleaned image shown in Figure 1 shows of the order of a hundred point sources. These point sources appeared only if the three exposures were correctly registered. If instead one or more exposures were misaligned, then most of the point sources disappeared, being rejected as cosmic rays by crrej. Visual inspection in several cases confirmed that the point sources that survive screening by crrej occur in all three exposures, and look like stars on each exposure. We therefore conclude that, while a handful of the point sources may be cosmic ray artifacts, the majority ($`\sim 90\%`$) of them are real stars.
Most of the stars in the Fe II ($`2600\mathrm{\AA }`$) image are not apparent in the Ca II ($`3900\mathrm{\AA }`$) image from Paper 1, although the mottled appearance of the Ca II image suggests incipient resolution into stars (Lauer et al. (1998)). That resolved stars appear only at shorter wavelengths is consistent with previous UV imaging of the bulge of M31 by Bertola et al. (1995), Brown et al. (1998), and Lauer et al. (1998). Current observational evidence and theoretical ideas, reviewed by O’Connell (1999), suggest that the UV ($`2500\mathrm{\AA }`$) light observed in old populations such as the bulge of Andromeda is dominated by low-mass, thin-envelope stars in extreme (hot) horizontal branch and subsequent phases of evolution. The more luminous UV-bright stars, such as those observed here, are undergoing hydrogen- and helium-shell burning in later phases of evolution following core helium burning on the horizontal branch. Further discussion of this issue goes beyond the scope of this paper.
### 2.2. UV Count Levels for SNR 1885
In the UV absorption image of SNR 1885, the observable quantity that can be compared to theoretical expectation is the fractional depth of absorption produced by the remnant against background starlight from the bulge of M31. The fractional depth of absorption follows from three quantities: (a) the zero-level of counts from dark current plus readout, (b) the background level of counts from starlight in regions adjacent to SNR 1885, and (c) the level of counts in SNR 1885 itself.
The zero-level of counts, from dark current plus readout, was estimated from averages of counts in the darkest regions of the cosmic-ray-removed WF2 image. Measurements at the centers of the darkest dust lanes and in regions farthest from the bulge gave a consistent zero-level of $`27\pm 1\mathrm{DN}`$ (data numbers) per $`2\times 2`$ binned pixel for the coadded $`3\times 2700\mathrm{s}=8100\mathrm{s}`$ image. At a gain of $`7.12\mathrm{cts}\mathrm{DN}^{-1}`$, this corresponds to a zero-level of $`192\pm 7`$ counts per binned pixel. The uncertainty here is an estimate of the uncertainty in the mean zero-level, not a measure of the variation in pixel to pixel counts, which is larger. The measured zero-level agrees well with the expected zero-level, which comprises dark counts of $`(0.0030\pm 0.0005)\mathrm{cts}\mathrm{s}^{-1}\mathrm{pix}^{-1}\times 4\mathrm{pix}\times 3(2700+120)\mathrm{s}=102\pm 17\mathrm{cts}`$ (the uncertainty is the systematic variation in the dark current of the WF2 chip, while the $`120\mathrm{s}`$ added to each $`2700\mathrm{s}`$ exposure is the unexposed dark time), plus readout counts of $`3\times 5.51^2=91\mathrm{cts}`$, for a total expected zero-level of $`193\pm 17\mathrm{cts}`$ per binned pixel.
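As a cross-check, the count budget above is simple to reproduce; the sketch below (Python; the numerical values are those quoted in the text, the variable names are ours) carries out the arithmetic:

```python
# Expected zero-level per 2x2-binned pixel for the coadded 3 x 2700 s image.
dark_rate, dark_rate_err = 0.0030, 0.0005  # cts / s / unbinned pixel (WF2)
n_unbinned = 4                             # unbinned pixels per binned pixel
t_dark = 3 * (2700 + 120)                  # exposure plus unexposed dark time, s

dark = dark_rate * n_unbinned * t_dark               # ~102 cts
dark_err = dark_rate_err * n_unbinned * t_dark       # ~17 cts (systematic)
readout = 3 * 5.51**2                                # ~91 cts from three readouts
print(f"zero-level = {dark + readout:.0f} +/- {dark_err:.0f} cts")  # ~193 +/- 17
```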
The level of background starlight against which SNR 1885 is seen in absorption was estimated from clean regions adjacent to SNR 1885. The average and dispersion of the counts in these regions was $`38\pm 4\mathrm{DN}`$ per binned pixel, equivalent to $`271\pm 28\mathrm{cts}`$ per binned pixel. Subtracting the zero-level of $`192\pm 7\mathrm{cts}`$ gives a background starlight level of $`79\pm 29\mathrm{cts}`$ per binned pixel. This translates into a surface brightness of $`2.0\pm 0.7\times 10^{-17}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}\mathrm{arcsec}^{-2}\mathrm{\AA }^{-1}`$ at $`2600\mathrm{\AA }`$ in the vicinity of SNR 1885. This surface brightness is consistent with the mean surface brightness measured by IUE of $`4.4\times 10^{-17}\mathrm{erg}\mathrm{s}^{-1}\mathrm{cm}^{-2}\mathrm{arcsec}^{-2}\mathrm{\AA }^{-1}`$ at $`2600\mathrm{\AA }`$ through a $`154\mathrm{arcsec}^2`$ racetrack-shaped aperture centered on the nucleus of M31 (Burstein et al. (1988); see also Bertola et al. (1995); Brown et al. (1998)).
Counts in SNR 1885 itself were estimated from the $`2\times 2`$ ($`0\stackrel{}{\mathrm{.}}4\times 0\stackrel{}{\mathrm{.}}4`$) block of binned pixels centered at the position measured from the Ca II image of Paper 1. In the Fe II image, the dark absorbing region associated with SNR 1885 appears to extend over a block 2 binned pixels ($`0\stackrel{}{\mathrm{.}}4`$) wide (east-west) by 3 binned pixels ($`0\stackrel{}{\mathrm{.}}6`$) high (north-south). However, since the northern 2 binned pixels in the $`2\times 3`$ block lie partially outside the Ca II absorbing region, we conservatively chose to estimate the counts in SNR 1885 only from the southern $`2\times 2`$ block of binned pixels.
In this $`2\times 2`$ block of binned pixels at SNR 1885’s position, the southern 2 of the 4 pixels were contaminated by cosmic rays in each of the first and third exposures, but the second exposure was clean of cosmic rays. Thus there are 8 independent measurements of counts in the interior of SNR 1885: from 2 pixels in each of the first and third exposures, and from 4 pixels in the second exposure. The average and dispersion of the 8 measurements was $`9.9\pm 1.2\mathrm{DN}`$ per binned pixel per exposure, which at the gain of $`7.12\mathrm{cts}\mathrm{DN}^{-1}`$ is equivalent to $`70.4\pm 8.5\mathrm{cts}`$ per binned pixel per exposure. The dispersion of $`8.5\mathrm{cts}`$ is consistent with the dispersion $`\sqrt{70.4}=8.4`$ expected from counting noise. The uncertainty in the mean counts is $`1/\sqrt{8}`$ times the dispersion, that is, $`8.5\mathrm{cts}/\sqrt{8}=3.0\mathrm{cts}`$. Multiplying by 3 to scale to the coadded exposure time gives a mean count of $`211\pm 9\mathrm{cts}`$ per binned pixel over the combined $`8100\mathrm{s}`$ exposure. Subtracting the zero-level of $`192\pm 7\mathrm{cts}`$ yields a mean count of $`19\pm 11\mathrm{cts}`$ per binned pixel in the interior of SNR 1885.
The ratio of the net counts $`19\pm 11\mathrm{cts}`$ inside SNR 1885 to $`79\pm 29\mathrm{cts}`$ in adjacent regions yields the observed fractional depth of absorption, $`0.24\pm 0.17`$.
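The chain of estimates in this subsection is ordinary Poisson statistics plus quadrature error propagation; a minimal sketch (all numbers from the text):

```python
import numpy as np

gain = 7.12                                # cts per DN
mean_cts = 9.9 * gain                      # ~70.4 cts / binned pixel / exposure
err_mean = 1.2 * gain / np.sqrt(8)         # dispersion of 8 measurements / sqrt(8)

snr, snr_err = 3 * mean_cts, 3 * err_mean  # coadded 8100 s: ~211 +/- 9 cts
zero, zero_err = 192.0, 7.0                # zero-level (dark + readout)
bkg, bkg_err = 79.0, 29.0                  # adjacent background starlight

net = snr - zero                           # ~19 cts
net_err = np.hypot(snr_err, zero_err)      # ~11 cts
depth = net / bkg                          # fractional depth of absorption
depth_err = depth * np.hypot(net_err / net, bkg_err / bkg)
print(f"depth = {depth:.2f} +/- {depth_err:.2f}")   # ~0.24 +/- 0.17
```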
## 3. Analysis
The depth of absorption of SNR 1885 observed with the F255W filter, $`0.24\pm 0.17`$, may be compared to the ratio expected on the basis of a model fit to the 3240–$`4780\mathrm{\AA }`$ absorption line FOS spectrum reported in Paper 1.
The model spectrum includes absorption from all non-negligible resonance lines of neutral and singly-ionized species of C, O, Mg, Al, Si, S, Ar, Ca, and iron-group elements with atomic numbers from 22 to 30, namely Ti, V, Cr, Mn, Fe, Co, Ni, Cu, and Zn. Relative masses of these elements were set equal to those in the normal SN Ia model DD21c of Höflich et al. (1998).
The adopted ejecta density profile is the best fit to the Ca I and Ca II absorption line profiles in the FOS spectrum reported in Paper 1. The best-fit density profile $`n(v)`$ is bell-shaped, a quartic function of free-expansion velocity $`v`$ up to a maximum velocity $`v_{\mathrm{max}}=\mathrm{13\hspace{0.17em}100}\mathrm{km}\mathrm{s}^{-1}`$:
$$n(v)\propto \left[1-(v/v_{\mathrm{max}})^2\right]^2,(v<v_{\mathrm{max}}).$$
(1)
The available data offer no evidence that the ejecta are compositionally stratified: the Ca II and Fe II absorption images of SNR 1885 are consistent with being the same size, and the spectral absorption line profiles of Ca I, Ca II, and Fe I are similarly consistent with being the same. We therefore assume that the compositional structure is fully mixed, so that all elements follow the same density profile.
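For concreteness, the profile of equation (1) is trivial to evaluate; a short sketch (the normalization is arbitrary since only the shape of $`n(v)`$ is constrained):

```python
import numpy as np

def ejecta_density(v, v_max=13100.0):
    """Bell-shaped (quartic) free-expansion density profile of eq. (1).

    v and v_max in km/s; returns an unnormalized density, zero beyond v_max.
    """
    v = np.asarray(v, dtype=float)
    return np.where(v < v_max, (1.0 - (v / v_max) ** 2) ** 2, 0.0)

# n(v) falls to half its central value at v = v_max * sqrt(1 - 2**-0.5) ~ 7090 km/s
print(ejecta_density([0.0, 7090.0, 13100.0]))   # ~[1.0, 0.5, 0.0]
```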
### 3.1. Photoionization
As described in Paper 1, the freely expanding ejecta in SNR 1885 appear to be in the process of becoming optically thin to photoionizing radiation, and are currently undergoing a period of photoionization by ambient UV starlight out of neutral into the singly-ionized state, as originally argued by Hamilton & Fesen (1991). Recombination is negligible, with recombination times exceeding a hundred times the age of the remnant.
Calculations of the photoionization of Ca and Fe, the two elements observed in the G400H FOS spectrum, were reported in Paper 1. Here we complete the account by reporting photoionization calculations for all elements of interest.
The expected ratios of singly-ionized to neutral species of various elements depend on how fast they are photoionized out of the assumed initially neutral state by ambient UV starlight. Photoionization timescales can be estimated fairly reliably, at least in the limit where the supernova ejecta are treated as optically thin, since the spectrum of photoionizing starlight in the bulge of M31 is observed directly with IUE (Burstein et al. (1988)) and HUT (Ferguson & Davidsen (1993)). Table 1 lists optically thin photoionization times of all elements considered here, computed as detailed in Paper 1. Photoionization cross-sections were taken from Verner et al. (1996). The uncertainty in the photoionization timescales quoted in Table 1 includes only that arising from uncertainty in the position of SNR 1885 relative to the central nucleus of M31 along the line of sight, as estimated in Paper 1, not uncertainty from the reddening correction or from photoionization cross-sections.
The photoionization timescales given in Table 1 are for optically thin ejecta, whereas in fact the ejecta are expected to be optically thick in broad bands of the ultraviolet, thanks to resonance line absorption by neutrals and singly-ionized species. Moreover, the expanding ejecta would have been more optically thick in the past. An accurate evaluation of the expected ionization structure of SNR 1885 would involve a self-consistent time-dependent computation of photoionization and radiative transfer in the freely expanding ejecta, such as was done for SNR 1006 by Hamilton & Fesen (1988). However, the present data are too limited, and the choice of underlying supernova model too uncertain, to warrant such a computation.
Instead, we estimate the ionization state of different elements from the optically thin photoionization timescales in Table 1, together with the observational datum from the FOS spectrum that calcium is mostly singly ionized, with $`M_{\mathrm{CaII}}/M_{\mathrm{CaI}}=16_{-5}^{+42}`$. According to Table 1, the optically thin photoionization timescale of Ca I is $`t_{\mathrm{CaI}}=8_{-1}^{+10}\mathrm{yr}`$. To reach the observed ionization state of calcium requires an effective time $`t`$ given by $`\mathrm{exp}(-t/t_{\mathrm{CaI}})=M_{\mathrm{CaI}}/(M_{\mathrm{CaI}}+M_{\mathrm{CaII}})`$, implying $`t=23_{-6}^{+30}\mathrm{yr}`$. In other words, it is as if calcium has been ionizing not for the full age of the remnant ($`110`$ years at the time of the 1995 FOS observation), but rather only for $`20`$–$`50`$ years, because the remnant is only now becoming optically thin to photoionizing radiation.
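The effective time follows in one line from the observed Ca ionization state; schematically (central values only):

```python
import numpy as np

t_CaI = 8.0        # yr, optically thin Ca I photoionization timescale (Table 1)
ca_ratio = 16.0    # observed M_CaII / M_CaI (central value)

# exp(-t / t_CaI) = M_CaI / (M_CaI + M_CaII)  =>  t = t_CaI * ln(1 + ca_ratio)
t_eff = t_CaI * np.log(1.0 + ca_ratio)
print(f"effective ionization time: {t_eff:.0f} yr")   # ~23 yr
```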
If this effective time $`t`$ is used instead of the age of the remnant, then the predicted ratio of singly-ionized to neutral species of element Z is
$$M_{\mathrm{ZII}}/M_{\mathrm{ZI}}=\left(1+M_{\mathrm{CaII}}/M_{\mathrm{CaI}}\right)^{t_{\mathrm{Ca}}/t_\mathrm{Z}}1,$$
(2)
values of which are given in the third column of Table 1. The quoted error on the ratio depends only on the uncertainty in the observed $`M_{\mathrm{CaII}}/M_{\mathrm{CaI}}`$ ratio, since uncertainty in the photoionization timescales cancels in the ratio $`t_{\mathrm{Ca}}/t_\mathrm{Z}`$, to the extent that uncertainties in the relative photoionization cross-sections are neglected, as here.
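Equation (2) then propagates the Ca measurement to any element Z; a hedged sketch (the timescale ratio used below is illustrative, chosen to reproduce the $`M_{\mathrm{FeII}}/M_{\mathrm{FeI}}=14`$ quoted in section 3.2; the actual $`t_{\mathrm{Ca}}/t_\mathrm{Z}`$ values are those of Table 1, which we do not reproduce here):

```python
def ion_ratio(ca_ratio, t_ratio):
    """Predicted M_ZII / M_ZI from eq. (2).

    ca_ratio : observed M_CaII / M_CaI
    t_ratio  : t_Ca / t_Z, ratio of optically thin photoionization timescales
    """
    return (1.0 + ca_ratio) ** t_ratio - 1.0

print(ion_ratio(16.0, 1.0))    # sanity check: recovers the Ca input, 16.0
print(ion_ratio(25.0, 0.83))   # illustrative t_Ca/t_Fe ~ 0.83 gives ~14 for iron
```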
### 3.2. Model spectrum
The upper panel of Figure 2 shows a model absorption line spectrum based on the fit to the 3240–$`4780\mathrm{\AA }`$ spectrum observed with the G400H grating on the FOS (Paper 1). Since singly-ionized to neutral ratios in the best-fit model are skewed to the low end of the allowed range $`M_{\mathrm{CaII}}/M_{\mathrm{CaI}}=16_{-5}^{+42}`$ (from which other ionization fractions follow in accordance with eq. 2), we choose to show not the ‘best-fit’ model, with $`M_{\mathrm{CaII}}/M_{\mathrm{CaI}}=16`$, but rather a ‘typical’ model, with $`M_{\mathrm{CaII}}/M_{\mathrm{CaI}}=25`$, the geometric mean of the allowed range $`M_{\mathrm{CaII}}/M_{\mathrm{CaI}}=11`$–58. This spectrum is similar (differing in the adopted ionization fractions) to the model spectrum shown in Figure 4 of Paper 1, plotted there over the extended range 900–$`5000\mathrm{\AA }`$. The range 1900–$`3300\mathrm{\AA }`$ covered in Figure 2 here includes resonance lines from Mg I, Mg II, Al I, Si I, V I, V II, Cr II, Ti II, Mn I, Mn II, Fe I, Fe II, Ni I, and Zn II, although the contributions from Al I, Si I, and Ti II are negligible.
Starlight to the foreground of SNR 1885 is not absorbed. The fraction of foreground starlight was measured in Paper 1 from the depth and shape of the Ca II H & K lines to be $`0.21_{-0.12}^{+0.06}`$. The model shown in Figure 2 uses the best-fit foreground starlight fraction of $`0.21`$.
While the ionization fractions in the model spectrum are fixed by the observed ionization state of Ca, and the relative masses of elements are fixed by Höflich et al.’s (1998) theoretical SN Ia model DD21c, the overall depth of absorption is scaled so that the depth of Fe I absorption is as observed in the FOS spectrum. The corresponding mass of neutral iron is $`M_{\mathrm{FeI}}=0.013\mathrm{M}_{\odot }`$. At the level of ionization adopted in Figure 2, the ionization state of iron is $`M_{\mathrm{FeII}}/M_{\mathrm{FeI}}=14`$, for a total Fe mass of $`M_{\mathrm{FeI}}+M_{\mathrm{FeII}}=0.20\mathrm{M}_{\odot }`$.
The lower panel of Figure 2 shows the expected spectrum from the bulge of M31 as measured by IUE (Burstein et al. (1988)), normalized to the surface brightness at the position of SNR 1885, and multiplied by the throughput of the F255W filter. The lower panel of Figure 2 also shows the expected filtered and absorbed spectrum at the position of SNR 1885, which is the filtered bulge spectrum multiplied by the absorption curve in the top panel. The ratio of the area under the SNR 1885 spectrum to the area under the bulge spectrum is the predicted fractional depth of absorption, the expected ratio of counts inside to outside SNR 1885 in the F255W image.
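Schematically, the predicted fractional depth is a throughput-weighted ratio of the absorbed to the unabsorbed bulge spectrum; a minimal sketch (the function and array names are hypothetical; the wavelength grid, the IUE bulge spectrum, the F255W throughput curve, and the model absorption profile are all assumed given):

```python
import numpy as np

def predicted_depth(wave, bulge, throughput, absorption):
    """Expected ratio of counts inside to outside SNR 1885 in the F255W image.

    absorption is the model transmission of the remnant (1 = unabsorbed).
    """
    filtered = bulge * throughput                 # filtered bulge spectrum
    remnant = filtered * absorption               # filtered, absorbed spectrum
    return np.trapz(remnant, wave) / np.trapz(filtered, wave)
```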
The ratio of counts inside to outside SNR 1885 in the model shown in Figure 2 is $`0.33`$, consistent with the observed ratio of $`0.24\pm 0.17`$. Increasing the level of ionization deepens the absorption slightly: for ionization states varying over $`M_{\mathrm{CaII}}/M_{\mathrm{CaI}}=16_{-5}^{+42}`$, the fractional depth of absorption varies over $`0.35_{+0.02}^{-0.05}`$, again consistent with the observed ratio of $`0.24\pm 0.17`$.
The model level of absorption can be deepened, but only slightly, from $`0.33`$ to $`0.30`$, by reducing the foreground starlight fraction below the best-fit value of $`0.21`$, and at the same time reducing the Ca II mass in order to maintain consistency with the depth of the Ca II H & K line profiles observed in the FOS spectrum.
While Fe II is the main contributor to the expected absorption in the F255W filter, Fe I also makes a significant contribution, and Mg I, Mg II and Mn I produce appreciable absorption along the red side of the filter. Because the Fe II absorption is heavily saturated, increasing the amount of Fe II has little effect on the fractional depth of absorption predicted by the model. The contribution of Fe I cannot be changed, since it is tied to the level of absorption observed with the FOS. Changing the abundance of Mg or Mn on the other hand does have some effect. For example, increasing the abundance of Mg by a factor of 5, which is plausible, deepens the fractional depth of absorption to $`0.29`$; increasing Mg to the point where the Mg lines are fully saturated deepens the fractional depth to $`0.26`$.
We therefore conclude that the model predicts a fractional depth of absorption in the rather narrow range $`0.33\pm 0.04`$. While the observed level of UV absorption, $`0.24\pm 0.17`$, is consistent with expectation, it does not strongly constrain the amount of Fe II in the ejecta of SN 1885.
## 4. Summary
Fe II imaging of the remnant of SN 1885 using the F255W filter on the WFPC2 reveals a dark spot of absorption, with position and diameter in accord with those measured from the higher quality WFPC2 Ca II absorption image from Paper 1.
The measured ratio of flux inside to outside SNR 1885 in the Fe II image is $`0.24\pm 0.17`$, in good agreement with the ratio $`0.33\pm 0.04`$ expected on the basis of a model fit to the near-UV FOS spectrum reported in Paper 1.
The observed depth of the Fe II absorption in SNR 1885 is consistent with Fe II being fully saturated, so that the present data constrain the mass of iron in the supernova ejecta only weakly. In particular, the iron mass is consistent with the range $`M_{\mathrm{Fe}}=0.1`$–$`1.0\mathrm{M}_{\odot }`$ inferred in Paper 1. Figure 2 of the present paper illustrates a model UV spectrum with $`M_{\mathrm{Fe}}=0.20\mathrm{M}_{\odot }`$. The Figure indicates that, besides iron, ion species Mg I, Mg II, and Mn I probably make some contribution to the absorption in the F255W image.
Finally, the observed depth of Fe II absorption is consistent with the theoretical expectation that the remnant of SN 1885 should have a rich UV spectrum of broad absorption lines. Unfortunately, the faintness of the bulge of M31 in the UV means that the signal-to-noise ratio currently attainable with STIS on HST is marginal. If a UV spectrum with adequate S/N ratio could be obtained, then it should be possible to constrain the mass and velocity distribution of Mg, Si, Ca, V, Cr, Mn, Fe, Co, Ni, Cu, and Zn in the ejecta of SN 1885. From such observations it would be possible to learn a great deal not only about SN 1885 itself, but also about the increasingly important class of subluminous SN Ia in general (Filippenko (1997)).
We thank R. McCray for helpful conversations, and K. McLin for help with data reduction. RAF is grateful for support from a JILA Visiting Fellowship. Support for this work was provided by NASA through grant number GO-6434 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
# Turbulent Bipolar Outflows in Young Stellar Objects: Multifractal Universality Classes and Generalized Scaling
## 1 Introduction
The presence of bipolar outflows in the vicinity of young stellar objects (YSOs) is a generic feature of early stellar evolution. The outflows consist mostly of cold molecular gas, ionized material, and dust grains ejected with velocities in the range 100–300 km/s. A second characteristic of the circumstellar environment is a Keplerian accretion disk surrounding the protostar; the existence of these optically thick disks has been suggested by various observations, such as the high degree of linear polarization shown to be the result of multiple scattering by material in the disk (Bastien & Ménard 1990), and the absence of a red-shifted component in forbidden-line emission (Appenzeller et al. 1984; Edwards et al. 1987).
The observation that the thrust of the outflowing material cannot be radiatively driven (Bally & Lada 1983; Cabrit 1989) suggests that the flows are hydromagnetic in nature. Moreover, experimental evidence of correlations between the physical properties of the disk and those of the ejected material hints that the accretion disk is a necessary ingredient in the driving mechanism. For example, continuum observations have indicated that YSOs with outflows of larger spatial extent correspond to stronger millimeter fluxes, which in turn can be related to the presence of a more massive disk (Cabrit & André 1991; Bontemps et al. 1996). Furthermore, there are correlations between the forbidden-line emission profiles generated in the outflows and the near-infrared excess believed to be related to the accretion flow evolution (Gomez de Castro & Pudritz 1992; Edwards et al. 1987).
Current models for the origin of the outflows involve magnetohydrodynamic (MHD) winds driven by the accretion disk, and can be divided into two categories. The first class of models postulates that the winds are driven by a magnetic field threading through the disk (Pelletier & Pudritz 1992; Ferreira & Pelletier 1993a,b, 1995): particles rotating in the vicinity of the disk are subjected to a centrifugal force whose component perpendicular to the magnetic field lines may be neglected (assuming the magnetic field to be sufficiently strong); this results in a (centrifugal) acceleration along the field lines, thus generating the outflows. A second popular interpretation is the so-called X-celerator model (Shu et al. 1988, 1994a,b); in the region of the stellar equatorial plane where the radial acceleration is approximately null (due to counterbalance of the gravitational and centrifugal forces), the strong magnetic field of the YSO takes over the accretion mechanism, thus generating outflows. For brevity, we shall refer to these two categories of models as the PP and Shu models, respectively.
It is clear that further experimental data and analysis methods are needed to clarify the situation. In this spirit, we believe that a study of the scaling properties of the outflows could provide complementary information on the dynamics involved. The search for scaling in the outflows is motivated by the observation that over all scales, the equations of MHD for a non-dissipative medium present no characteristic length (e.g., Carbone 1993); in addition, since the law of gravity is a scale invariant (power law) form, the gravitational field of the disk-protostar system will not introduce a characteristic scale, and hence a priori it is possible to obtain scale invariance over all scales ranging from the macroscopic size of the lobes down to the much smaller dissipation scale (i.e., the inner scale of the inertial range). A fundamental observation in the analysis of three-dimensional MHD turbulence is the existence of three physical quantities, namely the total energy (i.e. the sum of the magnetic and the kinetic energies), the cross-helicity, and the magnetic helicity (the cross-helicity and magnetic helicity densities are defined by $`\stackrel{}{v}\cdot \stackrel{}{B}`$ and $`\stackrel{}{A}\cdot \stackrel{}{B}`$, respectively, where $`\stackrel{}{v}`$ is the velocity field, $`\stackrel{}{B}`$ is the magnetic field, and $`\stackrel{}{A}`$ is the vector potential), which are conserved by the non-linear terms in the ideal equations (i.e., without forcing or dissipation). If the disk-protostar system is considered as injecting the corresponding fluxes at large scales, these fluxes subsequently propagate (or cascade) towards smaller scales until the scale where dissipation becomes important is reached. Furthermore, such propagation mechanisms are more efficient between scales of similar magnitude (usual MHD turbulence is correspondingly “local” in Fourier space; e.g., Biskamp 1993; Carbone et al. 1996), and over the inertial range, the (statistical) laws governing the propagation of flux are scale invariant. The combination of conserved fluxes and local interactions (in scale) is the basis for phenomenological cascade models of turbulence. In the case of YSOs, since MHD turbulence implies the existence of three conserved fluxes in the inertial range, the dynamics of the outflows can approximately be described by three non-linearly coupled cascade processes (see Schertzer & Lovejoy 1995; Schertzer et al. 1997b), and, in the approximation that dust grains constitute a passive scalar quantity (i.e., they are advected by the velocity field without disturbing it), there will be an additional coupled cascade corresponding to the conserved flux of passive scalar variance (Obukhov 1949; Corrsin 1951).
While the idea of cascades in hydrodynamic turbulence was introduced by Richardson (1922), the first explicit cascade models were not developed until the 1960’s (Novikov & Stewart 1964; Yaglom 1966; Mandelbrot 1974), and have become since then the basic tools for studying turbulent intermittency. Developments in the following two decades led to the current understanding that cascade models generically produce multifractals (Schertzer & Lovejoy 1985, 1987b), hence establishing their relevance in analyzing and modeling scale invariant multifractal fields. Indeed, multifractals have already been applied to various astrophysical problems such as the large-scale structure of the universe (e.g., Wiedenmann et al. 1990; Coleman & Pietronero 1992; Borgani et al. 1993; Martinez & Coles 1994; Garrido et al. 1996; Sylos Labini & Montuori 1998; Sylos Labini et al. 1998; Lovejoy, Garrido & Schertzer 1999), Ly$`\alpha `$ clouds (Carbone & Savaglio 1996), the cosmic microwave background radiation (Pompilio et al. 1995), photospheric magnetic fields (Cadavid et al. 1994), photometric data of NGC 4151 (Longo et al. 1996), and the solar wind (Carbone 1993, Politano & Pouquet 1995).
A basic difficulty in multifractal analysis and modeling is that at a purely general level, multifractals implicitly involve an infinite number of parameters (e.g., the codimension function), and hence would be unmanageable if no further simplification could be made. Fortunately, there exist multifractal universality classes, which are stable attractors of multiplicative cascades (Schertzer & Lovejoy 1987b, 1989a,b, 1991, 1997a). Universality is of practical importance, since it reduces the number of parameters required for the description of the scaling function of multifractal fields to only three.
Although the cascades and corresponding multifractals usually discussed in the literature involve isotropic scaling, physical systems generally exhibit preferred spatial directions (for example, almost all scaling geophysical systems are strongly stratified due to gravity). The need to handle scaling freed from the constraints of isotropy led to the development of the formalism of Generalized Scale Invariance (hereafter GSI, Schertzer & Lovejoy 1985, 1987a,b, 1989a,b, 1991; Lovejoy & Schertzer 1985) which is the most general framework for describing anisotropic scaling. In the case of bipolar nebulae, the application of GSI to the study of their scaling properties is motivated by the observation that the outflowing material is not isotropically ejected from the disk-protostar system, and the anticipation that the direction and strength of the anisotropy may vary with scale.
In this paper, we present a multifractal analysis of near-infrared light scattered from dust grains in bipolar outflows, using images of V380 Orionis, V645 Cygni (GL 2789), LkH$`\alpha `$ 101/NGC 1579, LkH$`\alpha `$ 233, PV Cephei (for a discussion of their physical parameters, see Bastien & Ménard 1990), V633 Cassiopeiae (e.g., Asselin et al. 1996), and GGD 18 (Gyulbudaghian et al. 1978). An immediate issue in interpreting the scattered light field is its relationship, through radiative transfer, to the density field of the lobes; indeed, this basic remote sensing problem of radiative transfer through multifractal clouds constitutes an important application of multifractals (e.g., Lovejoy et al. 1995; Lovejoy & Schertzer 1995). Since the radiative transfer equation has no characteristic scale, if the spatial distribution of scatterers is scale invariant, so will be the related radiation field. It is therefore reasonable to infer that if the scattered light field displays scale invariance over a wide range of scales, then the underlying matter distribution will also possess scale invariant statistics. We shall come back to previous attempts to quantify the correlations between the respective multifractal parameters of the scattered light and matter fields.
Information concerning the acquisition of CCD images is provided in section 2. In section 3, the basic concepts of universality are presented, while section 4 summarizes the results of the universal parameters for the seven objects in the ensemble. An overview of the GSI formalism is the subject of section 5, while the measured anisotropy parameters of GGD 18 are presented in section 6. Finally, we discuss in section 7 the potential implications of similar analyses on current and future models of star formation.
## 2 Observations
Images of V380 Orionis, V645 Cyg (GL 2789), LkH$`\alpha `$ 101/NGC 1579, LkH$`\alpha `$ 233, PV Cep, and V633 Cas were obtained at the f/15 focus of the 1.6m Ritchey-Chrétien telescope at the Observatoire Astronomique du Mont Mégantic (hereafter OMM). The first five objects were observed on 1997 October 23 in the I bandpass using a 2048x2048 pixel², 16-bit Loral CCD camera with a scale of 0.13” per pixel; with this optical configuration and at the time of observation, the seeing was of the order of 1.5”. On the other hand, V633 Cas was observed on 1989 September 25 in the I bandpass using a 512x320 pixel² RCA chip with 0.48” per pixel. The seeing for these images was of the order of 1.3”. Images of GGD 18 were obtained at the f/8 focus of the 3.6m Canada-France-Hawaii Telescope (hereafter CFHT) with the RCA2 1024x640 pixel² CCD for a scale of 0.108” per pixel, during the period 1987 December 23-28; the seeing for these images was 0.5”. All images were processed to correct for cosmetic defects of the CCDs, for reading noise, and for cosmic rays. The resolution of images of GGD 18 was increased by deconvoluting the images with the method of maximal entropy (Gravel 1990), resulting in an effective seeing of 0.39”.
As YSOs generically occupied only a fraction of the region of observation, it was necessary to cut the images into sections for the purpose of our analysis. Being physically distinct from the dust shells, background stars as well as the protostar were always excluded from these sub-images since their presence would bias the analysis of scattered light. Sections were also chosen to be far enough from the protostar to avoid problems of large intensity gradients associated with the protostar, such that the cloud–radiation physics (and hence statistics) for a given sub-image could be considered approximately invariant under translations. Finally, regions where the intensity was not significantly greater than the background noise were also excluded. A contour plot of a sample CCD image acquired at the OMM, namely PV Cep, is shown in Figure 1, where the box delimits the section used for the analysis.
Figure 2 presents a contour plot of the region of GGD 18 analyzed, divided into sub-regions that will be used for the subsequent multifractal analysis (sections 4 and 6).
The image of GGD 18 was the only one in the ensemble with sufficient intensity and spatial resolution to allow a GSI analysis (see section 6), and it is thus worthwhile to describe some of its known features. This object is located at 30.25” NW of the binary GL 961 (Cohen 1973), and was first identified as a possible candidate for a HH object (Gyulbudaghian et al. 1978). A few years later, Lenzen et al. (1984) discovered an IR source embedded in the nebulous region of GGD 18 (it corresponds to the maximum of intensity of Figure 2, identified by a small cross), but did not speculate on its nature. Polarization maps from deconvoluted images revealed (Gravel 1990) a centrosymmetric pattern centered on the GGD 18 source showing that it was a YSO, along with a region of aligned vectors near the source indicating the presence of an optically thick disk (the annular structure near the source and along the NE and SW directions is believed to correspond to a fraction of the disk that is visible); the inclination of the latter with respect to the line of sight was estimated at $`30^o`$ (the northern lobe points toward the observer), and the symmetry axis of the nebula points at $`30^o`$ NW (Gravel 1990). Finally, spiral-like polarization patterns were discovered in the vicinity of the source, indicating a complex distribution of matter that may be the result of rotation in the material near the source, combined with the effects of possible local inhomogeneities in the disk. An unresolved issue is the extent to which the dynamics of the outflows are influenced by the proximity of the binary GL 961; evidence that the CO bipolar jet of GL 961 reaches and influences the outflow dynamics of GGD 18 was provided by Gravel (1990), but a quantitative study of such interactions is still needed. To summarize his argument, we notice that both lobes are more extended in the western direction, and the apparent contraction on the eastern side is believed to result from interactions with the jet; the latter appears as a faint island (approximately 12” in length) located in (1,1), and points in the NW direction. It is more visible in (1,2) and enters both (1,3) and (2,3), where it turns around towards the SW direction; such deviations could be the result of interactions with nebulous structures to the north of GGD 18. The local maximum in section (0,0) corresponds to the western component of the binary GL 961; since GL 961 is an intense source of unscattered light, the region (0,0) will be excluded to allow an unbiased analysis of scattered light in the outflows. For similar reasons, section (2,0) will also be excluded from the analysis.
## 3 Review of Multifractal Processes
### 3.1 Universal Multifractals
It is well known that the Navier-Stokes (NS) equations are invariant under the rescaling $`x\to x\lambda ^{-1}`$, $`v\to v\lambda ^{-H}`$, and $`t\to t\lambda ^{H-1}`$. Assuming that $`ϵ`$, the energy flux to smaller scales, is a scale invariant quantity, it is found that $`H=1/3`$, and dimensional analysis leads to the famous scaling law $`E(k)\propto k^{-5/3}`$ for the energy density in momentum space (Kolmogorov 1941; hereafter K41). While the equations of MHD satisfy similar scaling relations, there is no consensus on what the analogue of K41 should be (the corresponding dimensional analysis no longer gives a unique dimensional combination, hence unique exponent). In terms of Elsasser variables, eddies fall into two classes depending on their direction of propagation along the magnetic field lines: interactions between eddies belonging to different classes are less likely, thus weakening the energy transfer (Iroshnikov 1963; Kraichnan 1965; hereafter IK). Consequently, the characteristic interaction time $`\tau _{eddy}`$ (i.e. the eddy turnover time) is increased by a factor $`(\tau _{eddy}/\tau _A)^a`$, where $`\tau _A`$ is the characteristic time for Alfvén waves, and $`a`$ is some positive constant (Politano & Pouquet 1995). The scaling relation becomes:
$$E(k)\propto k^{-\left(1+\frac{2}{a+3}\right)},$$
(1)
with $`a=1`$ corresponding to the IK theory.
It is found experimentally that in hydrodynamical media K41 is generally not respected for individual realizations – even in estimates of the ensemble average, the exponent differs from 5/3; the discrepancy can be attributed to intermittency (fluctuations in $`ϵ`$ due to small-scale non-linear structures). As discussed in the introduction, multiplicative cascades model the propagation of conserved fluxes in turbulent intermittent media, and in general present the expected characteristics of a fully developed turbulent field. The general outcome of such cascades is a multifractal field which is scale invariant over the inertial range, and whose flux density at resolution $`\lambda `$, denoted by $`ϵ_\lambda `$, is described by (Schertzer & Lovejoy 1987b):
$$\langle ϵ_\lambda ^q\rangle =\lambda ^{K(q)},$$
(2)
where the brackets indicate an average over many realizations, and $`K(q)`$ is the moment scaling function. The arbitrariness of $`H`$ in the rescaling of the NS or MHD equations allows the possibility of scaling of different moments of the intensity spectrum, and this feature is what equation 2 describes.
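In practice, $`K(q)`$ is estimated by degrading the field to a sequence of coarser resolutions and regressing the logarithm of the spatial moments against $`\mathrm{log}\lambda `$. A minimal sketch of this standard trace-moment estimator (Python; it assumes a square image whose side is a power of two, with the pixel scale taken as the inner scale, and is not the actual reduction pipeline used here):

```python
import numpy as np

def estimate_K(field, qs):
    """Estimate K(q) from <eps_lambda^q> ~ lambda^K(q) by 2x2 block averaging."""
    eps = field / field.mean()                 # normalize the flux: <eps> = 1
    loglam, logmom = [], []
    while eps.shape[0] >= 1:
        lam = eps.shape[0]                     # scale ratio at this resolution
        loglam.append(np.log(lam))
        logmom.append([np.log(np.mean(eps ** q)) for q in qs])
        if lam == 1:
            break
        # one resolution step down: average 2x2 blocks
        eps = eps.reshape(lam // 2, 2, lam // 2, 2).mean(axis=(1, 3))
    logmom = np.array(logmom)
    # slope of log<eps^q> against log(lambda) is K(q)
    return np.array([np.polyfit(loglam, logmom[:, j], 1)[0]
                     for j in range(len(qs))])
```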
While in general $`K(q)`$ need only be convex, cascades possess stable, attractive universality classes (Schertzer & Lovejoy 1987b, 1997) whose description requires only three parameters, namely $`\alpha `$, $`C_1`$ and $`H`$, with the corresponding moment scaling function determined by the universality relation:
$$K(q)-qH=\frac{C_1}{\alpha -1}(q^\alpha -q).$$
(3)
The significance of each of the three universality parameters on the multifractal field can be described as follows:
* $`C_1`$ corresponds to the codimension of the mean field, and thus distinguishes between a field whose mean is dominated by a few localized intense peaks (large $`C_1`$), and one with a mean dominated by a larger proportion of its surface (small $`C_1`$; for a non-fractal field such as white noise, $`C_1=0`$);
* $`H`$ is a measure of the degree of (scale by scale) non-conservation of the field, or qualitatively a measure of its smoothness (with large values of $`H`$ corresponding to smoother fields, see eqs. 4 and 9). For example, in usual hydrodynamic turbulence the energy flux to smaller scales is conserved ($`H`$=0) whereas the velocity shears have the Kolmogorov value $`H=1/3`$;
* $`\alpha `$ is the degree of multifractality, i.e., a measure of the deviation from the monofractal case. As $`\alpha `$ is the Lévy index of the multifractal generator, we have the restriction $`0\le \alpha \le 2`$, with $`\alpha =0`$ and $`\alpha =2`$ corresponding to monofractal ($`\beta `$ model) and log-normal models, respectively (note that the frequently used expressions “log-Lévy” and “log-normal” are rather misleading because of the divergence of high order statistical moments; the statistics will only be approximately “log-normal” and “log-Lévy” up to the given critical order of divergence of the moments).
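For reference, the universal scaling function of equation 3, written here for a conserved flux ($`H=0`$), takes only a few lines; the parameter values in the example are those found for a LkH$`\alpha `$ 233 sub-image in section 4:

```python
import numpy as np

def K_universal(q, alpha, C1):
    """Universal moment scaling function, eq. 3, for a conserved flux (H = 0)."""
    q = np.asarray(q, dtype=float)
    if np.isclose(alpha, 1.0):
        return C1 * q * np.log(q)       # the alpha -> 1 limit of eq. 3
    return C1 / (alpha - 1.0) * (q ** alpha - q)

print(K_universal([0.5, 1.0, 2.0], alpha=1.93, C1=0.023))   # K(1) = 0 always
```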
### 3.2 Analysis of Physical Fields
A preliminary verification of the existence of scale invariance is that the spectral energy density satisfies a general (isotropic) scaling law:
$$E(k)\propto k^{-\beta },$$
(4)
where $`\beta `$ is the scaling exponent, or spectral slope.
After the existence of a scaling regime is established, one can proceed to compute the scaling function $`K(q)`$ and test for universal multifractal behavior. An efficient technique for that purpose is the Double Trace Moment (DTM) method (Lavallée 1991; Lavallée et al. 1991, 1993). A new function $`K(q,\eta )`$ is first defined similarly to $`K(q)`$ in equation 2, but for a field $`ϵ_\mathrm{\Lambda }`$ at its maximal resolution $`\mathrm{\Lambda }`$, raised to the $`\eta `$ power (i.e., $`ϵ_\mathrm{\Lambda }\to ϵ_\mathrm{\Lambda }^\eta `$), and renormalized by spatial averaging. Hence, writing $`\langle (ϵ_\mathrm{\Lambda }^\eta )_\lambda ^q\rangle `$ to indicate the $`\lambda `$-resolution $`q^{\mathrm{th}}`$ moment of $`ϵ_\mathrm{\Lambda }^\eta `$, we obtain the following generalization of equation 2:
$$\langle (ϵ_\mathrm{\Lambda }^\eta )_\lambda ^q\rangle =\lambda ^{K(q,\eta )}.$$
(5)
While it can be shown (Lavallée 1991) that $`K(q,\eta )`$ and $`K(q)`$ are related by
$$K(q,\eta )=K(q\eta )-qK(\eta ),$$
(6)
the advantage of the DTM technique for testing and characterizing universality compared with other methods (e.g., Schmitt et al. 1995) is realized when $`K(q)`$ is universal (eq. 3), in which case the $`\eta `$-dependence factorizes:
$$K(q,\eta )=\eta ^\alpha K(q).$$
(7)
The DTM method allows the computation of $`K(q,\eta )`$, and assuming universality, the Lévy index $`\alpha `$ can be deduced from equation 7. The remaining parameters $`C_1`$ and $`H`$ are also determined from a knowledge of $`K(q,\eta )`$. Explicitly, one finds:
$$C_1=(\alpha -1)\frac{K(q,1)}{q^\alpha -q},$$
(8)
and
$$H=\frac{\beta -1}{2}+C_1\frac{2^{\alpha -1}-1}{\alpha -1}.$$
(9)
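A condensed sketch of the corresponding estimator, built on equations 5–9 and reusing the `estimate_K` routine sketched in section 3.1 (the spectral slope $`\beta `$ is assumed to have been measured beforehand; this is an illustration of the method, not the actual code used for Table 2):

```python
import numpy as np

def dtm_parameters(field, beta, q=0.5, etas=(0.1, 0.2, 0.4, 0.8)):
    """Estimate (alpha, C1, H) with the Double Trace Moment method."""
    K_vals = []
    for eta in etas:
        eps_eta = field ** eta
        eps_eta = eps_eta / eps_eta.mean()           # renormalize by spatial averaging
        K_vals.append(estimate_K(eps_eta, [q])[0])   # K(q, eta), eq. 5
    # eq. 7: K(q, eta) = eta^alpha K(q), so log|K| is linear in log(eta)
    alpha, intercept = np.polyfit(np.log(etas), np.log(np.abs(K_vals)), 1)
    K_q1 = np.sign(K_vals[-1]) * np.exp(intercept)   # K(q, 1), sign restored
    C1 = (alpha - 1.0) * K_q1 / (q ** alpha - q)                            # eq. 8
    H = (beta - 1.0) / 2.0 + C1 * (2.0 ** (alpha - 1.0) - 1.0) / (alpha - 1.0)  # eq. 9
    return alpha, C1, H
```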
We conclude this section with a few comments concerning the range of validity of the above equations when analyzing images of physical fields. In practice, the finite size of the sample implies that sufficiently high order moments are dominated by the largest value assumed by the field, and therefore underestimate the true ensemble moments. Beyond some threshold $`q_s`$, which for a single realization is given by
$$q_s=\left(\frac{d}{C_1}\right)^{1/\alpha },$$
(10)
equation 3 is no longer expected to hold true, and $`K(q)`$ becomes linear. Here, $`d`$ is the dimension of the space over which the analysis is made ($`d=2`$ for the images discussed here). One therefore encounters a “multifractal phase transition” (Schertzer et al. 1992).
## 4 Double Trace Moments Results
As discussed in section 3, scale invariance is a necessary condition for a physical field to be multifractal, and its existence should be verified before computing the universal parameters. A first analysis is to consider the isotropic power spectrum since the spectral energy density $`E(k)`$ of an isotropic MHD turbulent medium is expected to obey a scaling law with exponent $`\beta `$ (see eq. 4). Examples of the power spectra obtained are shown in Figure 3a) and b) for a sub-region of GGD 18 and LkH$`\alpha `$ 101/NGC 1579, respectively, where $`E(k)`$ has arbitrary units.
In each case, we note a break in scaling occurring at small scales (i.e., large wavenumber), followed by a regime of constant or increasing energy density. We shall argue that the scale at which scaling breaks corresponds to the resolution at which structures of the intensity field become dominated by noise. This is easily seen in the case of GGD 18, where the break occurs at a resolution of approximately 0.3”, while the seeing at the time of data acquisition was estimated at 0.4”. On the other hand, the high frequency behavior of $`E(k)`$ for the spectrum of LkH$`\alpha `$ 101 is linear ($`\beta =-1`$) in wavenumber, as expected for Gaussian white noise in 2D space (in the case of LkH$`\alpha `$ 101, the fact that the break is not a manifestation of the seeing results from the limited exposures compared to those of GGD 18). These two sample power spectra illustrate a feature common to all power spectra in the ensemble, namely isotropic scaling down to the scale where noise dominates the statistics (the scaling anisotropy – see sections 5 and 6 – is removed by the angular integration). Least-squares fits over the respective linear regions of Figure 3a) and b) yield $`\beta `$=2.4$`\pm `$0.3 and 2.0$`\pm `$0.2, respectively, where the uncertainties are estimated from the fitting procedure.
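The isotropic spectrum itself is obtained by integrating the 2-D spectral density over annuli in Fourier space; a standard sketch (assumes a square image and unit-width integer wavenumber bins):

```python
import numpy as np

def isotropic_spectrum(image):
    """Angle-integrated power spectrum E(k) of a 2-D field."""
    n = image.shape[0]
    P = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2   # 2-D spectral density
    ky, kx = np.indices(P.shape) - n // 2
    kbin = np.hypot(kx, ky).astype(int)                    # integer |k| bins
    E = np.bincount(kbin.ravel(), weights=P.ravel())       # sum P over each annulus
    k = np.arange(1, n // 2)
    return k, E[1 : n // 2]

# The spectral slope is then the (negative) least-squares slope over the
# scaling range, i.e. before the noise-dominated break:
#   beta = -np.polyfit(np.log(k_fit), np.log(E_fit), 1)[0]
```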
With the existence of scaling established, a DTM analysis was performed on each sub-image in the ensemble using four values of $`q`$, namely $`q`$=0.5, 0.6, 0.75, and 0.9. As an example of the DTM results, Figure 4 presents $`K(\eta ,q)`$ with $`q=0.5`$ for a sub-image of LkH$`\alpha `$ 233, from which we note a power-law dependence over the range $`0.06\le \eta \le 1`$, in agreement with equation 7.
The slope of the linear region yields $`\alpha =1.93`$, while the intercept $`K(q,1)`$ gives $`C_1=0.023`$ (eq. 8). Finally, there is a departure from the power-law dependence in the range $`4\le \eta \le 10`$, beyond which $`K(\eta ,q)`$ becomes independent of $`\eta `$. As explained in section 3, equation 7 breaks down beyond max$`(q\eta ,\eta )\approx q_s`$ due to the finite size of analyzed samples. To confirm that this is indeed the cause for the multifractal phase transition, the substitution of the above values of $`\alpha `$ and $`C_1`$ in equation 10 gives $`q_s\approx 10`$; on the other hand, $`K(\eta ,q)`$ becomes horizontal at $`\eta \approx 10`$, which implies that max$`(q\eta ,\eta )=10`$ (recall that $`q=0.5`$), as expected.
Variations of the universality parameters over different sub-regions of a given nebula were in general observed to be within the known uncertainties, and this was explicitly verified in the case of GGD 18. Consequently, it was reasonable to average the universal parameters over all sub-regions for each object, and the results are summarized in Table 2.
Note that the accuracy on the parameters of individual objects can be roughly estimated from the accuracy of the DTM method (measured from numerical simulations in Lavallée et al. 1991) to be $`\pm 0.1`$ for $`\alpha `$, $`\pm 0.05`$ for $`C_1`$, and $`\pm 0.2`$ for $`H`$. An immediate observation from Table 2 is that $`\alpha `$ is very close to 2 for every object in the ensemble, a value which corresponds to the highest degree of multifractality. A second observation is the uniformity of the parameters over the ensemble, allowing one to compute ensemble averages listed in Table 2, where the uncertainties quoted for the ensemble averages correspond to one standard deviation from the mean. While these parameters describe the statistics of the field of scattered light, it is important to consider their relationship with those of the underlying field of matter; simulations (Naud et al. 1996) and simple theoretical arguments (Schertzer et al. 1997) have suggested that only $`H`$ is significantly affected by the scattering process, with the radiative value observed to be larger (corresponding to a smoother texture).
Recall from section 3 that equation 7 follows from equation 6 provided that the scaling function $`K(q)`$ is universal. To test the validity of this assumption, one can compute $`K(q)`$ directly and compare the result with the prediction of the universality relation (eq. 3). Such a comparison is illustrated in Figure 5 for a sub-region of GGD 18.
According to Figure 5, we note that the two curves agree up to $`q\approx 5`$, beyond which $`K(q)`$ becomes linear; from equation 10 we find $`q_s\approx 6`$, indicating that the linear dependence is another manifestation of the finiteness of the sample. Similar agreement was observed for the other images in the ensemble.
## 5 Generalized Scale Invariance
Our discussion has so far been constrained to the case of isotropic, or self-similar, scale invariance, for which structures at different scales are related by isotropic magnifications. However, more general anisotropic scaling “zooms” are possible and may arise as a result of forces inducing stratification and differential rotation in the dynamics, for instance. Rather than imposing, a priori, an isotropic notion of scale, the latter may be determined by the nonlinear dynamics; in this section, we attempt to empirically characterize this anisotropic scaling.
The need for a more general framework led to the Generalized Scale Invariance (GSI) formalism, in which three ingredients are necessary for the description of scaling: (i) a unit ball $`B_1`$ consisting of all vectors of unit length, which can implicitly be defined by:
$$B_1\equiv \{\stackrel{}{x}:g_1(\stackrel{}{x})\le 1\},$$
(11)
where $`g_1`$ determines the notion of unit length; (ii) a scale changing operator $`T_\lambda `$ mapping a vector between two scales of ratio $`\lambda `$; once a unit ball is specified, all other scales can be identified by repeated applications of $`T_\lambda `$, thus generating a family of balls, $`B_\lambda `$; (iii) a definition of a measure of the $`B_\lambda `$, such as the volume or a power of the volume.
It follows from the definition of $`T_\lambda `$ that these operators form a one-parameter multiplicative group, with corresponding generator $`G`$ given by:
$$T_\lambda =\lambda ^G.$$
(12)
While equation 12 allows $`G`$ to be non-linear (Schertzer & Lovejoy 1985), in order to simplify the computations we shall make a linear approximation by assuming that $`G`$ and $`T_\lambda `$ are real matrices with constant coefficients, which is equivalent to assuming that the anisotropy of the scales is translationally invariant within the image analyzed. Since GSI analyses are conventionally performed in Fourier space, the operator of interest is the Fourier analogue of $`G`$, which shall also be denoted $`G`$ by an abuse of notation (in the case of linear GSI, these matrices are transpose of each other). Expanding the matrix generator in terms of pseudo-quaternions, we write:
$$G=1+f\sigma _x-ie\sigma _y+c\sigma _z=\left(\begin{array}{cc}1+c& f-e\\ f+e& 1-c\end{array}\right),$$
(13)
where $`1`$ denotes the unit matrix, $`\sigma _x`$, $`\sigma _y`$, and $`\sigma _z`$ are the $`SU(2)`$ generators in the Pauli representation, and $`c,e,f`$ are real (note that since the coefficient of the identity matrix would correspond to an isotropic magnification, it has been set equal to unity without loss of generality). This choice of basis is particularly convenient for GSI analysis as it decouples rotation and stratification. It follows from equations 12 and 13 that, within the approximation of linear GSI, scale transformations are completely determined by the specification of $`c`$, $`e`$, and $`f`$.
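Within linear GSI, applying $`T_\lambda `$ is just a matrix power, conveniently computed as a matrix exponential; a minimal sketch (the parameter values are illustrative only):

```python
import numpy as np
from scipy.linalg import expm

def gsi_generator(c, e, f):
    """Linear GSI generator of eq. 13."""
    return np.array([[1.0 + c, f - e],
                     [f + e, 1.0 - c]])

def T(lam, G):
    """Scale-changing operator T_lambda = lambda^G = expm(ln(lambda) G), eq. 12."""
    return expm(np.log(lam) * G)

G = gsi_generator(c=0.2, e=0.1, f=0.05)           # illustrative values
theta = np.linspace(0.0, 2.0 * np.pi, 200)
unit_ball = np.vstack([np.cos(theta), np.sin(theta)])
ball_2 = T(2.0, G) @ unit_ball                    # the scale-ratio-2 ball from B_1
```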
It is instructive at this point to discuss a few examples of $`G`$, and corresponding family of balls obtained by repeated applications of $`T_\lambda `$ on the unit ball. For the purpose of this discussion, we assume that $`G`$ is a 2x2 matrix, corresponding to the analysis of two-dimensional data with translationally invariant statistics. The simplest example is the case where $`G`$ is the identity matrix, corresponding to $`T_\lambda =\lambda ^1`$, or self-similar scaling. In addition, if the unit ball is chosen to be the unit circle, the family of balls generated by $`T_\lambda `$ are circles as well. More generally, the scaling may be different in two or more preferred directions. For simplicity, suppose there are two such directions assumed to coincide with the x and y axes, in which case the scaling is said to be self-affine: the off-diagonal elements of $`G`$ remain null, but the diagonal entries may be different from unity. If the unit ball is taken to be the unit circle, the corresponding balls are ellipses with principal axes pointing in the x and y directions. Furthermore, as illustrated in Figure 6a), one finds that the balls are horizontally elongated for large wavenumbers ($`k>1`$) and vertically elongated for small ones ($`k<1`$).
An example of physical systems presenting approximately such scaling are vertical cross-sections of the atmosphere where self-affinity is caused by the (stratifying) gravitational field of the Earth (Pflug et al. 1991, 1993). Finally, a matrix generator with non-zero off-diagonal elements indicates differential rotation in the scaling (see Figure 6b)); in the atmosphere for instance, the observed differential rotation can be generated by the Coriolis force.
The numerical calculation of the anisotropy parameters $`c`$, $`e`$, and $`f`$ is performed using the Scale Invariant Generator (henceforth, SIG) technique (Lewis et al. 1999). Let us first define $`P(\stackrel{}{k})`$ to be the modulus squared of the Fourier transform of the field at wave vector $`\stackrel{}{k}`$. It follows from this definition that an angular integration of $`P(\stackrel{}{k})`$ yields $`E(k)`$, as defined in section 3. As a generalization of equation 4, we have the scaling relation:
$$\langle P(\stackrel{}{k})\rangle \propto \stackrel{}{k}^{-s},$$
(14)
where the norm is with respect to $`G`$, the brackets indicate an average over all realizations, and $`s`$ is a generalized scaling exponent given in (isotropic) 2-D by $`s=\beta +1`$. It follows that $`P(\stackrel{}{k})`$ is in fact a function of $`G`$, $`B_1`$, $`s`$, and $`\stackrel{}{k}`$, that is, $`P(\stackrel{}{k})\equiv P(c,e,f,B_1,s,\stackrel{}{k})`$. If the analyzed sample consists of $`N`$ data points denoted by $`\stackrel{}{k}_i`$, $`i=1,\mathrm{},N`$, we can define an error function $`E_r`$ by:
$$E_r^2(G)\equiv \frac{1}{N}\underset{i,j}{\overset{N}{\sum }}\left[\mathrm{ln}P\left(\lambda _i^G\stackrel{}{k}_j\right)+s\mathrm{ln}\lambda _i-\mathrm{ln}P(\stackrel{}{k}_j)\right]^2,$$
(15)
where the sum is over the data points ($`\stackrel{}{k}_j`$) and scale ratio ($`\lambda _i`$). The anisotropic parameters are then estimated by minimizing this error function.
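The estimation of $`(c,e,f)`$ thus reduces to a nonlinear least-squares problem; a schematic sketch (here `logP` stands for an interpolator of the measured log spectral density over the $`(k_x,k_y)`$ plane, which must be supplied, and `kvecs` is an $`N\times 2`$ array of sample wavevectors; both names are ours):

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def sig_error(params, s, kvecs, lambdas, logP):
    """Error function of eq. 15 for the linear generator of eq. 13."""
    c, e, f = params
    G = np.array([[1.0 + c, f - e], [f + e, 1.0 - c]])
    total = 0.0
    for lam in lambdas:
        mapped = kvecs @ expm(np.log(lam) * G).T   # lambda^G applied to each k
        resid = logP(mapped) + s * np.log(lam) - logP(kvecs)
        total += np.mean(resid ** 2)
    return total / len(lambdas)

# res = minimize(sig_error, x0=[0.0, 0.0, 0.0],
#                args=(s, kvecs, lambdas, logP), method="Nelder-Mead")
# c, e, f = res.x
```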
## 6 Scale Invariant Generator Results
The evaluation of the anisotropy exponents with the SIG technique requires data over a wide range of scales (sufficient resolution), and good signal-to-noise ratio, such that the (approximate) isotropic scaling is valid over many scales. As mentioned in section 2, the images obtained at the OMM did not show sufficiently good statistics for the type of analysis described in this section.
Figure 7 displays the results of the SIG analysis performed on the sub-images of GGD 18 (defined in fig. 2), where the quoted uncertainties on the parameters $`c`$, $`e`$, and $`f`$ were estimated from the numerical optimization of the error function $`E_r`$ defined in equation 15.
Since the error surface near its minimum was typically wider in the $`e`$ direction, this exponent is known with less accuracy than the other two. Note that boxes (2,0) and (0,0) were left empty in Figure 7 as they contain the source of GGD 18 and the western component of the binary GL 961. An immediate observation is the substantial variations of the parameters from one sub-region to the next; while the choice of linear GSI simplifies the numerical calculations, the fact that the generator $`G`$ is not constant over the region analyzed suggests that the framework of non-linear GSI may be more appropriate in this case. We shall nevertheless assume that linear GSI is a good approximation over each subimage.
The parameters $`e`$ and $`f`$, which determine the off-diagonal elements of $`G`$ (eq. 13), are non-zero for most sub-regions, hence providing evidence for the existence of differential rotation in the outflow dynamics. They are comparatively larger in sections (1,0), (2,1) and (3,1), which cover the portions of the southern and northern lobes that are the closest to the protostar–disk system. As discussed in section 2, boxes (1,2) and (2,3) contain parts of a jet of matter presumably emitted by GL 961 at some point in the past. The large values of the off-diagonal elements of $`G`$ in these regions suggest that there is some rotation induced in the interaction region of this jet with GGD 18 material. However, it is not clear at this moment where the rotation exactly occurs (e.g. in the jet, the perturbed material, or both) since the linear SIG technique does not resolve variations of the exponents within subimages.
It has been noted by Gravel (1990) that the eastern edge of the northern lobe of GGD 18 appears to be pushed westward, presumably by the jet of material of GL 961. It is interesting to note that the region (1,1), where the jet and the northern lobe presumably come closest, is the only one with a positive value of $`e`$.
Let us divide the image into two subregions (see Figure 7) with region I covering what is morphologically identifiable as the lobes of the nebula, and region II covering the rest of the observed field (with the exception of the emission features). The assignment of subimages to either region was performed using the following criteria: subregions belonging to region I (i) were close to the $`30^o`$ symmetry axis of the nebula, (ii) the corresponding light intensity was more important, and (iii) their intensity contours had conical shapes (indicative of ejection). With this classification, there appears to be a systematic difference in the mean value of $`c`$ between regions I and II: quantitatively, the difference in mean values, $`c_I-c_{II}`$, is found to lie 4.0 standard deviations away from the case $`c_I=c_{II}`$. Although a precise identification of the stratifying factors is beyond the scope of our phenomenological framework, the observed stratification appears primarily dynamical in nature since $`c`$ is not observed to decrease monotonically with distance from the protostar (as would be expected for gravitationally generated stratification), and since values of $`c`$ present clear differences within and outside the lobes of the nebula.
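The quoted 4.0 standard deviations is simply the difference of the two regional means in units of its propagated standard error; schematically (the per-box values of $`c`$ entering `c_I` and `c_II` are those read off Figure 7 and are not reproduced here):

```python
import numpy as np

def mean_separation(c_I, c_II):
    """(mean(c_I) - mean(c_II)) in units of the standard error of the difference."""
    c_I, c_II = np.asarray(c_I, float), np.asarray(c_II, float)
    diff = c_I.mean() - c_II.mean()
    se = np.hypot(c_I.std(ddof=1) / np.sqrt(c_I.size),
                  c_II.std(ddof=1) / np.sqrt(c_II.size))
    return diff / se
```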
## 7 Conclusion
The statistics of scattered light in a small ensemble of YSOs have been shown to obey different scaling relations depending on the moment $`q`$ (“multiscaling”), with the relation between scaling laws and statistical moments given by $`K(q)`$. It was also shown that these $`K(q)`$ functions fit reasonably well into a multifractal universality class, and that the corresponding universal parameters are fairly uniform over the ensemble (with $`\sigma _\alpha /\alpha \approx 1\%`$, $`\sigma _{C_1}/C_1\approx 47\%`$, and $`\sigma _H/H\approx 28\%`$). Although universality may be thought of as describing the attractors of multiplicative cascades, and consequently is not sufficiently sensitive to the initial conditions of the cascade to allow the observation and characterization of fine details in the dynamics of a given multifractal field, the reasonable uniformity of the results presented suggests that the seven objects in the ensemble have similar (presumably turbulent) dynamics, as expected on theoretical grounds.
Finally, although obvious differences (e.g. the existence of a magnetic field) exist between the dynamics of the radiative fields of YSOs and that of terrestrial water clouds, our results suggest that the corresponding statistics are similar. Indeed, the spectral slope and the universality parameters of atmospheric clouds ($`\beta \approx 2.2,\alpha =1.79,H=0.63`$ and $`C_1=0.061`$ – Sachs et al. 1999) are close to those of bipolar nebulae (see Table 2). Since empirically cloud liquid water behaves statistically approximately like a passive scalar (Lovejoy & Schertzer 1995), its statistical similarity with YSOs would make sense if dust grains in bipolar outflows constituted a passive scalar as well. Furthermore, the physics of radiative transfer is also similar since in each case it is dominated by scattering rather than absorption/emission processes.
All the objects in the ensemble exhibited reasonable isotropic scaling, which was systematically broken near the scale where noise becomes dominant in the measured signal. Only GGD 18 offered sufficient resolution to allow a GSI analysis of its statistics. Most of the sub-regions analyzed had matrix generators with non-zero off-diagonal elements, revealing the existence of differential rotation. The origin of the latter and its influence on mass ejection mechanisms should be accounted for in models of star formation; an obvious source of rotation in YSOs is the rotation of the disk-protostar system, and some of the properties of the coupling of this rotation to the ejected material might be studied by techniques similar to those used in this work. All sub-images presented a non-zero value of $`c`$, indicating a possible stratifying force in the outflow mechanism. It was argued in section 6 that such stratification would not be primarily gravitational in nature, but instead could result from dynamical pressure gradients related to physical forces, such as the centrifugal acceleration in the PP models, or the magnetic force in the Shu models. Finally, an important issue in the dynamics of GGD 18 is its interactions, if any, with the neighbouring YSO GL 961. Showing that such interactions are indeed involved would confirm that GGD 18 is located at approximately the same distance from the Earth as GL 961, namely 1.6 kpc. We found that the generator of (1,1) was the only one to have off-diagonal elements of opposite sign; as this box is believed to contain the region of closest approach between the northern lobe of GGD 18 and the jet from GL 961, the peculiarity of its anisotropy parameters could be a sign of interactions between the two YSOs.
It should be kept in mind that linear GSI is probably not accurate enough to probe fine details in the dynamics, and is in fact increasingly understood as measuring local multifractal textures (Pecknold et al. 1996, 1997). While developments in non-linear GSI or models involving non-scalar cascades, along with an increased ensemble of sufficient resolution, are probably necessary to obtain statistically robust statements concerning the outflow dynamics, it is hoped that our analysis has provided a foretaste of the vast possibilities of multiscaling analyses.
###### Acknowledgements.
P.B. thanks the director of the Canada-France-Hawaii telescope for a generous time allotment. S.L. was partly supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
## References
Appenzeller, I., Jankovics, I., & Ostreicher, R. 1984, A&A, 141, 108
Asselin, L., Ménard, F., Bastien, P., Monin, J.-L., Rouan, D. 1996, ApJ, 472, 349A
Bachiller, R. 1996, ARA&A, 34, 111
Bally, J., & Lada, C. J. 1983, ApJ, 265, 824
Bastien, P., & Ménard, F. 1990, ApJ, 364, 232
Beckwith, S. V. W., Sargent, A. I., Koresko, C. D., & Weintraub, D. A. 1989, ApJ, 343, 393
Biskamp, D. 1993, Nonlinear Magnetohydrodynamics (Cambridge: Cambridge University Press)
Blitz, L. & Thaddeus, P. 1980, ApJ, 241, 676
Bontemps, S., André, P., Terebey, S., & Cabrit, S. 1996, A&A, 311, 858
Borgani, S., Murante, G., Provenzale, A., Valdarnini, R. 1993, Phys. Rev. E, 47, 3879
Cabrit, S. 1989, in ESO Workshop on Low Mass Star Formation and Pre-Main Sequence Evolution, ed. B. Reipurth (Garching: ESO), p. 119
Cabrit, S., & André, P. 1991, ApJ, 379, L25
Cadavid, A. C., Lawrence, J. K., Ruzmaikin, A. A., Kayleng-Knight, A. 1994, ApJ, 429, 391
Carbone, V., 1993, Phys. Rev. Lett., 71, 1546
— & Savaglio, S. 1996, MNRAS, 282, 868
—, Veltri, P., Bruno, R. 1996, Nonlin. Proc. Geophys., 3, 247
Cohen, M. 1973, ApJ, 185, L75
Coleman, P. H., & Pietronero, L. 1992, Phys. Rep., 213, 311
Corrsin, S. 1951, J. Applied. Phys., 22, 469
Edwards, S., Cabrit, S., Strom, S. E., Heyer, I., Strom, K. M., & Anderson, E. 1987, ApJ, 321, 473
Ferreira, J. & Pelletier, G. 1993a, A&A, 276, 625
— & — 1993b, A&A, 276, 637
— & — 1995, A&A, 295, 807
Garrido, P., Lovejoy, S., Schertzer, D. 1996, Physics A, 225, 294
Gomez de Castro, A. I., & Pudritz, R. E. 1992, ApJ, 395, 501
Gravel, P. 1990, M.Sc. thesis, Université de Montréal, Montréal, Québec, Canada
Gyulbudaghian, A. L., Glushkov, Y. I., Demisyuk, E. K. 1978, ApJ, 224, L137
Halsey, T. C., Jensen, M. H., Kadanoff. L. P., Procaccia, I., Schraiman, B. 1986, Phys. Rev. A, 33, 1141
Iroshnikov, P. 1963, Astron. Zh., 40, 742
Kraichnan, R. H. 1965, Phys. Fluids, 8, 1385
Lada, C. J. 1985, ARA&A, 23, 267
—, Gauthier III, T. N. 1982, ApJ, 261, 161
Lavallée, D. 1991, Ph.D. thesis, McGill University, Montréal, Canada
—, Schertzer, D., Lovejoy, S. 1991, in Scaling, Fractals and Non-Linear Variability in Geophysics, eds. D. Schertzer and S. Lovejoy (Kluwer), p. 99
—, Lovejoy, S., Schertzer, D., Ladoy, P. 1993, in Fractals in Geography, eds. L. De Cola, N. Lam (Prentice Hall), p. 158
Lenzen, R., Hodapp, K. W., Reddman, T. 1984, A&A, 137, 365
Lewis, G., Lovejoy, S., Schertzer, D., Pecknold, S. 1999, Computers in Geophys (in press)
Longo, G., Vio, R., Paura, P., Provenzale, A., Rifatto, A. 1996, A&A, 312, 424
Lovejoy, S., & Schertzer, D. 1985, Wat. Resour. Res., 21, 1233
—, — 1990, Physics in Canada, 46, 4, 46
—, Watson, B., Schertzer, D., Brosamlen, G. 1995, in Particle Transport in Stochastic Media, ed. L. Briggs (Portland: American Nuclear Society), p. 750
—, & — 1995, in Fractals in Geoscience and Remote Sensing, ed. G. Wilkinson (Luxembourg: Office for Official Publications of the European Communities), p. 102
—, Schertzer, D., Tessier, Y., 1998, Int. Journal. of Remote Sensing (submitted)
—, Garrido, P., Schertzer, D. 1998, Physica A (submitted).
Mandelbrot, B. B. 1974, J. Fluid Mech., 62, 331
Martinez, V. J., Coles, P. 1994, ApJ, 437, 550
Naud, C., Schertzer, D., Lovejoy, S. 1996, in Stochastic Models in Geosystems, eds. S. A. Molchansov and W. A. Woyczynski (Springer-Verlag), p. 239
Novikov, E. A., & Stewart, R. 1964, Izv. Akad. Nauk. SSSR. Ser. Geofiz., 3, 408
Obukhov, A. 1949, Izv. Akad. Nauk. SSSR. Ser. Geogr. I Geofiz, 13, 55
Parisi, G., & Frisch, U. 1985, in Turbulence and Predictability in Geophysical Fluid Dynamics and Climate Dynamics, eds. M. Ghil, R. Benzi, and G. Parisi (North-Holland), p. 72
Pecknold, S., Lovejoy, S., Schertzer, D. 1996, in Stochastic Models in Geosystems, eds. S. A. Molchansov and W. A. Woyczynski (Springer-Verlag), p. 269, 85
Pecknold, S., Lovejoy, S., Schertzer, D., Hooge, C. 1997, in Scale in Geophysical Information Systems, eds. D. Quattrochi and M. F. Goodchild (Florida: CRC Press), p. 361
Pelletier, G., & Pudritz, R. E. 1992, ApJ, 394, 117
Pflug, K., Lovejoy, S., & Schertzer, D. 1991, in Nonlinear Dynamics of Structures, eds. R. Z. Sagdeev, U. Frisch, A. S. Moiseev, and A. Erokhim (World Scientific), p. 72
—, —, & — 1993, J.Atmos.Sci, 50, 538
Politano, H., & Pouquet, A. 1995, Phys. Rev. E, 52, 636
Pompilio, M. P., Bouchet, F. R., Murante, G., Provenzale, A. 1995, ApJ, 449, 1
Richardson, L. F. 1922, Weather prediction by numerical process (Cambridge University Press)
Sachs, Lovejoy, S., Schertzer, D. 1999 (in preparation for Fractals)
Schertzer, D., & Lovejoy, S. 1983, in Fourth Symposium on Turbulent Shear Flows, Karlsruhe, Germany
—, & — 1985, Phys. Chem. Hydrodyn., 6, 623
—, & — 1987a, Ann. Sci. Math. du Québec, 11, 139
—, & — 1987b, J.Geophys.Res.D., 92, 9693
—, & — 1989a, in Fractals: Physical Origin and Consequences, ed. L.Pietronero (Plenum), p. 49
—, & — 1989b, Pageoph, 130, 57
—, & — 1991, in Scaling, Fractals and Non-Linear Variability in Geophysics, eds. D. Schertzer and S. Lovejoy (Kluwer), p. 41
—, & — 1995, in Space/time Variability and Interdependence for various hydrological processes, ed. R. A. Feddes (Cambridge: Cambridge University Press), p. 153
—, —, Schmitt, F., Chigirinskaya, Y., Marsan, D. 1997, Fractals 5, 427
—, & — 1997a, J. Applied Meteorology, 36, 1296
—, & — 1997b, ARM Proceedings
Schertzer, D., Schmitt, F., Naud, C., Marsan, D., Chigirinskaya, Y., Margeurite, C., Lovejoy, S., 1997, 7th Atmos. Rad. Meas. Sci. Team Meeting, 327-335
Schmitt, F., Lavallée, D., Schertzer, D., Lovejoy, S. 1992, Phys. Rev. Lett., 68, 305
Schmitt, F., Lovejoy, S., Schertzer, D. 1995, Geophys. Res. Lett., 22, 1689
Shu, F. H., Lizano, S., Ruden, S. P., Najita, J. 1988, ApJ, 328, L19
—, Najita, J., Ostricker, E., Wilkin, F., Ruden, S., Lizano, S. 1994a, ApJ, 429, 781
—, —, Ruden, S., Lizano, S., 1994b, ApJ, 429, 797
Sylos Labini, F., Montuori, M. 1998, A&A, 331, 809
Sylos Labini, F., Montuori, M., Pietronero, L., 1998, Phys. Rep., 293, 726
Turner, D. G. 1976, ApJ, 276, 65
Wiedenmann, G., Atmanspacher, H., Scheingraber, H. 1990, Can. Journ. Phys., 69, 9, 827
Yaglom, A. M. 1966, Sov. Phys. Dokl., 2, 26
|
no-problem/9907/nucl-th9907016.html
|
ar5iv
|
text
|
# Probing the width of compound states with rotational gamma rays
## Abstract
The intrinsic width of (multiparticle-multihole) compound states is an elusive quantity, difficult to access directly, as it is masked by damping mechanisms which control the collective response of nuclei. Through microscopic cranked shell model calculations, it is found that the strength function associated with two-dimensional gamma-coincidence spectra arising from rotational transitions between states lying at energies $`>`$ 1 MeV above the yrast line exhibits a two-component structure controlled by the rotational (wide component) and compound (narrow component) damping width. This last component is found to be directly related to the width of the multiparticle-multihole autocorrelation function.
PACS: 21.10.Ky, 21.10.Re, 21.60.-n, 23.20.Lv, 25.70.Gh
Keywords: compound damping width, rotational damping, high spin states, quasi-continuum gamma spectra.
In deformed nuclei, the observed discrete rotational bands are often successfully described as states of a cranked mean field . For fixed angular momentum and increasing excitation energy, the residual interaction not included in the mean field will eventually generate compound states, which are superpositions of the many-particle many-hole mean field states. As a result, each basis band state $`|\mu \rangle `$ becomes distributed over the compound states $`|\alpha \rangle `$ within an energy interval known as the compound state damping width $`\mathrm{\Gamma }_\mu `$ .
The quantity $`\mathrm{\Gamma }_\mu `$ plays a central role in the study of basic nuclear phenomena, like the statistical and chaotic features of energy levels , or the damping of collective vibrations . However, it also appears to be inaccessible by direct experimental means, since it is essentially not possible to excite a pure many-particle many-hole state. We shall demonstrate that the spectrum of collective E2-gamma rays emitted by the compound states built out of rotational bands carries information about $`\mathrm{\Gamma }_\mu `$. This is true also for the unresolved gamma rays, which are far too weak to allow for construction of a level scheme with present experimental techniques.
Although rotational damping is a phenomenon which is independent of compound damping, being controlled by fluctuations in the alignment of the single-particle states, the occurrence of compound states in rotational nuclei is usually accompanied by damping of rotational motion . In what follows we shall study the interplay between these two independent phenomena, namely rotational damping and compound damping, as a function of spin and excitation energy, making use of a cranked shell model which has been applied earlier to the study of rotational damping and of the statistical properties of spectral fluctuations and level distances . The calculations have been performed for the rare-earth nucleus <sup>168</sup>Yb, for which the quasi-continuum gamma spectrum has been analyzed in detail experimentally. The shell model Hamiltonian, consisting of the cranked Nilsson mean-field and the surface-delta interaction acting as the residual two-body force, is diagonalized using the lowest 2000 many-particle many-hole configurations based on the cranked Nilsson single-particle orbits for each value of average angular momentum $`I`$ and parity $`\pi `$. This provides the lowest 600 energy levels for each $`I^\pi `$ covering an energy range up to about 2.5 MeV above the yrast line. (See ref. for further details). In the calculation, rotational damping sets in at about 1 MeV above the yrast line (in agreement with experiments) as a consequence of the spreading of the unperturbed rotational bands having specific and simple shell model configurations in a rotating deformed mean-field. Above the onset energy and up to a few MeV, two-particle two-hole (2p2h) and three-particle three-hole (3p3h) configurations are the dominant configurations forming the compound states. The compound damping width $`\mathrm{\Gamma }_\mu `$ of interest is the spreading width of these many-particle many-hole ($`n`$p-$`n`$h) configurations (which we label by $`|\mu \rangle `$) over the compound states $`|\alpha \rangle `$.
The spreading width $`\mathrm{\Gamma }_\mu `$ is, by definition, the energy interval over which the strength of a given $`|\mu \rangle `$ state is distributed. The distribution may formally be represented by the strength function
$$S_\mu (E)=\sum _\alpha |\langle \alpha |\mu \rangle |^2\,\delta \left(E-(E_\alpha -\overline{E}_\mu )\right),$$
where $`\langle \alpha |\mu \rangle `$ is the amplitude of the $`n`$p-$`n`$h $`|\mu \rangle `$-state contained in the compound level $`|\alpha \rangle `$ of energy $`E_\alpha `$, while $`E`$ refers to the energy relative to the centroid $`\overline{E}_\mu `$ of the strength distribution. Calculated examples of the above function are shown in Fig.1(a). It is noted that the strength spreads over a limited number of energy levels, and never shows a smooth profile, because of the discreteness of the energy levels. Furthermore, the strength function varies strongly from state to state. A smoother behaviour is obtained by taking the average of $`S_\mu (E)`$ over all $`|\mu \rangle `$ states lying within an energy bin and spin interval, trimming the delta functions with a smoothing function (in the present analysis we use Strutinsky’s Gaussian function with the Laguerre orthogonal polynomial of 10 keV width). The averaged strength function $`S_\mu (E)`$ thus obtained is shown in Fig.1(b). It is customary to define the spreading width by the FWHM of $`S_\mu (E)`$, denoted by $`\mathrm{\Gamma }_\mu ^s`$, with the label $`s`$ referring to the average strength function.
Another definition of the spreading width is possible, making use of the autocorrelation function applied to the strength function $`S_\mu (E)`$ of individual $`n`$p-$`n`$h states. The autocorrelation function
$$C_\mu (e)=\int S_\mu (E+e)\,S_\mu (E)\,dE$$
expresses the probability of pairwise strengths in $`S_\mu (E)`$ being located relative to one another at the energy distance $`e`$. If the strength function $`S_\mu (E)`$ were of Breit-Wigner shape of width $`\mathrm{\Gamma }`$, the autocorrelation function would also have a Breit-Wigner shape, displaying twice the width of the original strength function. The autocorrelation function $`C_\mu (e)`$ has a physical interpretation as the Fourier transform of the “survival probability” $`P_\mu (t)=|\langle \mu |\mu (t)\rangle |^2`$, which measures the probability of remaining in the state $`|\mu \rangle `$ during its time evolution $`|\mu (t)\rangle =e^{-iHt}|\mu \rangle `$. For the case of the Breit-Wigner strength function, $`P_\mu (t)`$ decays exponentially with a decay constant given by $`\hbar /\mathrm{\Gamma }`$. We average $`C_\mu (e)`$ over many $`|\mu \rangle `$ states in an energy bin and spin interval and apply the same smoothing as described above for the strength function $`S_\mu (E)`$. It is remarked that the autocorrelation function $`C_\mu (e)`$ contains a delta-function peak at $`e=0`$ proportional to $`\sum _\alpha |\langle \alpha |\mu \rangle |^4`$, which we remove in the following analysis, since this peak corresponds to the asymptotic value of $`P_\mu (t)`$ in the $`t\rightarrow \infty `$ limit. The resultant autocorrelation function $`C_\mu (e)`$ is shown in Fig.1(c). The correlational spreading width can be defined as half the value of the FWHM of the autocorrelation function $`C_\mu (e)`$. In order to distinguish it from the previous definition $`\mathrm{\Gamma }_\mu ^s`$ in terms of the averaged strength function, we denote this new quantity $`\mathrm{\Gamma }_\mu ^{corr}`$, making use of the label ’corr’.
The most immediate feature observed in the calculated autocorrelation function $`C_\mu (e)`$ as compared to the average strength function $`S_\mu (E)`$ is its narrower profile. Correspondingly, the correlational spreading width $`\mathrm{\Gamma }_\mu ^{corr}=41`$ keV extracted from the autocorrelation function shown in Fig.1(c) is about a factor of four smaller than $`\mathrm{\Gamma }_\mu ^s`$.
In order to understand this difference it is useful to look at the details of the strength functions associated with ’individual’ $`n`$p-$`n`$h states (cf. Fig.1(a)). The strength distribution of individual states is typically clustered within a narrower energy interval than that associated with the average strength function $`S_\mu (E)`$ (cf. e.g. the strength function associated with the 74-th and 75-th $`n`$p-$`n`$h states of angular momentum and parity $`I^\pi =40^+`$). Also, the position of the dominant strengths deviates from the centroid position $`(E=0)`$ and varies between different $`\mu `$ configurations. This variation results in a broad profile of the average strength function $`S_\mu (E)`$. In contrast, the width of the individual autocorrelation functions $`C_\mu (e)`$ reflects the clustering of strengths. Thus, the averaged autocorrelation $`C_\mu (e)`$ forms a peak around $`e=0`$ whose width is not influenced by the energy shift of the dominant strength, which only gives rise to wide tails stretching out to large positive and negative energies. Since the energy shift does not imply spreading nor influence the survival probability, we posit that the correlational width $`\mathrm{\Gamma }_\mu ^{corr}`$ is more appropriate for characterizing the spreading width than the quantity $`\mathrm{\Gamma }_\mu ^s`$. The difference between $`\mathrm{\Gamma }_\mu ^{corr}`$ and $`\mathrm{\Gamma }_\mu ^s`$ decreases gradually with increasing excitation energy of the $`n`$p-$`n`$h states. However, we find from a calculation using an extended basis of 6000 $`n`$p-$`n`$h states that $`\mathrm{\Gamma }_\mu ^{corr},\mathrm{\Gamma }_\mu ^s=133,305`$ keV for the levels $`\mathrm{\#}1800`$ to $`\mathrm{\#}2100`$ at $`I=40,41`$, indicating that around $`U\approx 3`$ MeV there is a difference of about a factor of 2 between these two quantities. At this energy, while the strength of individual $`|\mu \rangle `$ states is spread over several hundreds of levels, the distribution still displays, in most cases, a strong clusterization around a few big peaks, and does not show a smooth Breit-Wigner distribution.
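This mechanism is easy to reproduce in a toy Monte Carlo calculation. The Python sketch below is not the shell-model code, and all its numbers are invented: each $`|\mu \rangle `$ state receives a narrow cluster of strength whose position is shifted randomly from state to state, mimicking the deviation of the dominant strength from the centroid. The averaged strength function picks up the scatter of the cluster positions, while the averaged autocorrelation function (with the $`e=0`$ self-term removed) reflects only the intrinsic cluster width.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mu, n_lev = 2000, 40
narrow, shift = 0.04, 0.15   # cluster width << scatter of cluster positions

# Levels of each |mu>: a narrow cluster whose position deviates from the
# centroid (E = 0) by a random, state-dependent amount.
E = rng.normal(0.0, narrow, (n_mu, n_lev)) + rng.normal(0.0, shift, (n_mu, 1))
S = rng.exponential(1.0, (n_mu, n_lev))        # fluctuating strengths
S /= S.sum(axis=1, keepdims=True)

bins = np.linspace(-0.6, 0.6, 241)
x = 0.5 * (bins[1:] + bins[:-1])
S_avg, _ = np.histogram(E.ravel(), bins, weights=S.ravel(), density=True)

# Autocorrelation from strength pairs; self-pairs (the e = 0 delta) removed.
i, j = np.triu_indices(n_lev, k=1)
e = (E[:, i] - E[:, j]).ravel()
w = (S[:, i] * S[:, j]).ravel()
C_avg, _ = np.histogram(np.concatenate([e, -e]), bins,
                        weights=np.concatenate([w, w]), density=True)

def fwhm(y):
    above = np.where(y >= y.max() / 2)[0]
    return x[above[-1]] - x[above[0]]

print(f"FWHM of <S_mu> = {fwhm(S_avg):.3f}, FWHM of <C_mu> = {fwhm(C_avg):.3f}")
```

With the chosen toy parameters the averaged strength function comes out several times broader than the averaged autocorrelation, which is the qualitative effect described above.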
Our studies have also shown that the difference found between $`\mathrm{\Gamma }_\mu ^{corr}`$ and $`\mathrm{\Gamma }_\mu ^s`$ is related to the nature of the two-body residual interaction used in the calculations (cf. Figure 2). When the surface delta interaction (SDI) is replaced by a volume-type delta force ($`V(1,2)=v_\tau \delta (\vec{x}_1-\vec{x}_2)`$), the ratio between $`\mathrm{\Gamma }_\mu ^s`$ and $`\mathrm{\Gamma }_\mu ^{corr}`$ remains as large as for the SDI. On the other hand, using a random two-body interaction for which the two-body matrix elements $`v_{ijkl}=\langle ij|V(1,2)|kl\rangle `$ are replaced with Gaussian random numbers, it is found that the resulting $`\mathrm{\Gamma }_\mu ^{corr}`$ approximately coincides with $`\mathrm{\Gamma }_\mu ^s`$, irrespective of the average strength of the matrix elements.
Before discussing the physics at the basis of these results, it is worth mentioning that, for nuclear structure calculations at moderate excitation energies above the yrast line, the SDI or the delta residual interaction provide a better representation of the residual interaction acting among nucleons than a random force does. It is well known that the SDI (or the delta interaction) and the random interaction differ dramatically in the statistical distribution of two-body matrix elements $`v_{ijkl}`$. In fact, the distribution $`P(v_{ijkl})`$ for the SDI, plotted in Fig.3 (and for the delta interaction, not shown here), exhibits a strong skewness. In other words, it has a significant excess of large matrix elements $`|v_{ijkl}|>60`$ keV compared with a Gaussian distribution having the same r.m.s value $`\sqrt{\langle v_{ijkl}^2\rangle }=19`$ keV. In fact, the large matrix elements of the SDI contribute to the r.m.s. value as much as the small ones $`|v_{ijkl}|<60`$ keV, as seen in the right panel plotting $`v_{ijkl}^2P(v_{ijkl})`$, although large matrix elements appear quite rarely (only 2% of the total number of matrix elements). On the other hand, the Gaussian random interaction contains no such contribution from large matrix elements. The role of the large (and rare) matrix elements of the SDI can be made even clearer through a calculation of $`S_\mu (E)`$ and $`C_\mu (e)`$ carried out with a truncated SDI, where only the small matrix elements $`|v_{ijkl}|<60`$ keV are kept. This truncation has a significant effect on the calculated average strength function $`S_\mu (E)`$, diminishing $`\mathrm{\Gamma }_\mu ^s`$ to less than half of its original value. On the other hand, the average autocorrelation function $`C_\mu (e)`$ remains almost unchanged, keeping the original value of $`\mathrm{\Gamma }_\mu ^{corr}`$ (cf. Fig.2). The large matrix elements of the SDI tend to shift the energies of the levels, rather than mixing the $`n`$p-$`n`$h configurations around the energy shell. As a consequence, they have a strong effect on $`\mathrm{\Gamma }_\mu ^s`$, but not on $`\mathrm{\Gamma }_\mu ^{corr}`$.
Seen from the perspective of gamma decay cascades, the strengths $`S_\mu (E)`$ and $`C_\mu (e)`$ are zero-step functions, describing the coupling of $`n`$p-$`n`$h states locally at one value of the angular momentum $`I`$. On the other hand, the gamma transitions $`|\alpha (I)\rangle \stackrel{E_\gamma }{\rightarrow }|\alpha ^{\prime }(I-2)\rangle `$ taking place between compound energy levels of angular momenta $`I`$ and $`I-2`$ are described by the one-step E2 strength function $`S_\alpha ^{(1)}(E_\gamma )`$ while the consecutive gamma transitions $`|\alpha (I)\rangle \stackrel{E_{\gamma 1}}{\rightarrow }|\alpha ^{\prime }(I-2)\rangle \stackrel{E_{\gamma 2}}{\rightarrow }|\alpha ^{\prime \prime }(I-4)\rangle `$ are described by the two-step strength functions $`S_\alpha ^{(2)}(E_{\gamma 1},E_{\gamma 2})`$. Figure 4 shows examples of these two types of strength functions. Individual one-step strength functions $`S_\alpha ^{(1)}(E_\gamma )`$ display considerable fine structures (Fig.4(a)) which vary for different initial $`|\alpha \rangle `$ states, while their average over many states becomes a rather featureless function (Fig.4(b)), from which one can extract only the rotational damping width $`\mathrm{\Gamma }_{rot}`$. The two-step function $`S_\alpha ^{(2)}(E_{\gamma 1},E_{\gamma 2})`$, on the other hand, exhibits a two-component structure even after averaging over many states as shown in Fig.4(c,d) and discussed earlier . Projected on the $`E_{\gamma 1}-E_{\gamma 2}`$ axis, the two components are characterized by wide and narrow widths, $`\mathrm{\Gamma }_{wide}`$ and $`\mathrm{\Gamma }_{narrow}`$ (cf. Fig.4(d)). On the basis of our results for the autocorrelation function of the zero-step mixing discussed above, we shall show below that the narrow component in the two-step function can be given a more precise interpretation as a doorway phenomenon related to the compound damping width. Thus, the two-step function carries information on the compound damping width $`\mathrm{\Gamma }_\mu `$ as well as on the rotational damping width $`\mathrm{\Gamma }_{rot}`$.
The admixture of $`n`$p-$`n`$h states $`|\mu \rangle `$ into each compound state $`|\alpha \rangle `$ produces strengths $`|\langle \alpha |\mu \rangle |^2`$ which fluctuate strongly, even at high excitation energies above the yrast line ($`\sim `$ 3 MeV), where their distribution is expected to approach a Porter-Thomas shape . E2 transitions from a given state $`|\alpha \rangle `$ at angular momentum $`I`$ will single out states $`|\alpha ^{\prime }\rangle `$ at $`I-2`$, which contain strong components of the same $`|\mu \rangle `$ states as in $`|\alpha \rangle `$, and this will also take place in the second transition to $`I-4`$. In this sense, the dominant components $`|\mu (I-2)\rangle `$ at the midpoint of the two consecutive decay steps act as “doorway states” in the two-step cascade. If the spreading width $`\mathrm{\Gamma }_\mu `$ of the “doorway states” is considerably smaller than the rotational damping width $`\mathrm{\Gamma }_{rot}`$, the E2 strength distribution will exhibit structures which are associated with the “doorway states” having the rotational energy correlation, and smeared by $`\mathrm{\Gamma }_\mu `$ in both of the decay steps. Assuming a Gaussian shape (or a Breit-Wigner) for the strength function of the $`|\mu \rangle `$ states, one finds $`\mathrm{\Gamma }_{narrow}=2\mathrm{\Gamma }_\mu `$ (or $`2.9\mathrm{\Gamma }_\mu `$) for the width of the narrow component. On the other hand, the gamma rays that pass through different $`|\mu \rangle `$ configurations in the consecutive steps lose the rotational correlation up to the energy scale of $`\mathrm{\Gamma }_{rot}`$, contributing to the wide component, whose width $`\mathrm{\Gamma }_{wide}`$ is thus related to the rotational damping width as $`\mathrm{\Gamma }_{wide}\simeq 2\mathrm{\Gamma }_{rot}`$. One can estimate that the intensity $`I_{narrow}`$ of the narrow component should be inversely proportional to $`n_{door}`$, which is the number of doorway $`|\mu \rangle `$ states contained in a typical compound level $`|\alpha \rangle `$. In terms of $`\mathrm{\Gamma }_\mu `$ and the average level spacing $`D`$, one finds, assuming fluctuations to have a Porter-Thomas shape, that $`I_{narrow}=1/n_{door}\simeq 2D/\mathrm{\Gamma }_\mu `$ for Gaussian, and $`D/\mathrm{\Gamma }_\mu `$ for Breit-Wigner distributions, respectively.
As seen in Fig.5, the expected relation between the narrow width $`\mathrm{\Gamma }_{narrow}`$ of the two-step function $`S_\alpha ^{(2)}(E_{\gamma 1},E_{\gamma 2})`$ and the spreading width $`\mathrm{\Gamma }_\mu `$ of the $`n`$p-$`n`$h states is verified by the numerical calculations. The correlational spreading width $`\mathrm{\Gamma }_\mu ^{corr}`$ exhibits a clear relation to the narrow component width $`\mathrm{\Gamma }_{narrow}`$ for the different interactions discussed before. These quantities satisfy the relation $`\mathrm{\Gamma }_{narrow}\simeq (2\text{–}3)\mathrm{\Gamma }_\mu `$ expected from the above consideration. Figure 5 indicates that the intensity of the narrow component, $`I_{narrow}`$, also follows the theoretical expectation. The agreement within a factor of two between calculated and estimated values is regarded as satisfactory, since such estimates emphasize the basic physics mechanism, while effects of coherence between different $`|\mu \rangle `$ states are not included. It is noted that the spreading width $`\mathrm{\Gamma }_\mu ^s`$ extracted from the average strength function $`S_\mu (E)`$ does not exhibit any correlation with $`\mathrm{\Gamma }_{narrow}`$ (cf. Fig. 5).
Experimentally, hints of a two-component structure in the two-dimensional spectra exist, but they are not easy to extract from a dominant background of non-consecutive coincidences. The narrow component occurs in the same region of energies as that associated with the so-called “first ridge”, which consists of transitions along unmixed rotational bands. Techniques to study this narrow component will probably include analysis of fluctuations and spectra of dimension higher than two.
The numerical calculations were performed at the Yukawa Institute Computer Facility. The work is supported by the Grant-in-Aid for Scientific Research from the Japan Ministry of Education, Science and Culture (No. 10640267).
|
no-problem/9907/cond-mat9907320.html
|
ar5iv
|
text
|
# Power-Law Distributions and Lévy-Stable Intermittent Fluctuations in Stochastic Systems of Many Autocatalytic Elements
## I Introduction
The origins of power-law distributions as well as their conceptual implications have been an active topic of research in recent years. Power laws are intrinsically related to the emergence of macroscopic features which are scale invariant within some bounds and distinct from the microscopic elementary degrees of freedom. Often, these features are insensitive to the details of the microscopic structures. Well known examples of power law distributions include the energy distribution between scales in turbulence , the distribution of earthquake magnitudes, the diameter distribution of craters and asteroids, the distribution of city populations , the distributions of income and of wealth , the size-distribution of business firms and the distribution of the frequency of appearance of words in texts. The fact that multiplicative dynamics tends to generate power-law distributions was intuitively invoked long ago but the limitations in computer simulation power kept the models under the constraints imposed by the applicability of analytical treatment. More recently, a broader class of models has been studied combining computer simulations with theoretical analysis within the Microscopic Representation paradigm proposed in Ref. . In particular, it was shown that power laws appear in a variety of dynamical processes and are maintained even under highly non-stationary conditions.
In this paper we consider a generic model of stochastic dynamics with many degrees of freedom $`w_i(t)`$, $`i=1,\dots ,N`$. The time evolution of the $`w_i`$’s is described by an asynchronous update mechanism in which at each time step one variable is chosen randomly and is multiplied by a factor $`\lambda `$ taken from a predefined distribution. In addition, there is a global coupling constraint which does not allow the $`w_i`$’s to fall below the lower cutoff given by $`c\overline{w}`$, where $`\overline{w}`$ is the momentary average of the $`w_i`$’s and $`0<c<1`$ is a constant. The dynamic variables $`w_i`$ are found to exhibit a power-law distribution of the form $`p(w)\sim w^{-1-\alpha }`$. The exponent $`\alpha `$ is found to be insensitive to the distribution $`\mathrm{\Pi }(\lambda )`$ of the random factor $`\lambda `$. However, $`\alpha `$ is non-universal, and increases monotonically as a function of $`c`$. In the limit $`c=0`$ (where the $`w_i`$’s become decoupled) $`\alpha =0`$ for any finite $`N`$. However, in the “thermodynamic” limit $`N=\infty `$, $`\alpha \ge 1`$ for any positive $`c`$. Thus the two limits do not commute. This is important for applications since typically in empirical systems $`\alpha \ge 1`$, unlike the case of the free multiplicative random walk which predicts a log-normal distribution corresponding to $`\alpha =0`$ .
The time evolution of $`\overline{w}(t)`$ exhibits intermittent fluctuations parametrized by a truncated Lévy-stable distribution with the same index $`\alpha `$. This intricate relation between the distribution of the $`w_i`$’s at a given time and the temporal fluctuations of their average is examined and its relevance to empirical systems is discussed. Our model indicates that in certain cases the scaling exponent may be insensitive to the distribution of the multiplicative (random) factor $`\lambda `$ and may depend only on the “lower bound” features which control the smallest values of the elementary variables. The relation between the limiting conditions and the power-law exponent is to be applied in each particular case; it constitutes a powerful instrument for identifying and validating the relevant degrees of freedom responsible for the emergence of scaling.
The present paper aims to consolidate, through numerical simulations, the control one has over a specific model, and in this way to facilitate its further application to additional systems. The paper is organized as follows. In Sec. II we present the model. Simulations and results are reported in Sec. III, followed by a discussion in Sec. IV and a summary in Sec. V.
## II The Model
### II.1 Formal Definition
The model describes the evolution in discrete time of $`N`$ dynamic variables $`w_i(t)`$, $`i=1,\dots ,N`$. At each time step $`t`$, an integer $`i`$ is chosen randomly in the range $`1\le i\le N`$, which is the index of the dynamic variable $`w_i`$ to be updated at that time step. A random multiplicative factor $`\lambda (t)`$ is then drawn from a given distribution $`\mathrm{\Pi }(\lambda )`$, which is independent of $`i`$ and $`t`$ and satisfies $`\int _\lambda \mathrm{\Pi }(\lambda )\,d\lambda =1`$. This can be, for example, a uniform distribution in the range $`\lambda _{min}\le \lambda \le \lambda _{max}`$, where $`\lambda _{min}`$ and $`\lambda _{max}`$ are predefined limits. The system is then updated according to the following stochastic time evolution equation
$$w_i(t+1)=\lambda (t)\,w_i(t)$$
$$w_j(t+1)=w_j(t),\qquad j=1,\dots ,N;\;j\ne i.$$
(1)
This is an asynchronous update mechanism. The average value of the system components at time t is given by
$$\overline{w}(t)=\frac{1}{N}\sum _{i=1}^{N}w_i(t).$$
(2)
The term on the right hand side of Eq. (1) describes the effect of auto-catalysis at the individual level. In addition to the update rule of Eq. (1), the value of the updated variable $`w_i(t+1)`$ is constrained to be larger or equal to some lower bound which is proportional to the momentary average value of the $`w_i`$’s according to
$$w_i(t+1)\ge c\,\overline{w}(t)$$
(3)
where $`0\le c<1`$ is a constant factor. This constraint is imposed immediately after step (1) by setting
$$w_i(t+1)\rightarrow \mathrm{max}\{w_i(t+1),c\,\overline{w}(t)\},$$
(4)
where $`\overline{w}(t)`$, evaluated just before the application of Eq. (1), is used. This constraint describes the effect of auto-catalysis at the community level.
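For concreteness, a minimal Python sketch of this update scheme is given below; it implements Eq. (1) followed by the cutoff step of Eq. (4), keeping a running sum so that each step costs $`O(1)`$. The parameter values echo the example of Sec. III ($`N=1000`$, $`c=0.3`$, $`\lambda `$ uniform in $`[0.9,1.1]`$); the number of steps is kept modest here and would be larger in a production run.

```python
import numpy as np

def simulate(N=1000, c=0.3, steps=500_000, lam=(0.9, 1.1), seed=1):
    """Asynchronous dynamics of Eqs. (1) and (4): one w_i updated per step."""
    rng = np.random.default_rng(seed)
    w = np.ones(N)
    total = w.sum()
    for _ in range(steps):
        i = rng.integers(N)
        wbar = total / N                                 # average before the update
        new = max(rng.uniform(*lam) * w[i], c * wbar)    # Eq. (1), then Eq. (4)
        total += new - w[i]
        w[i] = new
    return w / total                                     # normalize sum(w_i) to 1

w = simulate()
print("min(w)/mean(w) =", w.min() / w.mean())            # should settle near c
```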
### II.2 Main Features
Our model is characterized by a fixed (conserved) number of dynamic variables $`N`$, while the sum of their values is not conserved. The conservation of the number of dynamic variables, which is enforced through the lower cutoff constraint, is essential since otherwise the system dwindles over time. The non-conservation of the sum of the values of the dynamic variables is important as well. It allows the multiplicative updating to be performed on a single variable at a time with no explicit binary interactions, since a gain in $`w_i`$ does not require a corresponding immediate loss by other $`w_j`$’s. In fact, the interactions between the dynamic variables are implied only in the step of Eq. (4) in which the lower cutoff is imposed. The dynamic rule (1) can be described by a master equation for the probability distribution $`p(w)`$ of the form
$$p(w,t+1)-p(w,t)=\frac{1}{N}\left[\int _\lambda \mathrm{\Pi }(\lambda )\,p(w/\lambda ,t)\,d\lambda -p(w,t)\right],$$
(5)
where the $`1/N`$ factor takes into account the fact that only one of the $`w_i`$’s is updated in each time step. This description applies to the bulk of the distribution of the $`w_i`$’s but not in the vicinity of the lower cutoff, where the step of Eq. (4), which is not taken into account by Eq. (5), may be dominant.
For the following analysis it is convenient to normalize the $`w_j`$’s according to
$$w_j(t)\rightarrow w_j(t)/\overline{w}(t),\qquad j=1,\dots ,N.$$
(6)
As a result, the new average $`\overline{w}(t)`$ is normalized to
$$\overline{w}(t)=\int _c^Nw\,p(w,t)\,dw=1,$$
(7)
while $`\sum _iw_i(t)=N\overline{w}=N`$. Performing this normalization step after each iteration removes the non-stationary part of the distribution and amounts statistically to an overall multiplicative factor. This (time dependent) factor, which represents a global inflation rate, can be recorded at each step. It is convenient to represent the dynamics (5) on the logarithmic scale. In terms of the new variables
$$W_i=\mathrm{ln}w_i,$$
(8)
Eq. (1) defines a random walk with steps of random size $`\mathrm{ln}\lambda `$:
$$W_i(t+1)=W_i(t)+\mathrm{ln}\lambda .$$
(9)
The corresponding probability distribution $`P(W)`$ becomes
$$P(W)=e^Wp(e^W).$$
(10)
In terms of $`P`$ and $`W`$, the master equation (5) becomes:
$$P(W,t+1)-P(W,t)=\frac{1}{N}\left[\int _\lambda \mathrm{\Pi }(\lambda )P(W-\mathrm{ln}\lambda ,t)\,d\lambda -P(W,t)\right].$$
(11)
The asymptotic stationary solution is found to be
$$P(W)\propto e^{-\alpha W}.$$
(12)
In terms of the original variable $`w_i`$, we get according to Eq. (10) a power law distribution:
$$p(w)=Kw^{-1-\alpha }.$$
(13)
The value of the exponent $`\alpha `$ is determined by the normalization condition \[Eq. (7)\] divided by the probability normalization condition $`\int _c^Np(w,t)\,dw=1`$ (in order to eliminate the constant factor $`K`$), which yields:
$$N=\frac{\alpha -1}{\alpha }\left[\frac{\left(\frac{c}{N}\right)^\alpha -1}{\left(\frac{c}{N}\right)^\alpha -\left(\frac{c}{N}\right)}\right].$$
(14)
The exponent $`\alpha `$ is given implicitly as a function of $`c`$ and $`N`$ by Eq. (14). We identify two regimes within $`0\le c<1`$ in which Eq. (14) can be simplified and $`\alpha `$ can be obtained explicitly. For a given $`N`$ and values of $`c`$ in the range $`1/\mathrm{ln}N\lesssim c<1`$ one obtains $`\alpha >1`$ as well as $`(c/N)^\alpha \ll c/N\ll 1`$. Consequently, in this range, one can neglect the $`(c/N)^\alpha `$ terms in Eq. (14) to obtain to a good approximation
$$N=\frac{\alpha -1}{\alpha }\left[\frac{1}{\left(\frac{c}{N}\right)}\right].$$
(15)
which gives the explicit, $`N`$-independent solution
$$\alpha \simeq \frac{1}{1-c}.$$
(16)
This relation is exact in the “thermodynamic” limit $`N=\infty `$. The relation (16) has two remarkable properties: (a) it does not depend on the distribution $`\mathrm{\Pi }(\lambda )`$; (b) it gives rise to $`\alpha `$ values in the experimentally realistic range $`\alpha \ge 1`$.
For finite $`N`$ and values of $`c`$ lower than $`1/\mathrm{ln}N`$ the approximation Eq. (16) breaks down and values $`\alpha <1`$ become possible. However, for any finite $`N`$, another approximation holds in the range $`c\ll 1/N<1`$. In this range $`(c/N)\ll (c/N)^\alpha \ll 1`$ and therefore one can neglect $`(c/N)^\alpha `$ in the numerator of Eq. (14) and $`c/N`$ in the denominator to obtain:
$$N=\frac{\alpha -1}{\alpha }\left[\frac{1}{\left(\frac{c}{N}\right)^\alpha }\right].$$
(17)
By taking the logarithm on both sides and neglecting terms of order $`1`$ we obtain
$$\alpha \simeq \frac{\mathrm{ln}N}{\mathrm{ln}(N/c)}.$$
(18)
Note that even for systems in which the lower bound given by $`c`$ (which is due to some microscopic discretization) is orders of magnitude smaller than $`1/N`$, the resulting $`\alpha `$ may differ significantly from the free multiplicative random walk result $`\alpha =0`$. Since $`c`$ enters the formula (18) for $`\alpha `$ through its logarithm, the system gives away information on its microscopic scale cut-off $`c`$ through the exponent $`\alpha `$ of its macroscopic power law behavior.
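Equation (14) is transcendental in $`\alpha `$ but straightforward to solve numerically. The following sketch (Python, using scipy's standard root-bracketing routine) solves it and compares the result with the two limiting approximations:

```python
import numpy as np
from scipy.optimize import brentq

def alpha_of(c, N):
    """Solve the implicit Eq. (14) for the exponent alpha."""
    x = c / N
    f = lambda a: (a - 1) / a * (x**a - 1) / (x**a - x) - N
    return brentq(f, 1e-6, 20.0)   # sign change exists across this bracket

N = 1000
for c in (0.01, 0.1, 0.3, 0.5):
    exact = alpha_of(c, N)
    print(f"c={c:4}: Eq.(14) -> {exact:.3f}, "
          f"Eq.(16) 1/(1-c) = {1/(1-c):.3f}, "
          f"Eq.(18) lnN/ln(N/c) = {np.log(N)/np.log(N/c):.3f}")
```

For $`c`$ well above $`1/\mathrm{ln}N`$ the output approaches the Eq. (16) value, while for very small $`c`$ it moves toward the Eq. (18) estimate, consistent with the two regimes discussed above.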
One should emphasize that in the region where $`\alpha <1`$ the average $`\overline{w}`$ of the distribution $`p(w)`$ in Eq. (7) is not well defined and in fact one expects in the actual runs very wide macroscopic fluctuations of this mean. These fluctuations are however never infinite because according to the formulae above, as one increases the size of the system $`N`$, the region along the $`c`$ axis where $`\alpha <1`$ shrinks to 0. For $`1<\alpha <2`$ it is only the standard deviation of the distribution $`p(w)`$ which is formally divergent. This gives rise in the actual computer simulations to wide fluctuations of the individual values of $`w_i`$. However, this divergence is kept in check too by the fact that no $`w_i`$ can possibly exceed $`N\overline{w}`$, namely $`p(N\overline{w})=0`$. This amounts to a truncation from above of the power law Eq. (13).
## III Numerical Simulations and Results
Numerical simulations of the stochastic multiplicative process described by Eqs. (1) and (4) confirm the validity of Eq. (13) for a wide range of lower bounds $`c`$. It appears that the exponent $`\alpha `$ is largely independent of the shape of the probability distribution $`\mathrm{\Pi }(\lambda )`$. Fig. 1 shows the distribution of $`w_i`$, $`i=1,\dots ,N`$, obtained for $`N=1000`$, $`c=0.3`$, and $`\lambda `$ uniformly distributed in the range $`0.9\le \lambda \le 1.1`$. A power law distribution is found for a range of three decades between $`w_{min}=0.0003`$ and $`w_{max}=0.3`$. The slope of the best linear fit within this range is given by $`\alpha =1.4`$, in agreement with Eqs. (14) and (16). On the horizontal axis of this graph the sum of all $`w_i`$’s is normalized to 1 and therefore $`\overline{w}=0.001`$. The exponent $`\alpha `$ as a function of the lower cutoff $`c`$ is shown in Fig. 2. Numerical results are presented for $`N=100`$ (empty dots), $`1000`$ (full dots) and $`5000`$ (squares). The prediction of Eq. (14) is shown for $`N=1000`$ (solid line), which is in good agreement with the numerical results for all values of $`c`$. The approximate expression Eq. (16) is also shown (dashed line). It is observed that for $`N=1000`$ this approximation gradually starts to hold as $`c`$ is increased beyond $`1/\mathrm{ln}(1000)`$, in agreement with the theoretical analysis. In general, for a given $`N`$, $`\alpha `$ is monotonically increasing as a function of $`c`$, starting from $`\alpha =0`$ (which corresponds to a $`1/w`$ distribution) at $`c=0`$, where the $`w_i`$’s are uncoupled. It is also observed that as $`N`$ is raised, the value of $`\alpha `$ which corresponds to a given $`c`$ increases monotonically. As a result, the range of validity $`1/\mathrm{ln}N\lesssim c<1`$ of the approximation Eq. (16) is extended and the knee adjacent to $`c=0`$ sharpens and becomes a discontinuity for $`N\rightarrow \infty `$. The range $`0\le c<1/N`$ in which the approximation of Eq. (18) is valid shrinks correspondingly.
Let us turn now to the dynamics of the system as a whole. According to Eq. (1), it involves a generalized random walk with step sizes distributed according to Eq. (13). Therefore, the stochastic fluctuations of $`\overline{w}(t)`$ after $`\tau `$ time steps:
$$r(\tau )=\frac{\overline{w}(t+\tau )-\overline{w}(t)}{\overline{w}(t)}$$
(19)
are governed by a truncated Lévy distribution $`L_\alpha (r)`$.
In Fig. 3 we show the distribution of the stochastic fluctuations $`r(\tau )`$ for $`\tau =50`$, which is given by a (truncated) Lévy distribution $`L_\alpha (r)`$. According to Ref. , the peak of the (truncated) Lévy-stable distribution scales with $`\tau `$ as
$$L_\alpha (r=0)\propto \tau ^{-1/\alpha }$$
(20)
where $`\alpha `$ is the index of the Lévy distribution. In Fig. 4 we show the height of the peak P$`(r=0)`$ of Fig. 3 as a function of $`\tau `$. It is found that the slope of the fit in Fig. 4 is $`0.71`$, which following the scaling relation (20) means that the index of the Lévy distribution in Fig. 3 is $`\alpha =1/(0.71)=1.4`$. These results were obtained for the same parameters which gave rise to the power law distribution with $`\alpha =1.4`$ in Fig. 1. Thus, the prediction that the fluctuations of $`\overline{w}`$ in Fig. 3 follow a (truncated) Lévy-stable distribution with an index $`\alpha `$ which equals the exponent $`\alpha `$ of the power-law distribution in Fig. 1, is confirmed.
## IV Discussion
The model considered in this paper may be relevant to a variety of empirical systems in the physical, biological and social sciences which can be described by a set of interacting dynamic variables which follow a stochastic multiplicative dynamics. Such dynamical processes may play a role in the formation of the mass distribution in the universe where clusters of galaxies accumulate and eventually form super-clusters. In a different context, the growth of cities is basically a multiplicative process governed by the reproduction rate of the local population in addition to mobility between cities.
Enhanced diffusion processes, which can be described by the Lévy-stable distribution have been observed in a variety of nonlinear dynamical systems . Unlike the stochastic model studied here, these systems are governed by deterministic rules. They exhibit intermittent chaotic motion which gives rise to enhanced diffusion.
In population dynamics, the number of individuals in each species varies stochastically from one season to the next with a multiplicative factor which depends on the local conditions. The lower bound may represent the minimal number of individuals required for the species to survive in the given environment. In this case the number of species may not be strictly constant, but species that are wiped out may be replaced by others which invade their area. In this context it was found that the number of species of a given size often follows a decreasing power-law distribution as a function of size (see e.g. Ref. ).
In the economic context of a stock-market system the dynamic variables $`w_i`$, $`i=1,\dots ,N`$ may represent the wealth of individual investors. In this case the dynamics represents the increase (or decrease) by a random factor $`\lambda (t)`$ of the wealth $`w_i`$ of the investor $`i`$ between times $`t`$ and $`t+1`$. The lower bound may represent a minimal wealth required in order to participate in stock market trading. In a more general economic model, this lower bound may be related to a basket of basic publicly funded services which every individual receives. In another possible interpretation, the $`w_i`$’s represent the capitalization (total market value) of the firm $`i`$, which may increase (or decrease) by a factor $`\lambda (t)`$ at each time step. In this case the lower bound may represent the minimal requirements for a company stock to be publicly traded.
Studies of the distribution of wealth in the general population revealed a power-law behavior (see e.g. Ref. ). More recently it was shown that the distribution of individual wealth of the 400 richest people in the United States (Forbes 400) corresponds to a power law with $`\alpha =1.36`$ \[more precisely $`W(n)=Cn^{-1/\alpha }`$ where $`W(n)`$ is the wealth of the $`n`$-th richest person on the list\]. Recent analysis of stock market returns, measured over many years, found a truncated Lévy distribution $`L_\alpha (r)`$ with the index $`\alpha =1.4`$ for an extended (but finite) range of returns $`r`$ . These results indicate that the property observed in our model, namely that the same value of the index $`\alpha `$ appears both in the power law distribution and in the Lévy-stable distribution of the fluctuations, may be of relevance in the economic context. To further explore this possibility it would be interesting to examine whether the distribution of total market values of companies in the stock market exhibits a power law behavior of the form (13) with $`\alpha =1.4`$.
## V Summary
We have studied a generic model of stochastic auto-catalytic dynamics of many degrees of freedom using computer simulations. The model consists of dynamic variables $`w_i`$, $`i=1,\dots ,N`$ which are updated randomly one at a time through an autocatalytic process at the individual level. In addition, the variables are coupled through a lower bound constraint which enhances the variables that fall below a fraction of the global average. The model may describe a large variety of systems such as stock markets and city populations. The distribution $`p(w,t)`$ of the system components $`w_i`$ turns out to fulfill a power law distribution of the form $`p(w,t)\sim w^{-1-\alpha }`$. In the limit $`N=\infty `$, $`c\rightarrow 0`$ one obtains the case often encountered in nature: $`\alpha \rightarrow 1`$. The average $`\overline{w}(t)`$ exhibits intermittent fluctuations following a Lévy-stable distribution with the same index $`\alpha `$. This relation between the distribution of system components and the temporal fluctuations of their average may be relevant to a variety of empirical systems. For example, it may provide a connection between the distribution of wealth/capitalization in a stock market and the distribution of the index fluctuations.
|
no-problem/9907/hep-lat9907011.html
|
ar5iv
|
text
|
# Comparing lattice Dirac operators with Random Matrix Theory
Supported by Fonds zur Förderung der Wissenschaftlichen Forschung in Österreich, Project P11502-PHY.
## 1 INTRODUCTION
In recent work we have been studying various aspects of the lattice Schwinger model . This model is a 2D U(1) gauge theory of photons and one or more fermion species. Of particular interest is the situation of massless fermions. In the quantized theory chiral symmetry is broken by the anomaly. The one-flavor model should exhibit a massive bosonic mode.
For the non-perturbative lattice formulation chirality is a central issue. The Wilson Dirac operator explicitly breaks chiral symmetry. The Ginsparg-Wilson condition defines a class of lattice actions with minimal violation of chirality. An explicit realization is Neuberger’s overlap Dirac operator . In another approach one attempts to construct so-called quantum perfect actions, or fixed point actions (classically perfect actions) , also obeying the Ginsparg-Wilson condition .
In the Schwinger model framework we have been studying several of these suggestions. The (approximate) fixed point Dirac operator was explicitly constructed in . It has a large number of terms but has been shown to have excellent scaling properties for the boson bound state propagators. This is not the case for the Neuberger operator ; there scaling is not noticeably improved over the Wilson operator. The overlap operator has eigenvalues distributed exactly on a unit circle in the complex plane; for the (approximate) fixed point operator our study shows small deviations from exact circularity (increasing with smaller $`\beta =1/g^2`$). In both cases we could identify chiral zero modes. Their occurrence was strongly correlated with the geometric topological charge of the gauge configuration $`\nu _{\text{geo}}=\frac{1}{2\pi }\sum _x\text{Im}\,\text{ln}\,U_{12}(x)`$ (henceforth called $`\nu `$ for brevity) with a rapidly improving agreement with the Atiyah-Singer Index Theorem (interpreted on the lattice) towards the continuum limit.
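For reference, the geometric charge is only a few lines of code on a periodic lattice. The sketch below is a minimal stand-alone version (not the production code of this study): it stores the U(1) links as phases $`\theta _\mu (x)`$ and reduces each plaquette angle to the principal branch before summing.

```python
import numpy as np

def nu_geo(theta):
    """Geometric topological charge on a periodic L x L lattice.

    theta[mu, x, y] is the phase of the link U_mu(x) = exp(i*theta)."""
    t1, t2 = theta
    plaq = (t1 + np.roll(t2, -1, axis=0)       # theta_1(x) + theta_2(x+e1)
            - np.roll(t1, -1, axis=1) - t2)    # - theta_1(x+e2) - theta_2(x)
    phase = (plaq + np.pi) % (2 * np.pi) - np.pi   # Im ln U_12, in (-pi, pi]
    return phase.sum() / (2 * np.pi)

rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, (2, 16, 16))    # random ("hot") configuration
print(round(nu_geo(theta)))                        # integer by construction
```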
Studying the spectra of the Dirac operators suggests comparison with Random Matrix Theory (RMT). There the spectrum is separated into a fluctuating part and a smooth background. The fluctuating part, determined in terms of the so-called unfolded variable (with average spectral spacing normalized to 1), is conjectured to follow predictions lying in one of three universality classes. For chiral Dirac operators these are denoted by chUE, chOE and chSE (chiral unitary, orthogonal or symplectic ensemble, respectively) . Various observables have been studied in this theoretical context. Comparison with actual data should verify the conjecture and allow one to separate the universal features from non-universal ones. In particular it should be possible to determine in this way the chiral condensate.
On one hand the limiting value of the density for small eigenvalues and large volume,
$$\pi \lim _{\lambda \rightarrow 0}\lim _{V\rightarrow \infty }\rho (\lambda )=\langle \overline{\psi }\psi \rangle ,$$
(1)
provides such an estimate due to the Banks-Casher relation. This information is contained in the smooth average (background) of the spectral distribution. However, also the fluctuating part, in particular the distribution for the smallest eigenvalue $`P(\lambda _{\text{min}})`$, contains this observable: its scaling properties with $`V`$ are given by unique functions of a scaling variable $`z\equiv \lambda V\mathrm{\Sigma }`$, depending on the corresponding universality class. Usually this is the most reliable approach to determine $`\mathrm{\Sigma }`$, which then serves as an estimate for the infinite volume value of the condensate in the chiral limit. This method does not involve unfolding, averaging or extrapolation.
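For the chUE with zero flavors and topological charge $`\nu =0`$, the smallest-eigenvalue distribution is known in closed form, $`P(z)=(z/2)\mathrm{exp}(-z^2/4)`$ with $`z=\lambda _{\text{min}}V\mathrm{\Sigma }`$, so that $`\langle z\rangle =\sqrt{\pi }`$. A minimal moment-matching estimate of $`\mathrm{\Sigma }`$ then needs only the sample mean of $`\lambda _{\text{min}}`$. The Python sketch below tests it on synthetic data drawn from the prediction itself; the lattice volume and $`\mathrm{\Sigma }`$ are invented numbers, used for illustration only.

```python
import numpy as np

def sigma_from_lmin(lmin, V):
    """Moment estimate: <z> = sqrt(pi) for the nu = 0 chUE prediction."""
    return np.sqrt(np.pi) / (V * np.mean(lmin))

rng = np.random.default_rng(1)
V, Sigma_true = 32 * 32, 0.16                # hypothetical values
u = rng.uniform(size=5000)
z = 2 * np.sqrt(-np.log(u))                  # inverse CDF of (z/2)exp(-z^2/4)
lmin = z / (V * Sigma_true)
print(f"estimated Sigma = {sigma_from_lmin(lmin, V):.4f} (input {Sigma_true})")
```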
Here we concentrate on our results for the staggered Dirac operator. It is anti-hermitian and (for $`m=0`$) its spectrum is located on the imaginary axis, but it has no exact zero modes. RMT predictions for the staggered action and the trivial topological sector have been confirmed also in 4D lattice studies . Here we emphasize, however, the rôle of non-zero topological charge.
## 2 METHOD AND RESULTS
In our study we construct sequences of (5000-10000) uncorrelated quenched gauge configurations for several lattice sizes ($`16^2`$, $`24^2`$, $`32^2`$) and values of $`\beta `$ (2, 4, 6). For these sets we then determine the various Dirac operators and study their spectral distribution. This way we can compare directly the effect of identical sets of gauge configurations on the fermionic action. In we discuss our results for the Neuberger and the fixed point operators. Since these spectra lie on or close to a circle in the complex plane, one has to project them to the (tangential) imaginary axis. We find that they exhibit the universal properties of the (expected) chUE-class, unless the physical lattice volume is too small.
In Fig. 1 we demonstrate the relevance of topological modes. The e.v. distribution density is first shown without distinguishing between different $`\nu `$, and we notice a pronounced peak at small eigenvalues. Splitting the contributions according to $`|\nu |=0`$ and 1 we observe that the peak is due to the non-trivial sectors $`\nu \ne 0`$. The trivial sector has a behavior typical of the shapes predicted by chRMT. For larger $`\beta `$ and $`V`$ the peak becomes more pronounced, justifying the hypothesis that it represents the “would-be” zero modes.
Since RMT describes the distribution excluding exact zero modes, we expect problems whenever the topological sectors cannot be separated (uppermost plot in Fig. 1) and one nevertheless tries to represent the distribution of the smallest observed eigenvalue by chRMT functions. This is demonstrated in Fig. 2, where we plot the histograms for the smallest and the 2nd smallest (shaded histogram) eigenvalues in the $`|\nu |=1`$ sector. For small $`\beta `$, strong coupling, the histogram for the smallest e.v. behaves like the $`\nu =0`$ sector prediction. For large $`\beta `$ the 2nd smallest e.v. follows a distribution expected for the smallest e.v. in the $`|\nu |=1`$ sector.
The level spacing distribution (determined in the unfolded variable) clearly has chUE (Wigner surmise) shape (Fig. 3) for all sizes and $`\beta `$.
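The unfolding-plus-spacing procedure can be sketched generically. In the Python snippet below, a smooth staircase is fitted with a polynomial, spacings are taken in the unfolded variable, and the histogram is compared with the Wigner surmise of the unitary class, $`P(s)=(32/\pi ^2)s^2e^{-4s^2/\pi }`$. A synthetic GUE matrix stands in for the projected Dirac spectrum; this is a fair proxy because the bulk spacing statistics of the chiral ensembles coincide with those of the corresponding Wigner-Dyson class.

```python
import numpy as np

def unfolded_spacings(levels, deg=9):
    """Fit the spectral staircase and return normalized nearest spacings."""
    levels = np.sort(levels)
    staircase = np.arange(1, levels.size + 1)
    smooth = np.polyval(np.polyfit(levels, staircase, deg), levels)
    s = np.diff(smooth)
    return s / s.mean()

def wigner_ue(s):
    """Wigner surmise for the unitary class."""
    return 32.0 / np.pi**2 * s**2 * np.exp(-4.0 * s**2 / np.pi)

rng = np.random.default_rng(2)
A = rng.normal(size=(400, 400)) + 1j * rng.normal(size=(400, 400))
ev = np.linalg.eigvalsh((A + A.conj().T) / 2)      # GUE stand-in spectrum
s = unfolded_spacings(ev[50:-50])                  # drop the spectrum edges
hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
print("max deviation from Wigner surmise:", np.abs(hist - wigner_ue(mid)).max())
```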
Having all eigenvalues, we can of course calculate the fermion determinant for every gauge configuration and include dynamical fermions by explicit multiplication. These “unquenched” results will be presented elsewhere.
|
no-problem/9907/hep-ph9907415.html
|
ar5iv
|
text
|
# Zeros, dips and signs in pp and p𝐩̄ elastic amplitudes
## I Introduction
The elastic differential cross sections of pp and p$`\overline{\mathrm{p}}`$ scattering at high energies present a strong forward peak, decreasing exponentially from $`|t|=0`$, and forming a dip in the range of values of transferred momentum between 1.3 and 1.5 $`\mathrm{GeV}^2`$. For larger values of $`|t|`$ there is a flatter tail that, for beam energies above 400 GeV ($`\sqrt{s}=27.5\mathrm{GeV}`$), seems to be independent of the energy.
Chou and Yang studied hadron-hadron scattering at ultra-high energies in an eikonal framework, predicting the existence of many dips. Using an impact parameter representation for the scattering amplitude, with a $`t`$ dependence inspired by the proton electromagnetic form factor, Bourrely, Soffer and Wu were able to reproduce the general features of the ISR experiments. França and Hama described pp scattering under the assumption of a purely imaginary amplitude with two zeros parametrized as a sum of exponentials. With a similar parametrization and including a real part in the amplitude, Carvalho and Menon gave a more detailed representation for pp differential cross sections at the ISR energies. Geometrical models in the eikonal approximation, including the pomeron exchange, were studied by Covolan and collaborators . Extensive descriptions of the phenomenology of the elastic hadron scattering can be seen in the review articles of Bloch and Cahn, and of Jenkovszky .
In a somewhat detailed dynamical scheme, Donnachie and Landshoff described the structure of the dip in high energy scattering through the interference of single-pomeron, double-pomeron and three-gluon exchanges, predicting that the dip in p$`\overline{\mathrm{p}}`$ would be less pronounced than in pp scattering.
Table I shows the available data on total cross section, slope parameter $`B`$ and ratio $`\overline{\rho }`$ of the forward real to imaginary parts of the amplitudes in pp and p$`\overline{\mathrm{p}}`$ scattering, which come from Fermilab \[a,f\], CERN-ISR \[b,c,d\] and CERN-SPS \[e\]. Most measured differential cross sections are limited to $`|t|<10\mathrm{GeV}^2`$, while large-angle data are available only at $`\sqrt{s}=27`$ GeV , presenting a $`|t|`$ dependence approximately of the form $`|t|^{-8}`$ , the magnitude of $`d\sigma ^{el}/dt`$ at a given large $`|t|`$ being nearly energy independent . In measurements of the differential cross section at $`\sqrt{s}=19`$ GeV for values of $`|t|`$ in the range 5 - 12 $`\mathrm{GeV}^2`$, the data points converge to those of $`\sqrt{s}=27`$ GeV for $`|t|\approx 11\mathrm{GeV}^2`$. In order to maintain the universality of the tail, we have adopted in our parametrization, for all energies between 19 and 63 GeV, the same 27 GeV values for large $`|t|`$.
In this work we explain the detailed shapes of the dips appearing in $`d\sigma ^{el}/dt`$ in terms of the locations of the single zero of the imaginary part and of the two zeros (in the pp case) of the real part. The amplitudes are obtained from a parametrization inspired by the Model of the Stochastic Vacuum (MSV) , with additional freedom in the parameters. The differential cross section at large $`|t|`$ is described through a term in the real part of the amplitude, and a change of sign of this term leads from the pp to the p$`\overline{\mathrm{p}}`$ system, the effect being illustrated by the analysis of the 53 GeV data.
The paper is organized as follows. In Sec. 2 we recall the parametrization used to describe the total and differential cross section data . In Sec. 3 the behavior of the dips of the differential cross section is described in terms of the locations of the zeros of the real and imaginary amplitudes. Sec. 4 discusses the large $`|t|`$ behavior of the differential cross section of pp and p$`\overline{\mathrm{p}}`$ systems, and finally in Sec. 5 we present comments and conclusions.
## II Parametrization of the amplitudes
We use the dimensionless scattering amplitude
$$T(s,t)=4\sqrt{\pi }s\left[iI(t)+R(t)\right],$$
(1)
with the elastic differential cross section given by
$$\frac{d\sigma ^e\mathrm{}}{dt}=\frac{1}{16\pi s^2}|T(s,t)|^2.$$
(2)
The imaginary part $`I(t)`$ and the real part $`R(t)`$ of the amplitude are respectively parametrized in the forms
$$I(t)\simeq \alpha _1\mathrm{e}^{-\beta _1|t|}+\alpha _2\mathrm{e}^{-\beta _2|t|}+\lambda 2\rho \mathrm{e}^{\rho \gamma }A_\gamma (t)$$
(3)
and
$$R(t)\simeq \alpha _1^{\prime }\mathrm{e}^{-\beta _1^{\prime }|t|}+\lambda ^{\prime }2\rho \mathrm{e}^{\rho \gamma ^{\prime }}A_{\gamma ^{\prime }}(t),$$
(4)
where
$$A_\gamma (t)\equiv \frac{\mathrm{e}^{-\gamma \sqrt{\rho ^2+a^2|t|}}}{\sqrt{\rho ^2+a^2|t|}}-\mathrm{e}^{\rho \gamma }\frac{\mathrm{e}^{-\gamma \sqrt{4\rho ^2+a^2|t|}}}{\sqrt{4\rho ^2+a^2|t|}},$$
(5)
with $`\rho =3\pi /8`$, and we have grouped the factors $`2\rho e^{\rho \gamma }A_\gamma (t)`$ in order to have $`2\rho e^{\rho \gamma }A_\gamma (0)=1`$.
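As a purely illustrative aid (not part of the original analysis), the following sketch evaluates Eqs. (1)–(5) numerically. Every parameter value in it is an invented placeholder standing in for the Table II fit (which is not reproduced here), chosen only so that the qualitative features discussed in this paper appear:

```python
import numpy as np

rho = 3 * np.pi / 8

def A_gamma(t, gamma, a):
    # Eq. (5); t stands for |t| in GeV^2
    s1 = np.sqrt(rho**2 + a**2 * t)
    s2 = np.sqrt(4 * rho**2 + a**2 * t)
    return np.exp(-gamma * s1) / s1 - np.exp(rho * gamma) * np.exp(-gamma * s2) / s2

def amplitudes(t, p):
    # Eqs. (3) and (4); the factor 2*rho*exp(rho*gamma) normalizes the A_gamma term to 1 at t = 0
    I = (p['a1'] * np.exp(-p['b1'] * t) + p['a2'] * np.exp(-p['b2'] * t)
         + p['lam'] * 2 * rho * np.exp(rho * p['g']) * A_gamma(t, p['g'], p['a']))
    R = (p['a1p'] * np.exp(-p['b1p'] * t)
         + p['lamp'] * 2 * rho * np.exp(rho * p['gp']) * A_gamma(t, p['gp'], p['a']))
    return I, R

def dsigma_dt(t, s, p):
    # Eqs. (1) and (2)
    I, R = amplitudes(t, p)
    T = 4 * np.sqrt(np.pi) * s * (1j * I + R)
    return np.abs(T)**2 / (16 * np.pi * s**2)

# invented placeholder parameters -- NOT the Table II values
pars = dict(a1=4.0, b1=2.5, a2=-0.12, b2=0.1, lam=0.3, g=6.0,
            a1p=0.05, b1p=0.8, lamp=0.4, gp=2.0, a=2.0)
t = np.linspace(0.01, 10.0, 2000)
ds = dsigma_dt(t, 53.0**2, pars)
print('dip position: |t| ~', round(t[np.argmin(ds)], 2), 'GeV^2')
```

With these placeholders the dip comes out near $`|t|\simeq 1.5\mathrm{GeV}^2`$, in the range quoted in the Introduction.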
These apparently complicated forms were inspired by the MSV parametrization for the imaginary part of the scattering amplitude. The function $`A_{\gamma ^{\prime }}(t)`$ used in the real part has a more convenient structure, compared to simple exponentials, for filling the dip left by the zero of the imaginary part. On the other hand, the simple exponential term $`\alpha _1^{\prime }\mathrm{exp}(-\beta _1^{\prime }|t|)`$ was included in the real part specifically to describe the large $`|t|`$ ($`5<|t|<15\mathrm{GeV}^2`$) data at 27 GeV. This term is universal (energy independent) for all pp data and has the opposite sign for the p$`\overline{\mathrm{p}}`$ ISR data. The exponential is made numerically equivalent to $`|t|^{-8}`$ in the $`|t|`$ range of interest, and was used to avoid the singularity at the origin. This term is not used in the description of the 546 and 1800 GeV data, where large $`|t|`$ values have not been measured.
At $`t=0`$, the optical theorem and the value of $`\overline{\rho }=R(0)/I(0)`$ fix the constraints
$$I(0)=\alpha _1+\alpha _2+\lambda =\frac{\sigma ^T}{4\sqrt{\pi }},$$
(6)
and
$$R(0)=\lambda ^{\prime }+\alpha _1^{\prime }=\overline{\rho }(\alpha _1+\alpha _2+\lambda ).$$
(7)
The values of the parameters are given in Table II, which is an update of our previous determination , and now includes the $`\sqrt{s}=19.4\mathrm{GeV}`$ data. The smoothness of the energy dependence of all parameters is remarkable. In comparison to the previous values, the parameters $`\gamma `$ and $`\gamma ^{\prime }`$ have been slightly modified to improve the description of the data in the region of the dips of the pp differential cross sections, as shown in Fig. 1 for $`\alpha _1^{\prime }>0`$ (solid lines). The case $`\alpha _1^{\prime }<0`$ (dashed lines), which applies to p$`\overline{\mathrm{p}}`$ scattering, is discussed later.
## III Zeros of the amplitudes and dips in pp scattering
The characteristic shape of the dip region of the differential cross sections has been described by Donnachie and Landshoff in terms of various mechanisms of pomeron and gluon exchanges. The low $`|t|`$ region is described by the single-pomeron exchange (P), which is dominant in this region and gives the value to the slope $`B`$. In order to describe the data, the double-pomeron exchange (PP) was introduced with the magnitude of its imaginary part chosen so as to cancel that of the P mechanism in the region where a dip is to be formed. To yield a dip, the real part of the P term is partially cancelled by the three-gluon exchange (ggg), which is dominant for large $`|t|`$, has opposite signs for pp and p$`\overline{\mathrm{p}}`$ systems, and led to the prediction that at high energies the dips would be less pronounced in p$`\overline{\mathrm{p}}`$ scattering . In addition to these contributions there is the gg ($`\rho ,\omega ,f,A_2`$) exchange which is important only at very small $`|t|`$. We remark that the parametrization given by Eqs. (3) and (4) incorporates all information about the dynamical mechanisms of pp scattering and corresponds to the sum of all terms discussed by Donnachie and Landshoff .
According to phenomenological descriptions, the structure of forward pp scattering is determined mainly by the imaginary part of the amplitude which decreases exponentially from $`|t|=0`$ and vanishes with a zero located in the interval $`|t|=1.31.5\mathrm{GeV}^2`$. This zero is partially filled by the real part of the amplitude which is positive at $`|t|=0`$ and also decreases nearly exponentially, becoming negative as $`|t|`$ reaches $`0.20.3\mathrm{GeV}^2`$.
Fig. 2 shows the separate contributions $`|\mathrm{Im}T(s,t)|^2`$ and $`|\mathrm{Re}T(s,t)|^2`$ to the differential cross section at $`\sqrt{s}=23\mathrm{GeV}`$. The real part in pp scattering (dashed line) has two zeros, becoming negative in the region between them. The zero of $`\mathrm{Im}T(s,t)`$ (solid line) is situated between the two zeros of $`\mathrm{Re}T(s,t)`$.
This description is valid for all energies of pp scattering from 19 to 63 GeV. The zeros of the real and imaginary parts of the pp amplitude as a function of the energy are shown in Fig. 3. The positions of the zeros of $`\mathrm{Im}T(s,t)`$ and of the first zeros of $`\mathrm{Re}T(s,t)`$ decrease monotonically with the energy, while that of the second zero of $`\mathrm{Re}T(s,t)`$ oscillates, being lowest at about 30 GeV. The positions of the dips in the differential cross sections are always situated between the zeros of $`\mathrm{Im}T(s,t)`$ and the second zeros of $`\mathrm{Re}T(s,t)`$. The dips are close to the zeros of $`\mathrm{Im}T(s,t)`$, but the shapes in the dip region are strongly influenced by the distance between the zeros of $`\mathrm{Im}T(s,t)`$ and $`\mathrm{Re}T(s,t)`$. At about 30 GeV they are closest, and correspondingly the dips are remarkably pronounced (narrow and deep), as can be seen in Fig. 1 (solid lines), where the changes in the form and position of the dips in the interval 19 - 63 GeV are exhibited in an extended scale.
A similar behavior is also shown by the magnitudes of the differential cross sections at the dip. Fig. 4 shows the minimum values of $`d\sigma ^{el}/dt`$ as a function of $`\sqrt{s}`$, the lowest dip occurring at 30 GeV. The values at 546 and 1800 GeV must be understood only as estimates, while at 630 GeV \[e\] the dip is clearly seen in the data.
## IV Signs of the large angle pp and p$`\overline{𝐩}`$ amplitudes
Donnachie and Landshoff observed that the quasi-$`|t|^{-8}`$ dependence of $`d\sigma ^{el}/dt`$ at large $`|t|`$ observed experimentally can be described by a three-gluon exchange mechanism, which is energy independent . Other terms contributing to the amplitude are the two-gluon-one-pomeron exchanges (ggP) and three-pomeron exchanges (PPP). The $`A_{ggP}(s,t)`$ and $`A_{PPP}(s,t)`$ terms have very little effect on the amplitude $`A(s,t)=A_{ggg}(s,t)+A_{ggP}(s,t)+A_{PPP}(s,t)`$, the dominant term at large $`|t|`$ being $`A_{ggg}(s,t)`$. This term has the form
$$A_{ggg}(s,t)=\frac{N}{|t|}\frac{5}{54}\left[4\pi \alpha _s(|\widehat{t}|)\frac{1}{m^2(|\widehat{t}|)+|\widehat{t}|}\right]^3,$$
(8)
where $`m^2(|\widehat{t}|)`$ is an effective gluon mass, $`\alpha _s(|\widehat{t}|)`$ is the running coupling constant, $`\widehat{t}\equiv t/9`$, $`\mathrm{\Lambda }=200\mathrm{MeV}`$ and $`m_0=340\mathrm{MeV}`$; for large $`|\widehat{t}|`$ this form yields the $`|t|^{-8}`$ dependence of the large $`|t|`$ differential cross section. The normalization factor $`N`$ is negative and determined by the proton wave function.
To avoid the singular behavior at small $`|t|`$, we use in the real amplitude an exponential form $`\alpha _1^{\prime }\mathrm{exp}(-\beta _1^{\prime }|t|)`$ rather than a negative power $`|t|^{-8}`$ to reproduce the tail. This term describes the large $`|t|`$ behavior at 53 GeV, where both pp and p$`\overline{\mathrm{p}}`$ differential cross sections have been measured beyond the dip region. To show the effect of the sign of the first term in Eq.(4), in Fig. 1 we disregard the differences in the experimental values of $`\sigma ^T`$, $`B`$ and $`\overline{\rho }`$ for pp and p$`\overline{\mathrm{p}}`$ systems and also keep the numerical values of all parameters of the real and imaginary parts of the amplitude, only changing the sign of $`\alpha _1^{\prime }`$ in Eq. (4). Fig. 1 shows the effects of this change of sign in the dip region, with the dashed lines for $`\alpha _1^{\prime }<0`$ representing p$`\overline{\mathrm{p}}`$ scattering.
Fig. 5 shows $`d\sigma ^{el}/dt`$ for pp ($`\alpha _1^{\prime }>0`$) and p$`\overline{\mathrm{p}}`$ ($`\alpha _1^{\prime }<0`$) scattering at 53 GeV, where we have used, in the constraint imposed through Eq. (7), the realistic values of $`\overline{\rho }`$ given for each case in Table I. The p$`\overline{\mathrm{p}}`$ dip is less deep, and becomes still flatter as the energy increases. The explanation for this behavior in terms of the zeros of the amplitudes is shown in Fig. 2. The real part of the amplitude with a positive tail ($`\alpha _1^{\prime }>0`$) presents two zeros, the zero near the origin having no influence in the dip region. The second zero, which is close to the zero of the imaginary part (solid line), is responsible for the pronounced form of the dip. When the sign of $`\alpha _1^{\prime }`$ is changed, the second zero no longer exists (or is located very far away). The change of sign of $`\alpha _1^{\prime }`$ makes, in the region of the dip, the ($`-`$) line higher than the (+) line, flattening the dips.
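This zero structure is easy to reproduce numerically. The sketch below locates the zeros with a root-bracketing routine, again using the invented placeholder parameters of the sketch in Sec. II (not the Table II fit); the brackets assume those placeholders:

```python
import numpy as np
from scipy.optimize import brentq

rho = 3 * np.pi / 8

def A_gamma(t, gamma, a=2.0):
    s1 = np.sqrt(rho**2 + a**2 * t)
    s2 = np.sqrt(4 * rho**2 + a**2 * t)
    return np.exp(-gamma * s1) / s1 - np.exp(rho * gamma) * np.exp(-gamma * s2) / s2

# placeholder amplitudes of the earlier sketch (illustrative values only)
def im_T(t):
    return (4.0 * np.exp(-2.5 * t) - 0.12 * np.exp(-0.1 * t)
            + 0.3 * 2 * rho * np.exp(rho * 6.0) * A_gamma(t, 6.0))

def re_T(t, a1p=0.05):   # a1p > 0 for pp, a1p < 0 for ppbar
    return a1p * np.exp(-0.8 * t) + 0.4 * 2 * rho * np.exp(rho * 2.0) * A_gamma(t, 2.0)

print('zero of Im T:       |t| ~', round(brentq(im_T, 0.5, 3.0), 2))
print('zeros of Re T (pp): |t| ~', round(brentq(re_T, 0.2, 0.9), 2),
      'and', round(brentq(re_T, 1.0, 3.0), 2))
# with the ppbar sign the exponential tail is negative, so Re T stays
# negative at large |t| and the second zero disappears, filling the dip
```

With these placeholders the zero of the imaginary part indeed falls between the two zeros of the real part, and flipping the sign of the tail removes the second zero, as described above.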
## V Comments and conclusions
In the present work, starting from a parametrization suggested by the Model of the Stochastic Vacuum , we relate the behavior of the dips of the elastic differential cross section to the locations of the zeros of the amplitudes in pp and p$`\overline{\mathrm{p}}`$ scattering, and investigate the role of the sign of the large $`|t|`$ tail for these systems, in the range $`\sqrt{s}=`$19 - 63 GeV.
The pp amplitude presents one zero in the imaginary part and two zeros in the real part. The depth of the dips is determined by the proximity between the zeros of the imaginary part and the second zeros of the real part, so that the dips become deeper when these zeros are closer to each other. In p$`\overline{\mathrm{p}}`$ scattering the real amplitude has only the first zero, which occurs far away from the dip region.
The large $`|t|`$ tail of the differential cross section is described by an exponential term included in the real part of the amplitude. This term, which is the same (except for a sign) for pp and p$`\overline{\mathrm{p}}`$ at all energies from 19 to 63 GeV, is responsible for the change of shape of the dip region when we pass from pp to p$`\overline{\mathrm{p}}`$ systems. The sign, which is positive for pp scattering and negative for p$`\overline{\mathrm{p}}`$, leads to the flatter dips in p$`\overline{\mathrm{p}}`$ scattering. This fact, which was pointed out by Donnachie and Landshoff , is here confirmed by the detailed description of all data.
Our parametrization describes all data in detail and has a remarkably smooth behavior that allows interpolations for predictions.
###### Acknowledgements.
The authors wish to thank M. J. Menon, A. F. Martini and P. A. Carvalho for information on their work.
|
no-problem/9907/cond-mat9907467.html
|
ar5iv
|
text
|
# Detection of Coulomb Charging around an Antidot
## Abstract
We have detected oscillations of the charge around a potential hill (antidot) in a two-dimensional electron gas as a function of a perpendicular magnetic field $`B`$. The field confines electrons around the antidot in closed orbits, the areas of which are quantised through the Aharonov-Bohm effect. Increasing $`B`$ reduces each state’s area, pushing electrons closer to the centre, until enough charge builds up for an electron to tunnel out. This is a new form of the Coulomb blockade seen in electrostatically confined dots. We have also studied $`h/2e`$ oscillations and found evidence for coupling of opposite spin states of the lowest Landau level.
Coulomb blockade (CB) in an open system sounds paradoxical. CB arises from the discrete charge of an electron. For charging to happen, it has been generally believed that electrons must be electrostatically confined in a small cavity. Although it has recently been reported that “open” dots can also show charging effects , they are not completely open systems, still having some degree of electrostatic confinement.
In contrast, an antidot, which is a potential hill in a two-dimensional electron gas (2DEG), is in a completely open system. Thus it has often been assumed that CB does not occur when an electron tunnels through a state bound around an antidot by a large perpendicular magnetic field $`B`$ ($`>0.2`$ T). Here, electron waves travel phase-coherently around the antidot with quantised orbits, each enclosing an integer number of magnetic flux quanta $`h/e`$ through the Aharonov-Bohm (AB) effect. Where the potential is sloping, these single-particle (SP) states have distinct energies. Conductance oscillations observed as a function of $`B`$ or gate voltage have been attributed to resonant tunnelling through such discrete states from one edge of the sample to the other. This causes resonant backscattering or transmission depending on the tunnelling direction . Up until now, no charging effect has been taken into account in the system . However, Ford et al. proposed that antidot charging should be present to explain double-frequency AB oscillations, where two sets of resonances through the two spin states of the lowest Landau level (LL) were found to lock exactly in antiphase, giving $`h/2e`$ periodicity, and to have the same amplitudes in spite of different tunnelling probabilities . There is as yet no full explanation for these phenomena.
The aim of this paper is to demonstrate that magnetic confinement causes charging in antidot systems, although there is no electrostatic confinement. We have conducted non-invasive detector experiments and obtained clear evidence of charge oscillations around an antidot as a function of $`B`$ . We have also investigated $`h/2e`$ AB conductance oscillations. The data show that the resonance only occurs through states of one spin, explaining the matched amplitudes.
The samples were fabricated from a GaAs/AlGaAs heterostructure containing a 2DEG of sheet carrier density $`2.2\times 10^{15}`$ m<sup>-2</sup> with mobility 370 m<sup>2</sup>/Vs. An SEM micrograph of a device is shown in Fig. 1(a). A square dot gate (G<sub>dot</sub>), 0.3 $`\mu `$m on a side, was contacted by a second metal layer evaporated on top of an insulator (not shown) to allow independent control of gate voltages. The lithographic widths of the antidot and detector constrictions were 0.45 and 0.3 $`\mu `$m, respectively. All constrictions showed good 1D ballistic quantisation at $`B=0`$. A voltage of $`4.5`$ V on the separation gate (G<sub>sep</sub>), of width 0.1 $`\mu `$m, divided the 2DEG into separate antidot and detector circuits. The detector gate (G<sub>det</sub>) squeezed the detector constriction to a high resistance to make it very sensitive to nearby charge. To maximise the sensitivity, transresistance measurements were made by modulating the dot-gate voltage (or the voltage on the side-gate G<sub>side</sub>) at 10 Hz with 0.5 mV rms and applying a 1 nA DC current through the detector constriction. Simultaneously, the transconductance of the antidot circuit was measured with a 10 $`\mu `$V DC source-drain bias, when necessary. The experiments were performed at temperatures down to 50 mK.
Figures 1(b) and (c) show the transresistance $`dR_{\mathrm{det}}/dV_{\mathrm{G}\mathrm{side}}`$ (transconductance $`dG_{\mathrm{ad}}/dV_{\mathrm{G}\mathrm{side}}`$) vs $`B`$ of the detector (antidot) circuit in two different field regions: (b) $`\nu _\mathrm{c}=2`$ and (c) $`\nu _\mathrm{c}<1`$, where $`\nu _\mathrm{c}`$ is the filling factor in both antidot constrictions, which were determined from the conductance $`G_{\mathrm{ad}}`$. The filling factors in the bulk 2DEG were $`\nu _\mathrm{b}=7`$ and 2, respectively. The oscillations in $`G_{\mathrm{ad}}`$ occur as SP states around the antidot rise up through the Fermi energy $`E_\mathrm{F}`$. The AB effect causes the overall period $`\mathrm{\Delta }B`$ to be $`h/eS`$, where $`S`$ is the area enclosed by the state at $`E_\mathrm{F}`$. The curve in (b) has pairs of spin-split peaks, whereas in (c) only one spin of the lowest LL is present. The dips in $`dR_{\mathrm{det}}/dV_{\mathrm{G}\mathrm{side}}`$ correspond to a saw-tooth oscillation in the change $`\mathrm{\Delta }R_{\mathrm{det}}`$ from the background resistance (see Fig. 1(d)). Here, note that a small increase in $`B`$ or decrease in $`V_\mathrm{G}`$ has a similar effect on the SP states. Hence, integrations with respect to $`B`$ and $`V_\mathrm{G}`$ are qualitatively equivalent. Thus the net charge $`\mathrm{\Delta }q`$ nearby suddenly becomes more positive (making the effective gate voltage less negative) whenever the antidot comes on to resonance (since the dips line up with the zeros in $`dG_{\mathrm{ad}}/dV_{\mathrm{G}\mathrm{side}}`$). The charging signals are not dependent on the presence of conductance oscillations in the antidot circuit. It is still possible to observe the signal with no applied bias in the antidot circuit, or when the side-gate voltage is set to zero so that there is no tunnelling between that edge and the antidot. Hence we conclude that this charge oscillation is associated with states near the antidot, and interpret it as CB.
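As a rough back-of-envelope check of the $`h/e`$ period (with an assumed orbit radius rather than the measured device parameters):

```python
import math

h_over_e = 4.135667e-15   # flux quantum h/e in Wb
r = 0.2e-6                # hypothetical effective orbit radius in m
S = math.pi * r**2        # area enclosed by the state at E_F
print('Delta_B = h/(e S) ~', round(1e3 * h_over_e / S, 1), 'mT')   # ~ 33 mT
```

In practice the logic runs the other way: the measured period $`\mathrm{\Delta }B`$ fixes the enclosed area $`S=h/(e\mathrm{\Delta }B)`$.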
Before showing how the charging occurs, it is worth reconsidering the shape of the antidot potential. The conventional picture is a potential hill smoothly increasing towards the centre as shown dotted in Fig. 2(a). However, for $`B>0`$, such a potential would require abrupt changes in the carrier density where LL intersect $`E_\mathrm{F}`$, which is not electrostatically favourable. Chklovskii et al. treated such a problem along the edge of a 2D system and introduced alternating compressible and incompressible strips. Compressible strips require flat regions in the self-consistent potential as depicted by a solid line in the figure. It has always been considered that the potential should not be completely flat in antidot systems , since the presence of several SP states at $`E_\mathrm{F}`$ makes AB conductance oscillations impossible in the simple non-interacting picture. However, if CB of tunnelling into the compressible region occurs, conductance oscillations with periodicity $`h/e`$ can still occur for such a self-consistent potential.
We explain the charging as follows. As $`B`$ increases, each SP state encircling the antidot moves inwards, reducing its area to keep the flux enclosed constant. This results in a shift of the electron distribution towards the antidot centre. One may think such a shift should not occur due to screening in the compressible region. However, since each state is discrete and is trapped around the antidot, and the incompressible regions obviously have one electron per state, the total number of electrons in the compressible regions must be an integer. Hence the compressible region also moves inwards with the states. As a result, a net charge $`\mathrm{\Delta }q`$ builds up in the region. When it reaches $`e/2`$, one electron can leave the region and $`\mathrm{\Delta }q`$ becomes $`+e/2`$. This is when resonance occurs, as for CB in a dot. At the same time, the compressible region, by losing the innermost state and acquiring one at its outer edge, shifts back to its original position just after the previous resonance. The same argument also applies, of course, even if there is no compressible region, as the states are still discrete.
As in quantum dot systems, the SP energy spacing $`\mathrm{\Delta }E_{\mathrm{sp}}`$ and the charging energy $`e^2/C`$ together determine when resonance occurs ($`C`$ is the capacitance of the antidot). We have deduced these energy scales from the temperature dependence of the charging signals and the antidot conductance oscillations, and the DC-bias measurements of the differential antidot conductance. The detailed analysis is given in Ref. . We found that $`\mathrm{\Delta }E_{\mathrm{sp}}`$ decreases as $`1/B`$, as expected. In contrast, Maasilta and Goldman found an almost constant energy gap, which we interpret as the interplay of $`\mathrm{\Delta }E_{\mathrm{sp}}`$ and a charging energy which is small at low $`B`$ and saturates at high $`B`$.
The presence of charging should help to explain the $`h/2e`$ AB oscillations. Fig. 3 shows AB conductance oscillations as both constrictions are narrowed while keeping the symmetry. On the $`\nu _\mathrm{c}=1`$ plateau the outer spin state is excluded from the constrictions. Peaks rising above this plateau for $`B<2.7`$ T are due to inter-LL resonant transmission . This is only noticeable when resonant backscattering is absent, i.e., on the plateau, and is irrelevant in the arguments here. We focus on the resonant backscattering process, which is caused by intra-LL scattering in the constrictions (see diagrams at the right of Fig. 3). The tunnelling probability into the antidot states from the current-carrying edges is controlled by the side-gate voltages. The flat $`\nu _\mathrm{c}=1`$ plateau implies that there is no tunnelling into the inner spin state. Hence, at higher $`\nu _\mathrm{c}`$ at the same field, where the constrictions are wider, there can also be no such tunnelling, despite the presence of $`h/2e`$ oscillations. It is not yet clear why the outer spin states should come on to resonance twice per $`h/e`$ period; however, the equal amplitude of the resonances can be explained since the tunnelling probability for that spin should be almost the same for each resonance.
In conclusion, we have used a non-invasive charge detector to show that tunnelling into antidot states is Coulomb blockaded. When states of both spins are occupied, $`h/2e`$ oscillations are seen but tunnelling is only via states of one spin, showing that there is a strong coupling with states of the other spin.
This work was funded by the UK EPSRC. We thank C. H. W. Barnes and C. G. Smith for useful discussions. M. K. acknowledges financial support from Cambridge Overseas Trust.
Present address: The Technology Partnership PLC, Melbourn Science Park, Melbourn, SG8 6EE, UK.
|
no-problem/9907/cond-mat9907263.html
|
ar5iv
|
text
|
# Thermal and Tunneling Pair Creation of Quasiparticles in Quantum Hall Systems
## I Introduction
The quantum Hall (QH) effect has attracted much attention from various points of view. It is characterized by the appearance of Hall plateaux and minima in the longitudinal resistivity. Observation of a zero-resistance state implies the existence of a gap in the excitation spectrum leading to the incompressibility of the system. A Hall plateau develops when quasiparticles are pinned by impurities. Quasiparticles are vortices and skyrmions . The aim of this paper is to investigate semiclassically the mechanism of thermal creation of quasiparticle pairs in the presence of impurities. It is pointed out that quantum-mechanical tunneling plays an important role in this process.
Quasiparticles are activated thermally at finite temperature $`T`$, and contribute to the longitudinal current. It is experimentally known that the longitudinal resistivity exhibits a behavior of the Arrhenius type,
$$\rho _{xx}\propto \mathrm{exp}\left(-\frac{\mathrm{\Delta }_{\text{gap}}}{2k_BT}\right),$$
(1)
with $`k_B`$ the Boltzmann constant. The gap energy $`\mathrm{\Delta }_{\text{gap}}`$ is expected to be given by the excitation energy of a pair of quasihole ($`\mathrm{\Delta }_{\text{qh}}`$) and quasielectron ($`\mathrm{\Delta }_{\text{qe}}`$). However, the gap energy experimentally observed is much smaller than the theoretical value even if an effect of finite layer thickness is taken into account. Phenomenologically it is well given by
$$\mathrm{\Delta }_{\text{gap}}=\mathrm{\Delta }_{\text{qh}}+\mathrm{\Delta }_{\text{qe}}\mathrm{\Gamma }_{\text{offset}},$$
(2)
with a sample-dependent offset $`\mathrm{\Gamma }_{\text{offset}}`$.
The offset may be dominated by a Landau-level broadening due to impurities . They are mainly provided by the donors in the bulk situated several hundreds of angstroms away from the electron layer. The Hamiltonian includes the impurity term $`H_{\text{imp}}`$ given by
$$H_{\text{imp}}=e\int d^2x\rho (𝒙)V_{\text{imp}}(𝒙),$$
(3)
where $`V_{\text{imp}}(𝒙)`$ is the Coulomb potential made by impurities. For a single impurity it may be approximated by
$$V_{\text{imp}}(𝒙)=\pm \frac{Ze}{4\pi \epsilon }\frac{1}{\sqrt{|𝒙|^2+d_{\text{imp}}^2}},$$
(4)
where $`\pm Ze`$ is the impurity charge, $`\epsilon `$ is the dielectric constant ($`4\pi \epsilon \simeq 12.9`$), and $`d_{\text{imp}}`$ is the distance from the layer to the impurity in the bulk.
MacDonald et al. derived qualitatively the behavior (2) by studying an impurity effect on the activation energy of magnetorotons in a perturbation theory, though their predicted value for $`\mathrm{\Delta }_{\text{gap}}`$ becomes negative and is physically unacceptable. Furthermore, it is not clear how magnetorotons (electrically neutral objects) would explain magnetotransport experiments. See also Ref. for a related analysis based on magnetorotons.
We present a simple semiclassical picture for a pair creation of quasihole and quasielectron. Arguing that it occurs to minimize the impurity term (3), we derive the formula (2) with
$$\mathrm{\Gamma }_{\text{offset}}\simeq e^{*}|V_{\text{imp}}(0)|,$$
(5)
where a quasiparticle is assumed to be pointlike. Here, $`e^{*}`$ is the electric charge of quasiholes, $`e^{*}=e/m`$ at the filling factor $`\nu =n/m`$ with odd $`m`$ ($`m=1,3,5,\mathrm{\dots }`$). We also argue that thermal activation is aided by a tunneling process at sufficiently low temperature and at strong magnetic field. The Arrhenius formula (1) is generalized as
$$\rho _{xx}\propto \mathrm{exp}\left(-\frac{\mathrm{\Delta }_{\text{gap}}}{2k_BT}\right)\left[1+e^{-S_{\text{tunnel}}/\hbar }\mathrm{exp}\left(\frac{A^{*}}{k_BT}\right)\right]^{1/2}.$$
(6)
This formula contains two energy scales $`\mathrm{\Delta }_{\text{gap}}`$ and $`A^{*}`$, and $`S_{\text{tunnel}}`$ is the Euclidean action for the tunneling process.
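A minimal numerical sketch of the generalized formula (6), using the energy scales obtained below for the $`B=8.9`$ T fit in Sec. IV (the overall prefactor is left out):

```python
import numpy as np

Delta_gap, A_star, S_over_hbar = 1.7, 0.69, 2.0   # K, K, dimensionless (8.9 T fit)

def rho_xx(T):
    # Eq. (6), up to a constant prefactor; T in Kelvin
    thermal = np.exp(-Delta_gap / (2.0 * T))
    bracket = 1.0 + np.exp(-S_over_hbar) * np.exp(A_star / T)
    return thermal * np.sqrt(bracket)

for T in (0.1, 0.2, 0.5, 1.0, 2.0, 5.0):
    print(T, 'K ->', rho_xx(T))
```

At low $`T`$ the bracket is dominated by the tunneling term, and the effective activation energy is reduced from $`\mathrm{\Delta }_{\text{gap}}/2`$ to $`(\mathrm{\Delta }_{\text{gap}}-A^{*})/2`$.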
This paper is composed as follows. In Section II, we summarize theoretical values of gap energies at various filling factors. We then compare them with typical experimental data based on the formula (2). In Section III, we discuss semiclassically the dispersion relation of a neutral excitation mode made of a quasihole-quasielectron pair. In Section IV, analyzing thermal creation of quasiparticle pairs, we derive the Arrhenius formula (1) and the generalized formula (6) together with (2) and (5). We show that the generalized formula gives an excellent fit of the resistivity $`\rho _{xx}`$ for typical data.
## II Gap Energies
Vortices are quasiparticles in fractional QH states. They have electric charges $`\pm e^{*}`$ at $`\nu =n/m`$, where $`e^{*}=e/m`$. The excitation energy of a vortex pair is solely made of the Coulomb energy,
$$\mathrm{\Delta }_{\text{qh}}+\mathrm{\Delta }_{\text{qe}}=\alpha _{\text{pair}}^{1/m}E_\text{C}^0,$$
(7)
where $`E_\text{C}^0=e^2/4\pi \epsilon \ell _B`$ is the energy unit. It is expected that
$$\alpha _{\text{pair}}^{1/m}=\frac{1}{m^2}\alpha _{\text{pair}}.$$
(8)
There are several independent estimates of the numerical parameter $`\alpha _{\text{pair}}^{1/3}`$: $`\alpha _{\text{pair}}^{1/3}\simeq 0.056`$ according to Laughlin ; $`\alpha _{\text{pair}}^{1/3}\simeq 0.053`$ according to Chakraborty ; $`\alpha _{\text{pair}}^{1/3}\simeq 0.094`$ according to Morf and Halperin ; $`\alpha _{\text{pair}}^{1/3}\simeq 0.105`$ according to Haldane and Rezayi ; $`\alpha _{\text{pair}}^{1/3}\simeq 0.106`$ according to Girvin, MacDonald and Platzman ; $`\alpha _{\text{pair}}^{1/3}\simeq 0.065`$ according to our semiclassical analysis . Actual samples have finite layer widths, which may considerably decrease the Coulomb energies . We treat $`\alpha _{\text{pair}}^{1/m}`$ as a phenomenological parameter to analyze experimental data. As we derive in Section IV, the gap energies at $`\nu =n/3`$ and $`\nu =n/5`$ are given by
$`\mathrm{\Delta }_{\text{gap}}^{1/3}`$ $`=\alpha _{\text{pair}}^{1/3}E_\text{C}^0-{\displaystyle \frac{e}{3}}|V_{\text{imp}}(0)|,`$ (10)
$`\mathrm{\Delta }_{\text{gap}}^{1/5}`$ $`=\alpha _{\text{pair}}^{1/5}E_\text{C}^0-{\displaystyle \frac{e}{5}}|V_{\text{imp}}(0)|.`$ (11)
We have fitted typical data due to Boebinger et al. based on these formulas in Fig.1. We have used $`\alpha _{\text{pair}}^{1/3}=0.50/3^2\simeq 0.056`$ and $`\alpha _{\text{pair}}^{1/5}=0.64/5^2\simeq 0.026`$, where the relation (8) holds approximately. We have taken the impurity potential $`V_{\text{imp}}(0)`$ common to all samples, $`e|V_{\text{imp}}(0)|=20.4`$ K. It would imply $`Z/d_{\text{imp}}\simeq 1/650(\AA )`$ if the impurity potential (4) is assumed.
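For orientation, the sketch below evaluates Eq. (10) as a function of the magnetic field, using $`\alpha _{\text{pair}}^{1/3}=0.056`$ and $`e|V_{\text{imp}}(0)|=20.4`$ K from the fit above, together with the GaAs value $`4\pi \epsilon \simeq 12.9`$; since $`E_\text{C}^0\propto \sqrt{B}`$ while the offset is field independent, the gap closes at a finite field (cf. Eq. (31) below):

```python
import numpy as np
from scipy.optimize import brentq

e, eps0, hbar, kB = 1.602177e-19, 8.854188e-12, 1.054572e-34, 1.380649e-23
eps = 12.9 * eps0                      # GaAs permittivity

def E_C0_kelvin(B):
    l_B = np.sqrt(hbar / (e * B))      # magnetic length
    return e**2 / (4 * np.pi * eps * l_B) / kB

alpha_13, offset = 0.056, 6.8          # alpha_pair^{1/3}; e|V_imp(0)|/3 in K

def gap_13(B):                         # Eq. (10) in Kelvin
    return alpha_13 * E_C0_kelvin(B) - offset

print('Delta_gap^{1/3}(20.9 T) ~', round(gap_13(20.9), 1), 'K')
print('gap closes at B ~', round(brentq(gap_13, 0.5, 30.0), 1), 'T')
```

At $`B=20.9`$ T this gives $`\mathrm{\Delta }_{\text{gap}}\simeq 6.1`$ K, the value used for the fit in Sec. IV.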
The $`\nu =1`$ QH state is a QH ferromagnet, where skyrmions are excited. The excitation energy consists of the exchange energy $`E_{\text{ex}}`$, the Coulomb self energy $`E_C`$ and the Zeeman energy $`E_Z`$,
$`E_{\text{ex}}`$ $`=\sqrt{{\displaystyle \frac{\pi }{32}}}E_C^0,`$ (12)
$`E_C`$ $`={\displaystyle \frac{\beta }{2\kappa }}E_C^0,`$ (13)
$`E_Z`$ $`=2\stackrel{~}{g}\kappa ^2\mathrm{ln}\left({\displaystyle \frac{\sqrt{2\pi }}{32\stackrel{~}{g}}}+1\right)E_C^0,`$ (14)
where $`\stackrel{~}{g}=g^{*}\mu _BB/E_\text{C}^0`$. The skyrmion size $`\kappa `$ is determined to minimize the total energy. The resulting gap energy is
$$\mathrm{\Delta }_{\text{gap}}^1\simeq 2\left(\sqrt{\frac{\pi }{32}}+\frac{3\beta }{4\kappa }\right)E_\text{C}^0-e|V_{\text{imp}}(0)|,$$
(15)
with the skyrmion size
$$\kappa =\frac{1}{2}\beta ^{1/3}\left\{\stackrel{~}{g}\mathrm{ln}\left(\frac{\sqrt{2\pi }}{32\stackrel{~}{g}}+1\right)\right\}^{-1/3}.$$
(16)
The parameter $`\beta `$ measures the strength of the Coulomb energy, and we have $`\beta =3\pi ^2/64`$ for a large skyrmion. However, an actual skyrmion size is small, $`\kappa \simeq 1`$. Furthermore, there will be a modification due to a finite thickness of the layer . We treat $`\beta `$ as a phenomenological parameter. We have used $`\beta =0.24`$ to fit typical data in Fig.2. The potential $`V_{\text{imp}}(0)`$ is taken phenomenologically as $`e|V_{\text{imp}}(0)|\simeq 40`$–$`50`$ K. It would imply $`Z/d_{\text{imp}}\simeq 2.5/650(\AA )`$ in (4).
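The minimization behind Eqs. (15)–(16) is easy to reproduce numerically. The sketch below uses the phenomenological $`\beta =0.24`$ of the fit; the values of $`\stackrel{~}{g}`$ and of the impurity offset (in units of $`E_\text{C}^0`$) are our own illustrative assumptions:

```python
import numpy as np

beta = 0.24                            # phenomenological value used in the fit

def kappa(g_tilde):
    # Eq. (16): skyrmion size minimizing E_C + E_Z
    L = np.log(np.sqrt(2.0 * np.pi) / (32.0 * g_tilde) + 1.0)
    return 0.5 * beta**(1.0 / 3.0) * (g_tilde * L)**(-1.0 / 3.0)

def gap(g_tilde, offset):
    # Eq. (15) in units of E_C^0; 'offset' = e|V_imp(0)|/E_C^0 (assumed here)
    return 2.0 * (np.sqrt(np.pi / 32.0) + 3.0 * beta / (4.0 * kappa(g_tilde))) - offset

for gt in (0.005, 0.01, 0.02):         # illustrative Zeeman strengths
    print('g~ =', gt, ' kappa ~', round(kappa(gt), 2),
          ' gap/E_C^0 ~', round(gap(gt, offset=0.2), 2))
```

The optimal $`\kappa `$ of order one confirms that the large-skyrmion value $`\beta =3\pi ^2/64`$ should not be taken literally.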
Electrons are excited to a higher Landau level and spins are flipped at $`\nu =2,4,\mathrm{\dots }`$. The gap energy is
$$\mathrm{\Delta }_{\text{gap}}^\nu =\hbar \omega _c+\alpha _{\text{pair}}^\nu E_\text{C}^0-g^{*}\mu _BB-e|V_{\text{imp}}(0)|,$$
(17)
where $`\alpha _{\text{pair}}^\nu `$ is the Coulomb energy associated with the electron-quasihole excitation. It has been estimated that $`\alpha _{\text{pair}}^2=\sqrt{\pi /8}\simeq 0.63`$ by Kallin and Halperin . We have used $`\alpha _{\text{pair}}^2=0.65`$ to fit typical data due to Usher et al. in Fig.3. The potential $`V_{\text{imp}}(0)`$ is taken phenomenologically as $`e|V_{\text{imp}}(0)|\simeq 37.7`$ K. It would imply $`Z/d_{\text{imp}}\simeq 2/650(\AA )`$ in (4).
## III Dispersion Relation
Thermal fluctuation activates a quasielectron out of the ground state, leaving behind a quasihole. They are created as electrically neutral objects. Having charges $`\pm e^{*}`$ in the magnetic field $`B_{\perp }`$, with $`e^{*}=e/m`$ at $`\nu =n/m`$, they feel the Coulomb attractive force as well as the Lorentz force. We examine semiclassically the condition that these two forces are balanced . Let $`V_{\text{pair}}(r)`$ be the potential energy of the quasiparticle pair with a separation $`r`$: The attractive force is $`\partial V_{\text{pair}}(r)/\partial r`$. The Lorentz force is $`e^{*}vB`$ when the pair moves parallel to the $`x`$ axis with velocity $`v`$. They are balanced when
$$\frac{\partial V_{\text{pair}}(r)}{\partial r}=e^{*}vB.$$
(18)
On the other hand the velocity is given by
$$v=\frac{1}{\hbar }\frac{\partial E_{\text{pair}}(𝒌)}{\partial k}$$
(19)
in terms of the dispersion relation $`E_{\text{pair}}(𝒌)`$ with $`𝒌=(k,0)`$. The total energy $`E_{\text{pair}}`$ is different from the potential energy $`V_{\text{pair}}`$ by the kinetic energy, but it is quenched by the lowest Landau level projection . Then, we may equate
$$E_{\text{pair}}=V_{\text{pair}}.$$
(20)
It follows from (18), (19) and (20) that
$$r=mk\ell _B^2.$$
(21)
The dispersion relation $`E_{\text{pair}}(𝒌)`$ of a neutral excitation is obtainable from the potential energy $`V_{\text{pair}}(r)`$ with use of this relation. This semiclassical picture is easily justified by a quantum-mechanical analysis.
The potential energy $`V_{\text{pair}}(r)`$ may be approximated by
$$V_{\text{pair}}(r)\simeq \mathrm{\Delta }_{\text{qh}}+\mathrm{\Delta }_{\text{qe}}-\frac{e^2}{4\pi \epsilon r},$$
(22)
for $`r\gg \ell _B`$. However, this is a poor approximation for small separation. Indeed, according to this formula, $`V_{\text{pair}}(r)`$ becomes negative for sufficiently small $`r`$. It is necessary to take into account an overlap of quasiparticles. Quasiparticles are extended objects, vortices and skyrmions, described by classical fields. We place a quasihole at the origin ($`𝒙=0`$) and a quasielectron at the point ($`𝒙=𝒓`$). The density modulation is $`\varrho _{\text{pair}}(𝒙;𝒓)=\varrho _{\text{qh}}(𝒙)+\varrho _{\text{qe}}(𝒙-𝒓)`$. The Coulomb energy is
$$V_{\text{pair}}(r)=\frac{1}{2}\frac{e^2}{4\pi \epsilon }\int d^2xd^2x^{\prime }\frac{\varrho _{\text{pair}}(𝒙;𝒓)\varrho _{\text{pair}}(𝒙^{\prime };𝒓)}{|𝒙-𝒙^{\prime }|}.$$
(23)
It depends only on the distance $`r`$ between two quasiparticles provided they have cylindrically symmetric configurations. This is the excitation energy of a quasiparticle pair apart from a possible Zeeman energy. It is reduced to (22) when two quasiparticles are sufficiently apart. It is a dynamical problem how $`\varrho _{\text{pair}}(𝒙;𝒓)`$ behaves as $`𝒓\to 0`$. We have $`V_{\text{pair}}(0)=0`$ if the quasihole density is precisely cancelled by the quasielectron density, $`\varrho _{\text{qh}}(𝒙)=-\varrho _{\text{qe}}(𝒙)`$, as illustrated in Fig.4(a). It implies the existence of a gapless mode in the dispersion relation $`E_{\text{pair}}(k)`$ via the relations (20) and (21).
If the spin degree of freedom is frozen, there exists no cancellation since the QH state is incompressible. Otherwise, a gapless mode, which can only exist in the density fluctuation, would lead to compressibility. Hence, it must be that $`\varrho _{\text{pair}}(𝒙;𝒓)\ne 0`$ at $`𝒓=0`$. When there exists a short-range repulsive interaction between a vortex and an antivortex, the energy $`E_{\text{pair}}(r)`$ may have a minimum describing a magnetoroton at $`r=r_m\simeq \ell _B`$ as in Fig.4(b).
In QH ferromagnets, on the contrary, the cancellation occurs because the dispersion relation contains a gapless mode , as illustrated in Fig.4(a). A gapless mode develops in the spin fluctuation, and hence QH ferromagnets are incompressible in spite of the existence of a gapless mode. By neglecting the Zeeman energy, the perturbative dispersion relation is given by ,
$$E_{\text{pair}}(𝒌)=\frac{2\rho _s}{\rho _0}𝒌^2,$$
(24)
which implies
$$V_{\text{pair}}(r)=\frac{2\rho _s}{m^2\rho _0\ell _B^4}r^2,\text{ at }r\to 0,$$
(25)
where $`\rho _s`$ is the spin stiffness $`\rho _s=\nu e^2/16\sqrt{2\pi }(4\pi \epsilon )\ell _B`$.
## IV Thermal Activation
We study thermal creation of quasiparticle pairs in QH ferromagnets with a gapless dispersion relation \[Fig.4(a)\]. We consider two cases. First we analyze a purely thermal process. We then include a tunneling process. As we shall see, it is obvious that our analysis is applicable also to the system where quasiparticles are vortices without gapless modes \[Fig.4(b)\]. It is applicable also to certain integer QH systems, say at $`\nu =2,4,\mathrm{\dots }`$, where there are no quasielectrons: Here, electrons are activated with quasiholes left behind.
### A Thermal Process
At finite temperature $`T`$, thermal spin fluctuation occurs with a rate proportional to the Boltzmann factor $`\mathrm{exp}[-E_{\text{pair}}(𝒌)/k_BT]`$ with (24). A well-separated quasiparticle pair ($`r\to \mathrm{\infty }`$) is created with rate $`\mathrm{exp}[-(\mathrm{\Delta }_{\text{qh}}+\mathrm{\Delta }_{\text{qe}})/k_BT]`$, where use was made of (22).
Thermal activation of quasiparticles is greatly enhanced in the presence of impurities bearing electric charges \[Fig.5\]. An impurity creates a Coulomb potential around it. For definiteness we assume that it has a positive charge. As we have seen in Section III, thermal spin fluctuation is regarded as a creation of a quasihole-quasielectron pair. The pair may be broken near an impurity because a quasielectron is attracted by the Coulomb force due to the impurity and a quasihole is expelled by it. The activation energy is given by (2), where $`\mathrm{\Gamma }_{\text{offset}}`$ is the energy gain (5) when the quasiparticle is trapped by a charged impurity \[Fig.5(b)\]. When a quasielectron is trapped by an impurity, only a quasihole moves and contributes to an Ohmic current \[Fig.5(a)\].
We estimate the number density of quasiparticles in thermal equilibrium at temperature $`T`$. On one hand, activated from the ground state near an impurity, a quasiparticle is transferred to the center of the impurity \[Fig.5(b)\]. The height of the potential barrier to jump over is $`A^{*}+\mathrm{\Delta }_{\text{gap}}`$. The transition rate is
$$R_{\uparrow }=c\rho _0\mathrm{exp}\left(-\frac{A^{*}+\mathrm{\Delta }_{\text{gap}}}{k_BT}\right),$$
(26)
where $`c`$ is a constant depending on the density of impurities. On the other hand, recombined with a quasihole, a quasielectron is transferred back to the ground state. The height of the potential barrier to jump over is $`A^{*}`$. The transition rate is
$$R_{\downarrow }=n_{\text{qh}}n_{\text{qe}}\sigma _{\text{pair}}\mathrm{exp}\left(-\frac{A^{*}}{k_BT}\right),$$
(27)
where $`n_{\text{qh}}`$ and $`n_{\text{qe}}`$ are the number densities of quasiholes and quasielectrons; $`\sigma _{\text{pair}}`$ is a certain cross section. When the system is at thermal equilibrium there exists a detailed balance between these two transitions, $`R_{\uparrow }=R_{\downarrow }`$, from which we derive
$$n_{\text{qh}}n_{\text{qe}}=\frac{c\rho _0}{\sigma _{\text{pair}}}\mathrm{exp}\left(-\frac{\mathrm{\Delta }_{\text{gap}}}{k_BT}\right).$$
(28)
Since quasiholes and quasielectrons are activated in pairs, we find
$$n_{\text{qh}}=n_{\text{qe}}=n_0\mathrm{exp}\left(-\frac{\mathrm{\Delta }_{\text{gap}}}{2k_BT}\right),$$
(29)
at the center of the plateau, where $`n_0=\sqrt{c\rho _0/\sigma _{\text{pair}}}`$. The Ohmic current is given by the formula (1) with (2) since it is proportional to the number density of quasiparticles.
The QH system is unstable when the gap energy $`\mathrm{\Delta }_{\text{gap}}`$ becomes negative. QH states break down when
$$\mathrm{\Delta }_{\text{qh}}+\mathrm{\Delta }_{\text{qe}}<\mathrm{\Gamma }_{\text{offset}}.$$
(30)
The excitation energy of the pair decreases as the magnetic field decreases. The critical magnetic field is derived from (30),
$$B_{\perp }^{*}=m^2\frac{16\pi ^2\epsilon ^2\hbar }{e^3\alpha _{\text{pair}}^{1/m}}|V_{\text{imp}}(0)|^2,$$
(31)
for vortex activation (7) at $`\nu =n/m`$. QH states do not exist for $`B<B_{\perp }^{*}`$. This is consistent with typical data \[Fig.1\].
### B Tunneling Process
We have so far considered a purely thermal process of pair creation. However, a tunneling process enhances thermal activation at sufficiently low temperature. When a pair of quasiparticles acquires an energy $`\mathrm{\Delta }_{\text{gap}}`$ thermally, it can tunnel across the potential barrier with height $`A^{*}`$ as in Fig.5. The transition rate is
$$R_{\uparrow }^{\text{tunnel}}=c\rho _0e^{-S_{\text{tunnel}}/\hbar }\mathrm{exp}\left(-\frac{\mathrm{\Delta }_{\text{gap}}}{k_BT}\right),$$
(32)
where $`S_{\text{tunnel}}`$ is the Euclidean action for the tunneling process. It depends on the height $`A^{*}`$ and the range $`r_0`$. It is obvious that the transition rate (32) dominates the rate (26) as $`T\to 0`$. The rate of the recombination process is still given by (27), because of the plateau in the potential for $`r>r_0`$ in Fig.5. The detailed balance implies
$$R_{\uparrow }+R_{\uparrow }^{\text{tunnel}}=R_{\downarrow },$$
(33)
with (26), (27) and (32), from which we obtain
$`n_{\text{qh}}=n_{\text{qe}}`$ (34)
$`=n_0\mathrm{exp}\left(-{\displaystyle \frac{\mathrm{\Delta }_{\text{gap}}}{2k_BT}}\right)\left[1+e^{-S_{\text{tunnel}}/\hbar }\mathrm{exp}\left({\displaystyle \frac{A^{*}}{k_BT}}\right)\right]^{1/2}.`$ (35)
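For clarity, we spell out the algebra leading to (35). Written out, the balance condition (33) reads
$$c\rho _0\mathrm{exp}\left(-\frac{A^{*}+\mathrm{\Delta }_{\text{gap}}}{k_BT}\right)+c\rho _0e^{-S_{\text{tunnel}}/\hbar }\mathrm{exp}\left(-\frac{\mathrm{\Delta }_{\text{gap}}}{k_BT}\right)=n_{\text{qh}}n_{\text{qe}}\sigma _{\text{pair}}\mathrm{exp}\left(-\frac{A^{*}}{k_BT}\right),$$
and dividing both sides by $`\sigma _{\text{pair}}\mathrm{exp}(-A^{*}/k_BT)`$ gives $`n_{\text{qh}}n_{\text{qe}}=(c\rho _0/\sigma _{\text{pair}})e^{-\mathrm{\Delta }_{\text{gap}}/k_BT}[1+e^{-S_{\text{tunnel}}/\hbar }e^{A^{*}/k_BT}]`$, which is the square of (35) with $`n_0=\sqrt{c\rho _0/\sigma _{\text{pair}}}`$.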
This formula contains two energy scales $`\mathrm{\Delta }_{\text{gap}}`$ and $`A^{*}`$. We have fitted typical data due to Boebinger et al. in Fig.6. In so doing we have determined $`\mathrm{\Delta }_{\text{gap}}`$ by our theoretical formula (10), $`\mathrm{\Delta }_{\text{gap}}=\mathrm{\Delta }_{\text{gap}}^{1/3}`$ with use of $`\mathrm{\Gamma }_{\text{offset}}=\frac{1}{3}e|V_{\text{imp}}(0)|=6.8`$ K. The theoretical curve (for $`B=8.9`$ T) is obtained by using $`\mathrm{\Delta }_{\text{gap}}\simeq 1.7`$ K, $`A^{*}\simeq 0.69`$ K and $`S_{\text{tunnel}}/\hbar \simeq 2.0`$. The theoretical curve (for $`B=20.9`$ T) is obtained by using $`\mathrm{\Delta }_{\text{gap}}\simeq 6.1`$ K, $`A^{*}\simeq 3.7`$ K and $`S_{\text{tunnel}}/\hbar \simeq 4.0`$. The tunneling process makes an important contribution at strong magnetic field because $`A^{*}`$ becomes larger. These curves explain quite well the temperature dependence of the minimum of the longitudinal resistance $`\rho _{xx}`$.
## V Discussions
We have analyzed semiclassically a mechanism of thermal and tunneling pair creation of quasiparticles in the presence of impurities. Our formula (2) with (5) accounts for the experimental data quite well. The impurity effect is summarized in the parameter $`Z/d_{\text{imp}}`$ in (4). We list characteristic features at various filling factors.
(A) Experimental data by Boebinger et al. at fractional filling factors are explained by excitation of vortices with $`Z/d_{\text{imp}}\simeq 1/650(\AA )`$. Activation energy is rather insensitive to samples.
(B) Experimental data by Usher et al. at $`\nu =2`$ are explained by excitation of electrons into a higher Landau level with $`Z/d_{\text{imp}}\simeq 2/650(\AA )`$. Activation energy is rather insensitive to samples.
(C) Experimental data by Schmeller et al. at $`\nu =1`$ are explained by excitation of skyrmions with $`Z/d_{\text{imp}}\simeq 2.5/650(\AA )`$. Activation energy is sensitive to sample mobilities.
These numbers ($`Z=1`$–$`3`$ and $`d_{\text{imp}}\simeq 650\AA `$) are quite reasonable. If we take the results seriously, it seems that only skyrmions are sensitive to sample mobilities. This might be related to the fact that the skyrmion has no intrinsic size. However, we wish to urge caution. First of all, our semiclassical analysis is a first-order approximation to the problem, and further improvements will be necessary. For instance, we have assumed that quasiparticles are pointlike objects to derive the gap-energy formula (2) with (5). It is clear in Fig.5 that the formula should be modified when the overlap of quasihole and quasielectron is not negligible at their dissociation range $`r_0`$. Second, experimental data are taken from different sources at different dates. It is necessary to make careful experiments by using a single sample to determine the parameter $`Z/d_{\text{imp}}`$ at various filling factors. We wish to propose such experiments.
We have pointed out the importance of the tunneling process in thermal activation at sufficiently low temperature and at strong magnetic field. It is remarkable that the temperature dependence of the minimum of the longitudinal resistance $`\rho _{xx}`$ is fitted excellently by our formula (6) over a wide range of temperature. This formula is very different from any of the previously proposed ones . QH systems may acquire additional interest from the importance of the tunneling process. We would like to make a quantitative analysis of this tunneling process in a future report.
|
no-problem/9907/cond-mat9907362.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
Magnetic excitation spectra of colossal magnetoresistance (CMR) manganites in the ferromagnetic metal phase attract attention as to whether they can be understood by the conventional double-exchange (DE) mechanism. For (La,Sr)MnO<sub>3</sub> and (La,Pb)MnO<sub>3</sub> where $`T_\mathrm{c}`$ is relatively high, a cosine-band type magnon dispersion is observed . At low temperature, the magnon linewidth $`\mathrm{\Gamma }`$ is narrow enough throughout the Brillouin zone, which makes it possible to observe well-defined magnon branches, and it becomes broad at finite temperature. The DE model explains the cosine-band dispersion as well as the temperature dependence of the linewidth in the form $`\mathrm{\Gamma }\propto (1-M^2)\omega _q`$, where $`M`$ is the magnetization normalized by the saturation value and $`\omega _q`$ is the magnon dispersion . The origin of the magnon broadening is the Stoner absorption, which disappears at $`T\to 0`$ (or $`M\to 1`$) due to the half-metallic nature of the system.
For compounds with lower $`T_\mathrm{c}`$, Doloc et al. observed broadening of the magnon dispersion. They claimed that the abrupt increase of linewidth near the zone boundary cannot be explained by the DE mechanism alone. One of the possible explanations is that the broadening is caused by the magnon-phonon interaction . A strong coupling between magnons and phonons arises through the modulation of the exchange coupling by the lattice displacement.
Anomalous broadening of the magnon linewidth is also observed in the double-layered manganite La<sub>1.2</sub>Sr<sub>1.8</sub>Mn<sub>2</sub>O<sub>7</sub> . Intra double-layer coupling creates optical and acoustic branches of magnons. The two-dimensional dispersion of both branches indicates that the inter double-layer coupling is sufficiently weak. Magnon broadening near the zone boundary is also observed in this compound. In this paper we investigate the possibility that this broadening is caused by the magnon-phonon interaction.
## 2 Comparison between theory and experiment
For a dispersionless optical phonon with frequency $`\mathrm{\Omega }_0`$, the magnon linewidth due to the magnon-phonon interaction is given by $`\mathrm{\Gamma }(q)\propto D(\omega _q-\mathrm{\Omega }_0)`$, where $`D(\omega )`$ is the magnon density of states . In a two-dimensional system, we have the step-function-like behavior
$$\mathrm{\Gamma }(q)=\{\begin{array}{cc}\mathrm{\Gamma }_0\hfill & \omega _q>\mathrm{\Omega }_0\hfill \\ 0\hfill & \omega _q<\mathrm{\Omega }_0\hfill \end{array}.$$
(1)
When a magnon with momentum $`q`$ has energy $`\omega _q>\mathrm{\Omega }_0`$, it is possible to find an elastic channel to decay into a magnon-phonon pair with momenta $`q^{\prime }`$ and $`q-q^{\prime }`$, respectively, satisfying $`\omega _q=\omega _{q^{\prime }}+\mathrm{\Omega }_0`$. This is the reason why the magnon linewidth abruptly becomes broad as the magnon branch crosses that of the phonon.
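A small numerical illustration of this mechanism follows; the cosine band below, with a 40 meV zone-boundary energy, is an assumed nearest-neighbor form, not a fit to any particular compound:

```python
import numpy as np

W, Omega0 = 40.0, 20.0            # meV, assumed band width and phonon energy

def omega(qx, qy):                # q in units of the zone-boundary wavevector
    return (W / 4.0) * (2.0 - np.cos(np.pi * qx) - np.cos(np.pi * qy))

# 2D magnon density of states from a Brillouin-zone histogram
q = np.linspace(-1.0, 1.0, 401)
QX, QY = np.meshgrid(q, q)
dos, _ = np.histogram(omega(QX, QY).ravel(), bins=80, range=(0.0, W), density=True)

def gamma(qx, qy):
    w = omega(qx, qy) - Omega0    # energy left for the final-state magnon
    if w <= 0.0:
        return 0.0                # below the phonon: no elastic decay channel
    return dos[min(int(w / W * 80), 79)]

for qq in (0.3, 0.45, 0.5, 0.55, 0.7, 1.0):   # along the zone diagonal
    print(qq, round(omega(qq, qq), 1), 'meV  Gamma ~', round(gamma(qq, qq), 4))
```

The printed linewidth vanishes until the magnon branch crosses $`\mathrm{\Omega }_0`$ and then jumps to a value set by the two-dimensional magnon density of states, as in Eq. (1).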
Let us now compare the theoretical results with experimental data. We show inelastic neutron scattering intensities for La<sub>1.2</sub>Sr<sub>1.8</sub>Mn<sub>2</sub>O<sub>7</sub> in Fig. 1, where a contour map is plotted in the $`\omega `$-$`q`$ plane. The scattering vector is taken as $`(1+q,0,5)`$ in reciprocal lattice units. Details of the experiment are given in ref. . A well-defined acoustic magnon branch is observed near the zone center. We also see an optical phonon which is nearly dispersionless at $`\omega \simeq 20\mathrm{meV}`$. Above $`q\simeq 0.3`$, where the magnon and phonon branches cross, we see an abrupt increase of the magnon linewidth. A weak trace of the dispersion is observed above the crossing point.
The data are consistently explained as follows. The magnon dispersion is cosine-band like with a zone-boundary energy of $`\simeq 40\mathrm{meV}`$, and it crosses the optical phonon at $`\mathrm{\Omega }_0\simeq 20\mathrm{meV}`$. A strong coupling between magnons and phonons creates the abrupt magnon broadening above the crossing point.
## 3 Discussion
Magnon dispersions so far observed in the ferromagnetic metal phase of manganites are well defined near the zone center regardless of compounds and dimensionalities. Zone-boundary broadening is, however, strongly compound dependent. The present result suggests that the zone-boundary magnon broadening is influenced by the strength of the magnon-phonon interactions. Although magnon-phonon dispersion crossing is also reported in three-dimensional manganites , zone-boundary broadening is observed only in low $`T_\mathrm{c}`$ compounds. This implies a relation between $`T_\mathrm{c}`$ and the spin-lattice interaction strength. Strong damping of the zone-boundary magnons might also explain the “zone-boundary softening” of magnons in low $`T_\mathrm{c}`$ manganites , if we assume that the zone-boundary flat dispersion observed by neutron inelastic scattering is assigned to an optical phonon branch, while the real zone-boundary magnon branch at higher frequency is wiped out above the magnon-phonon crossing point.
Further detailed studies of the relations between the magnon linewidth broadening above the magnon-phonon crossing point and the other magneto-elastic behaviors will clarify the role of spin-lattice interactions in various physical properties.
N.F. thanks J. Fernandez-Baca for discussion. K.H. acknowledges H. Fujioka, M. Kubota, H. Yoshizawa, Y. Moritomo and Y. Endoh for experimental collaborations. This work is partially supported by Mombusho Grant-in-Aid for Priority Area.
## Figure captions.
Figure 1:
The dispersion relation of the acoustic branch of the spin wave of La<sub>1.2</sub>Sr<sub>1.8</sub>Mn<sub>2</sub>O<sub>7</sub> at 10 K ($`I4/mmm`$: $`a=3.87`$ Å, $`c=20.1`$ Å). Measurements were carried out on the triple-axis spectrometer TOPAN located in the JRR-3M reactor of JAERI. The PG (002) reflection of pyrolytic graphite was used to monochromate and analyze the neutrons. Data were taken at every 1 meV and 0.05 rlu (reciprocal lattice unit) along $`(1+q\mathrm{0\; 5})`$ and accumulated for 7 min. Contours are drawn every 20 counts between 0 and 400. A nearly dispersionless optical phonon branch is also observed at $`\omega \simeq 20`$ meV.
|
no-problem/9907/cond-mat9907007.html
|
ar5iv
|
text
|
# Effect of the Equivalence Between Topological and Electric Charge on the Magnetization of the Hall Ferromagnet.
## Abstract
The dependence on temperature of the spin magnetization of a two-dimensional electron gas at filling factor unity is studied. Using classical Monte Carlo simulations we analyze the effect that the equivalence between topological and electrical charge has on the behavior of the magnetization. We find that at intermediate temperatures the spin polarization increases by about thirty per cent due to the Hartree interaction between charge fluctuations.
PACS numbers: 73.40.Hm, 73.20.Dx, 73.20.Mf The physics of a two-dimensional electron gas (2DEG) in a magnetic field, $`B`$, is determined by the number of Landau levels occupied by the electrons. Since the degeneracy of the Landau levels increases with $`B`$, all the electrons can be accommodated in the lowest Landau level for strong enough fields. At filling factor unity, $`\nu =1`$, and zero temperature, $`T=0`$, the ground state of the 2DEG is an itinerant ferromagnet. The Zeeman coupling between the electron spin and the magnetic field determines the orientation of the ferromagnet polarization. For zero Zeeman coupling the interaction between the carriers produces a spontaneous spin magnetic moment.
Recently the magnetization of the quantum Hall system was measured for filling factors $`0.66<\nu <1.76`$ and temperatures $`1.55K<T<20K`$. At $`\nu =1`$ and very low temperatures the system is fully polarized, while for other filling factors the magnetization is reduced. The demagnetization for $`\nu \ne 1`$ is related to the existence of Skyrmions in the system. Further experiments have verified the existence of Skyrmions using transport, capacitance and optical experiments.
For $`\nu =1`$, the temperature dependence of the magnetization $`M(T)`$ has been measured using NMR techniques and magneto-optical absorption experiments. Different theoretical approaches have been used for the study of $`M(T)`$: i) Read and Sachdev have studied the $`N\to \mathrm{\infty }`$ limit of a quantum continuum field theory model for the spin vector field, $`𝐦(𝐫)`$. This model describes the long-wavelength collective behavior of the electronic spins. This work has been extended by Timm et al. The field theory is expected to be accurate at low temperatures and weak Zeeman coupling. Using $`SU(N)`$ and $`O(N)`$ symmetries in the large $`N`$ limit, and using the spin stiffness $`\rho `$ as a parameter, Read and Sachdev obtained results for $`M(T)`$ which are in reasonably good agreement with the experimental data at low temperatures. ii) Kasner and MacDonald calculated $`M(T)`$ using many-particle diagrammatic techniques which include spin-wave excitations and the electron spin-wave interaction. This theory is a good improvement on the one-particle Hartree-Fock theory, but it gives a polarization for the system that is too high compared with the experimental one. Progress in the diagrammatic approach, including temperature-dependent screening, has been made by Haussman. iii) The dependence of the magnetization on $`T`$ has also been obtained by exact diagonalization of the many-particle Hamiltonian for a small (up to 9) number of electrons on a sphere. These calculations have important finite-size effects at low temperature and weak Zeeman coupling. iv) Finally, quantum Monte Carlo (MC) techniques have been used in order to calculate $`M(T)`$ for a spin 1/2 quantum Heisenberg model on a square lattice with exchange interactions adjusted to reproduce the spin stiffness of the quantum Hall ferromagnet. These calculations are essentially exact and probably free of finite-size effects.
A unique property of the quantum Hall ferromagnets is the equivalence between the topological charge associated with $`𝐦`$ and the electrical charge. This equivalence make the Skyrmions to be the relevant charged excitation of the 2DEG at $`\nu =1`$. Charge conservation implies that at a given $`\nu `$ the integral of the topological charge over all the space should be constant, independently of $`T`$. At $`\nu =1`$ this constant is zero. A spin vector field texture produces a modulation of the topological charge density. Spatial spin fluctuations increase with temperature and produce a modulation of the topological charge density. In this way thermal fluctuations can produce a strong charge fluctuation. Because of the equivalence between topological and real charge, the modulation of the charge density costs Hartree energy. The models described above: diagrammatic techniques, quantum field theory and quantum MC calculations, do not take into account the Hartree contribution to the energy of the ferromagnet.
In this work we study the effect that the Hartree energy has on the temperature dependence of the magnetization. We perform classical MC simulations of $`M(T)`$ for the energy functional of the Hall ferromagnet. From the comparison of the results with and without the Hartree energy we conclude that at moderate temperatures the inclusion of this term modifies the value of the magnetization by up to thirty per cent. For realistic values of the Zeeman coupling, we find that at intermediate-high temperature the Hartree energy is close to one third of the Zeeman and exchange energies.
The classical model cannot correctly describe the low-temperature behavior of $`M(T)`$. The classical dynamics of the electron spins neglects several effects, in particular the quantum description of the spin-density waves, which is extremely important for describing $`M(T)`$ at low temperatures. The inclusion, in the classical model, of a temperature-dependent low-energy cutoff simulates quantum effects. As we show later, in the Hall ferromagnet this cutoff appears naturally when discretizing the continuum model. In this work we are interested in the effect that the inclusion of the Hartree term has on $`M(T)`$. We expect that in the quantum Heisenberg model, the Hartree energy should have the same effect as in the classical model.
The long-wavelength and low-energy properties of the $`\nu =1`$ Hall ferromagnet can be described by a functional $`E`$ of the unit vector field $`𝐦(𝐫)`$ which describes the local orientation of the spin magnetic order. The functional $`E`$ has three terms: the leading gradient or exchange term $`E_x`$, the Zeeman term $`E_z`$ and the Hartree term $`E_c`$.
$`E`$ $`=`$ $`E_x+E_z+E_c`$ (1)
$`E_x`$ $`=`$ $`{\displaystyle \frac{\rho }{2}}{\displaystyle \int d^2r\left(\nabla 𝐦\right)^2}`$ (2)
$`E_z`$ $`=`$ $`{\displaystyle \frac{t}{2\pi \ell ^2}}{\displaystyle \int d^2r[1-m_z(𝐫)]}`$ (3)
$`E_c`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \int d^2rd^2r^{}n(𝐫)n(𝐫^{})v(|𝐫-𝐫^{}|)},`$ (4)
here $`t=g^{}\mu _BB/2`$ is the Zeeman coupling strength, $`\ell `$ is the magnetic length, $`v(|𝐫-𝐫^{}|)`$ is the Coulomb interaction and $`n(𝐫)`$ is the charge density, given by
$$n(𝐫)=\frac{1}{8\pi }ϵ_{\nu \mu }𝐦(𝐫)\cdot \left[\partial _\nu 𝐦(𝐫)\times \partial _\mu 𝐦(𝐫)\right].$$
(5)
In this continuum model the total topological charge, $`Q`$, is given by the integral of $`n(𝐫)`$ over all space, and it represents the number of times $`𝐦(𝐫)`$ winds around the sphere $`S^2`$. Skyrmions are nontrivial extremal solutions of the functional $`E`$ with $`Q\ne 0`$. The continuum model gives a good description of Skyrmions with moderate and large spins.
In order to perform classical MC calculations we discretize space using a square lattice with lattice parameter $`a_L`$. The lattice parameter corresponding to one electron per unit cell is $`a_1=\sqrt{2\pi }\ell `$. The energy functional has the form,
$$E=-\rho \underset{<i,j>}{\sum }𝛀_i\cdot 𝛀_j-t\frac{a_L^2}{a_1^2}\underset{i}{\sum }\left(\mathrm{\Omega }_{z,i}+1\right)+\frac{1}{2}\underset{i,j}{\sum }q_iV_{i,j}q_j.$$
(6)
Here, $`𝛀_i`$ is the unit vector at site $`i`$, $`q_i`$ is the topological charge attached to the unit cell $`i`$ and $`V_{i,j}`$ is the Coulomb interaction between two unit charges distributed uniformly in the cells $`i`$ and $`j`$. The cell $`i`$ is defined by the points 1:($`i_x`$,$`i_y`$), 2:($`i_x`$+1,$`i_y`$), 3:($`i_x`$+1,$`i_y`$+1) and 4:($`i_x`$,$`i_y`$+1). The expression for $`q_i`$ as a function of the unit vectors at the points 1-4 is
$$q_i=\frac{1}{4\pi }\left\{(\sigma A)(𝛀_1,𝛀_2,𝛀_3)+(\sigma A)(𝛀_1,𝛀_3,𝛀_4)\right\},$$
(7)
where $`(\sigma A)(𝛀_1,𝛀_2,𝛀_3)`$ denotes the signed area of the spherical triangle with corners $`𝛀_1,𝛀_2`$ and $`𝛀_3`$. Apart from a set of 'exceptional' configurations of measure zero, this prescription for the topological charge yields well-defined integer values for the total topological charge. For smooth spin textures the continuum and discrete expressions for the density of topological charge give the same value. In order to analyze the effect of the Hartree term we also study the functional $`E_0=E_x+E_z`$, which is the classical version of the quantum Heisenberg Hamiltonian studied by Henelius et al. In that work the Hartree term is not taken into account.
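For concreteness, the signed area can be evaluated directly from the three unit vectors. The sketch below is our illustration of Eq. (7), assuming the standard Berg-Lüscher form for the signed spherical area (the text does not spell out the formula) and periodic boundary conditions:

```python
import numpy as np

def signed_area(o1, o2, o3):
    """Signed area of the spherical triangle spanned by three unit vectors.

    Berg-Luscher form: tan(A/2) = triple product / (1 + pairwise dots);
    atan2 fixes the sign and branch, so A lies in (-2*pi, 2*pi]."""
    num = np.dot(o1, np.cross(o2, o3))
    den = 1.0 + np.dot(o1, o2) + np.dot(o2, o3) + np.dot(o3, o1)
    return 2.0 * np.arctan2(num, den)

def cell_charge(O, ix, iy):
    """Topological charge q_i of cell i, Eq. (7): the two spherical triangles
    built on the corners 1:(ix,iy), 2:(ix+1,iy), 3:(ix+1,iy+1), 4:(ix,iy+1).
    O is the spin array of shape (L, L, 3), assumed periodic."""
    L = O.shape[0]
    o1 = O[ix % L, iy % L]
    o2 = O[(ix + 1) % L, iy % L]
    o3 = O[(ix + 1) % L, (iy + 1) % L]
    o4 = O[ix % L, (iy + 1) % L]
    return (signed_area(o1, o2, o3) + signed_area(o1, o3, o4)) / (4.0 * np.pi)
```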
A comment about the significance of $`a_L`$ is in order. In two dimensions the exchange term and the topological charge are scale invariant and do not depend on $`a_L`$. The Zeeman energy increases quadratically with the lattice parameter and $`V_{i,j}`$ is inversely proportional to $`a_L`$. Increasing $`a_L`$ is therefore similar to increasing the Zeeman strength, and the lattice parameter acts as a low-energy cutoff that controls the dynamics of the classical Heisenberg model. We obtain the value of $`a_L`$ by fitting the magnetization obtained from the functional $`E_0`$ to the magnetization obtained from quantum MC simulations. In this way we obtain a temperature-dependent $`a_L`$. In the $`T\to \mathrm{\infty }`$ limit $`a_L`$ should be $`a_1`$.
Now we briefly describe the MC procedure used for obtaining $`M(T)`$. The MC simulations were performed using the techniques due to Metropolis et al. We study an $`N\times N`$ cluster with periodic boundary conditions (PBC). The use of PBC diminishes the finite-size effects. For studying $`M(T)`$ at $`\nu =1`$ we consider as a starting configuration a completely ordered ferromagnet, i.e. $`𝛀_i=(0,0,1)`$ for all sites $`i`$. In this way the initial $`Q`$ in the system is zero. The sites to be considered for a change in the spin orientation are randomly chosen, avoiding artificial correlations that could distort the results. Once a site is selected for a spin reorientation, we perform the following operations: i) a small change in the direction of $`𝛀_i`$; ii) calculation of the change induced in the topological charge by the spin reorientation, where only changes that do not modify $`Q`$ are accepted; iii) evaluation of the energy variation $`\mathrm{\Delta }E`$; iv) acceptance or rejection of the new spin direction: if $`\mathrm{\Delta }E<0`$ the change is accepted, while if $`\mathrm{\Delta }E>0`$ the change is accepted only if $`e^{-\mathrm{\Delta }E/kT}`$ exceeds a random number. We perform a large number of MC steps until the equilibrium situation is reached and then average the different statistical properties: the magnetization $`M(T)`$; the different contributions to the total energy per electron, $`<E_x>`$, $`<E_z>`$ and $`<E_c>`$; and a measure of the charge fluctuations,
$$\delta q=\sqrt{\frac{\underset{i}{\sum }q_i^2}{N\times N}}.$$
(8)
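For concreteness, one attempted update in this scheme, steps i)-iv) above, might be sketched as follows. Here `energy` and `total_charge` stand for helper routines evaluating Eq. (6) and the total topological charge (a production code would of course evaluate only the local changes in $`E`$ and in the charge of the four cells touching the site), and the step size `tilt` is an illustrative value, not one quoted in the text:

```python
import numpy as np

def metropolis_step(O, energy, total_charge, T, tilt=0.2,
                    rng=np.random.default_rng()):
    """One attempted spin reorientation; k_B = 1. Returns True if accepted."""
    L = O.shape[0]
    ix, iy = rng.integers(L, size=2)            # site chosen at random
    old_spin = O[ix, iy].copy()
    E_old, Q_old = energy(O), total_charge(O)
    # i) small change in the direction of Omega_i, renormalized to |Omega| = 1
    trial = old_spin + tilt * rng.normal(size=3)
    O[ix, iy] = trial / np.linalg.norm(trial)
    # ii) reject any move that changes the total topological charge Q
    if round(total_charge(O)) != round(Q_old):
        O[ix, iy] = old_spin
        return False
    # iii)-iv) Metropolis acceptance on the energy variation
    dE = energy(O) - E_old
    if dE <= 0.0 or rng.random() < np.exp(-dE / T):
        return True
    O[ix, iy] = old_spin
    return False
```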
In Fig. 1 we plot the magnetization as a function of temperature as obtained from the classical MC simulation for the functional $`E_0=E_x+E_z`$ with different values of the lattice parameter. The results correspond to a 2DEG of zero layer thickness, $`\rho =0.0249e^2/\ell `$ and a Zeeman coupling $`t=0.008e^2/\ell `$. The calculations are performed on a 20$`\times `$20 cluster. We have checked that for this cluster size and PBC the results are free of size effects. In the same figure we plot $`M(T)`$ as obtained from quantum MC simulations. That calculation does not include the Hartree term and is the quantum version of the functional $`E_0`$. By comparing the classical and the quantum results we can estimate $`a_L`$ in the different temperature ranges. We are interested in temperatures in the range $`0.075e^2/\ell <T<0.2e^2/\ell `$. At lower temperatures the quantum effects are very important, and at higher temperatures the effective functional Eq. (1) is no longer valid. In order to describe the dynamics in this range of temperatures, values of $`a_L`$ in the range $`2a_1<a_L<2.5a_1`$ are necessary. For simplicity, in the calculation we use a constant value of the lattice parameter, $`a_L`$=2.5$`a_1`$, the value in this range for which the effects of the Hartree interaction are weakest.
In Fig. 2 we plot $`M(T)`$ as obtained from the classical MC simulation for the full functional $`E=E_x+E_z+E_c`$ and for the functional without the Hartree energy, $`E_0=E_x+E_z`$. The results correspond to a lattice parameter $`a_L=2.5a_1`$. At low temperatures, $`T<0.05e^2/\ell `$, the spins are not very disordered, the charge modulation is very weak and therefore the Hartree term has a small effect on $`M(T)`$. For higher temperatures the spin fluctuations, and consequently the charge fluctuations, are stronger and the Hartree term becomes an important contribution to the internal energy. We find that at intermediate temperatures, in the range $`0.05e^2/\ell <T<0.15e^2/\ell `$, the magnetization obtained with the full functional $`E`$ is nearly thirty per cent higher than that obtained without the Hartree term. At even higher temperatures, $`T>0.15e^2/\ell `$, the spin disorder is very large and the magnetization calculated with or without the Hartree term is small, although $`M(T)`$ obtained with $`E`$ is always higher than that obtained with $`E_0`$.
As commented above, the quantum MC calculations correctly describe the spin-density waves, and at low temperatures they should give an appropriate $`M(T)`$ for the $`\nu =1`$ Hall ferromagnet. However, we expect that at intermediate temperatures the Hartree energy term would modify the quantum MC data by a similar amount as it modifies the classical results. In order to understand the experimental results at intermediate temperatures it is necessary to take into account the charge fluctuations induced by the temperature.
In Fig. 3 we plot the different contributions to the total energy per electron as a function of temperature. The parameters are the same as those used in Fig. 2. At very high temperatures the spins are completely random and the exchange and Zeeman energies tend to their fully disordered values, $`2\rho /a_L^2`$ and $`t`$ respectively. Note that at intermediate temperatures the Hartree energy is nearly one third of the Zeeman energy. For smaller values of the Zeeman coupling and high temperatures we have found that the Hartree energy can be the most important contribution to the internal energy.
Figure 4 shows the variation of the charge fluctuations $`\delta q`$ as a function of temperature. Charge fluctuations cost Hartree energy, and they are weaker when the full functional is considered. Observe that at intermediate temperatures $`\delta q`$ is of the order of 0.1, that is, $`10\%`$ of the charge in each cell.
In closing, we have studied the effect that the Hartree energy term has on the temperature dependence of the magnetization of a 2DEG at $`\nu =1`$. We find that at intermediate temperatures the spin fluctuations are weakened by the Hartree energy and the magnetization is nearly thirty per cent bigger than that obtained by neglecting the Hartree energy term. At intermediate temperatures the Hartree energy is an important contribution to the internal energy of the Hall ferromagnet.
We thank A. H. MacDonald, C. Tejedor, L. Martín-Moreno and J. J. Palacios for useful discussions. This work was supported by the CICyT of Spain under Contract No. PB96-0085 and by the Fundación Ramón Areces.
# A simple deterministic self-organized critical system
## Abstract
We introduce a new continuous cellular automaton that presents self-organized criticality. It is one-dimensional and totally deterministic, without any embedded randomness, not even in the initial conditions. This system is in the same universality class as the Oslo rice pile, boundary driven interface depinning and the train model for earthquakes. Although the system is chaotic, in the thermodynamic limit chaos occurs only at a microscopic level.
In 1987, Bak, Tang and Wiesenfeld showed that fractal behavior, that is, power-law distributions, can be observed in simple dissipative systems with many degrees of freedom without fine tuning of parameters. They called this phenomenon self-organized criticality (SOC). Until then, the studies of fractal structures were basically related to equilibrium systems where fractality appears only at special parameter values where a phase transition takes place.
Since the pioneering work of Bak et al., an enormous amount of numerical, theoretical and experimental work has been done on systems that present SOC. One of the most interesting experimental studies demonstrating the existence of SOC in Nature was done on a quasi-one-dimensional pile of rice by Frette et al. They found that the occurrence of SOC depends on the shape of the rice. Only with sufficiently elongated grains did avalanches with a power-law distribution occur. If the rice had little asymmetry, a distribution described by a stretched exponential was seen. Christensen et al. introduced a model for the rice pile experiment in which the local critical slope varies randomly between 1 and 2. They found that their model, known as the Oslo rice pile model, reproduced the experimental results well.
A good understanding of the Oslo system was achieved by Paczuski and Boettcher. They showed that it could be mapped exactly to a model for interface depinning in which the interface is slowly pulled at one end through a medium with quenched random pinning forces. They found that the height of the interface maps to the number of toppling events in the rice pile model. The critical exponents of the two models were identical (within the error bars), showing that they are in the same universality class. Paczuski and Boettcher also conjectured that the train model for earthquakes, which was introduced by Burridge and Knopoff, and studied in detail in , is also in that same universality class. The train model is the only model that we know of (besides the one we introduce here) that presents SOC and has no kind of embedded randomness. However, it is governed by coupled ordinary differential equations (ODE's), which makes its study very time consuming.
A way of making a system governed by ODE's more amenable to computer simulations is to discretize it in time. This was done by Olami, Feder and Christensen (OFC), who introduced a continuous cellular automaton (CCA) to study the two-dimensional version of another Burridge and Knopoff model for earthquakes. \[A continuous cellular automaton in SOC is known in chaos theory as coupled lattice maps. These systems are characterized by having space and time variables defined in the domain of real and integer numbers, respectively.\] In the OFC model, SOC is seen only in systems that have a geometry with dimensionality of at least two. That system is a variation of a model introduced by Nakanishi, which has a one-dimensional geometry. However, the model introduced by Nakanishi does not present SOC, since the power-law distribution it presents has an upper cutoff that is unrelated to the system size.
Here we introduce a new self-organized critical system that is governed by a CCA (that is, the space is continuous and the time is discrete). It is one-dimensional and has no embedded randomness, not even in the initial conditions. We will show that our model belongs to the same universality class as the Oslo rice pile, boundary driven interface depinning and the train model. The importance of our results comes from the fact that we show that it is possible to map stochastic SOC systems to simple, discrete, chaotic systems in which no randomness exists. Such an equivalence of a chaotic, deterministic model with no embedded randomness with a stochastic model also occurs between the deterministic Kuramoto-Sivashinsky equation and the Langevin equation proposed by Kardar, Parisi and Zhang. In our opinion, the train model governed by ODE's already achieved this. However, because its simulation is very time consuming, it will probably be impossible to find such equivalences for higher dimensional (two, three, etc.) systems unless the system is discretized in time, as we do here. In fact, we are unaware of any studies on train-like systems with dimensionality higher than one. To the best of our knowledge, our model is the only SOC system introduced so far that is one-dimensional, totally deterministic and with discrete time.
Another important result of this paper concerns the fact that we show that although chaos exists in the model, it decreases as the system size increases, and in the thermodynamic limit it exists only at a microscopic level. Consequently, our results indicate that the fractal structures seen in Nature and supposedly associated with SOC could in principle result solely from nonlinearities in those systems, without any need for the presence of random irregularities. Such fundamental questions are also found in equilibrium statistical mechanics, where it is unknown whether probability theory is only a tool to describe phenomena that in principle could have originated solely from microscopic chaos.
The train model is shown in Fig. 1(a). It consists of a chain of blocks connected by harmonic springs. The blocks rest on a rough surface with friction, and the first block is pulled slowly with a constant velocity by a driver. The dynamics of the model is as follows: suppose that all the blocks are initially at rest. As the driver pulls the first block, it remains stuck until the elastic force overcomes the static friction. When this occurs, the first block will move a little and will be stopped again by friction. Such small events (or earthquakes) will continue and will increase the elastic force on the second block. There will be a moment when the elastic force on the second block overcomes the friction force, and then we will see an event involving two blocks. This kind of dynamics will continue, with events involving three, four, or all the blocks in the system.
The elastic force on block $`i`$ is given by $`f_i=x_{i-1}-2x_i+x_{i+1}`$, where $`x_i`$ is the displacement of block $`i`$ with respect to its equilibrium position (without loss of generality, the spring constant can be taken equal to 1). The boundary conditions are $`x_0=0`$ and $`x_{L+1}=x_L`$. After an earthquake in which block $`i`$ was displaced by $`\mathrm{\Delta }x`$, the elastic forces on block $`i`$ and on its nearest neighbors will be $`f_i^{}=f_i-\mathrm{\Delta }f`$ and $`f_{i\pm 1}^{}=f_{i\pm 1}+\mathrm{\Delta }f/2`$, respectively, where $`\mathrm{\Delta }f=2\mathrm{\Delta }x`$. Thus, the force that is relaxed on block $`i`$ is redistributed equally to its nearest neighbors, implying conservation of elastic forces. This is embedded in the geometry of the system. However, the model does dissipate energy through friction between the rough surface and the blocks. Consequently, the model is conservative with respect to elastic forces and dissipative with respect to energy. This is one of the main distinctions between the train model and the other Burridge and Knopoff earthquake model studied by Nakanishi and OFC, in which neither the energy nor the forces are conserved.
In the discretized version of the train model that we introduce here, each block $`i`$ is characterized by a variable $`f_i`$, which we will call force, with $`i=1,\mathrm{},L`$, and $`L`$ being the number of blocks in the system. The boundary conditions are the same as the ones in the train model, which are given above. The dynamical evolution of the system is determined by the following algorithm:
(1) Start the system by defining initial values for the variables $`f_i`$, which can be the same for all the blocks, so that they are below a chosen, fixed threshold $`f_{th}`$.
(2) Update the force on the first block by setting it to the threshold value plus a fixed small value $`\delta f`$, i.e., $`f_1=f_{th}+\delta f`$ (an event is going to be triggered).
(3) Check the forces on each block. If a block $`i`$ has $`f_i\ge f_{th}`$, update $`f_i`$ according to $`f_i^{}=\varphi (f_i-f_{th})`$, where $`\varphi `$ is a given nonlinear function that has a parameter $`a`$. Increase the forces on its two nearest neighboring blocks according to $`f_{i\pm 1}^{}=f_{i\pm 1}+\mathrm{\Delta }f/2`$, where $`\mathrm{\Delta }f=f_i-f_i^{}`$.
(4) If $`f_i^{}<f_{th}`$ for all the blocks, go to step (2) (the event has finished). Otherwise, go to step (3) (the event is still evolving).
One can use either parallel or sequential update in the evolution of the system. We have verified that the critical exponents of the model do not depend on the type of update chosen. The system is governed by $`L`$ variables and has two parameters, $`a`$ and $`\delta f`$, since without loss of generality we can take $`f_{th}=1`$. The force in our model is supposed to mimic the combination of two forces in the train model, namely the elastic and the friction forces. The elastic force is periodic, whereas the friction force in simulations is generally assumed to decrease with the velocity of the block. We have found numerically that $`\varphi `$ mimics the combination of these two forces when it is a periodic function, since only then does the system present SOC behavior. So the periodicity of the elastic force dominates over the form of the friction force. The models introduced by Nakanishi and OFC assume that $`\varphi `$ is a strictly nonincreasing function. We have found that if we use a strictly nonincreasing function for $`\varphi `$, such as the one used in , we observe in our model the presence of stretched exponentials instead of power laws. It is worth noting that in this situation our model reproduces the distributions found with nearly round rice in , which were also governed by stretched exponentials.
The functional form we chose for $`\varphi `$ is shown in Fig. 1(b) and is given by $`\varphi (x)=1-a[x]`$, where $`[x]`$ denotes $`x\mathrm{modulo}1/a`$, that is, a sawtooth function. However, we have tested several other periodic functions and found that the SOC behavior we show here remains, that is, the results are robust, the essential ingredient being periodicity (not necessarily a perfect one) of $`\varphi `$.
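For concreteness, steps (2)-(4) with this sawtooth can be written compactly. The sketch below is our illustration, with parallel update and the parameter values quoted later in the text; the bookkeeping of force leaving through the two ends of the chain is our simplifying assumption (lost to the wall at the driven end, $`x_0=0`$, and reflected at the free end, $`x_{L+1}=x_L`$) and may differ in detail from the actual implementation:

```python
import numpy as np

def drive_and_relax(f, a=4.0, f_th=1.0, df=0.1):
    """One driving cycle of the automaton: steps (2)-(4), parallel update.
    phi(x) = 1 - a*(x mod 1/a) is the sawtooth of Fig. 1(b).
    Returns the updated force array and the event size s (total number
    of block updates)."""
    f = f.copy()
    f[0] = f_th + df                         # step (2): drive the first block
    s = 0
    while True:
        over = f >= f_th                     # step (3): blocks at threshold
        if not over.any():                   # step (4): event has finished
            return f, s
        s += int(over.sum())
        phi = 1.0 - a * np.mod(f - f_th, 1.0 / a)
        relaxed = np.where(over, f - phi, 0.0)   # Delta f of toppled blocks
        f = f - relaxed
        f[1:] += relaxed[:-1] / 2.0          # half of Delta f to each
        f[:-1] += relaxed[1:] / 2.0          # nearest neighbor
        f[-1] += relaxed[-1] / 2.0           # reflection at the free end

# usage: step (1), a uniform start below threshold, then repeated driving
f = np.full(512, 0.5)
sizes = []
for _ in range(10000):
    f, s = drive_and_relax(f)
    sizes.append(s)
```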
In Fig. 2 we show the distribution of events involving $`s`$ update steps, which is the size of the event, using parallel update. The events that involve all the blocks of the chain have been excluded from our analysis, since they do not belong to the same distribution, as expected. Before we start to compute the statistics, we wait until the last block has moved, in order to neglect transient effects. In (a) we show three different cases, with $`L=512`$, in which we have varied one parameter at a time. The solid line refers to $`a=4`$ and $`\delta f=0.1`$, the dashed line to $`a=4`$ and $`\delta f=0.01`$, and the short-dashed line to $`a=2.5`$ and $`\delta f=0.01`$. We see that the small events have their own distribution, as in the Oslo model for rice piles. A careful analysis has shown that these small events have an exponential distribution. As $`s`$ is increased the distribution becomes a power law, which has a cutoff related only to finite-size effects. We find that the slope of the power law is independent of $`a`$ and $`\delta f`$. However, the crossover point $`s^{}`$ from the exponential behavior to the power-law one depends on $`a`$, but not on $`\delta f`$. The frequency of the small events is inversely proportional to $`\delta f`$.
In Fig. 2(b) we show simulations keeping $`a`$ and $`\delta f`$ fixed, with $`a=4`$ and $`\delta f=0.1`$, and varying $`L`$. We see that on increasing $`L`$, the range of the power law increases. To fit the data to a single curve, we try the finite-size scaling ansatz $`P(s,L)=s^{-\tau }G(s/L^D)`$, where $`D`$ and $`\tau `$ are the basic exponents of the model, defining its universality class. $`D`$ and $`\tau `$ are called the dimension and the size distribution exponents, respectively. In our model we find that $`<s>\sim L`$, which results in $`\tau =2-1/D`$. The best fit for $`P(s,L)`$ is found for $`\tau \approx 1.54`$ and $`D\approx 2.20`$. The data collapse for these values of the exponents is shown in Fig. 2(c). Within the error bars, these exponents are the same as those of the Oslo rice pile, boundary driven interface depinning and the train model for earthquakes. Consequently, all these models, including the one we introduce here, are in the same universality class.
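A collapse of this kind is straightforward to check numerically; the sketch below is our illustration, with logarithmic binning as an assumed (conventional) choice. If the ansatz holds, the rescaled curves for different $`L`$ fall on the single function $`G`$:

```python
import numpy as np
import matplotlib.pyplot as plt

def collapse(event_sizes_by_L, tau=1.54, D=2.20, nbins=40):
    """Data collapse test of P(s, L) = s^(-tau) * G(s / L^D).
    `event_sizes_by_L` maps system size L to an array of event sizes s."""
    for L, sizes in sorted(event_sizes_by_L.items()):
        bins = np.logspace(0, np.log10(sizes.max()), nbins)
        hist, edges = np.histogram(sizes, bins=bins, density=True)
        s = np.sqrt(edges[:-1] * edges[1:])       # geometric bin centers
        plt.loglog(s / L**D, hist * s**tau, label=f"L = {L}")
    plt.xlabel("s / L^D")
    plt.ylabel("s^tau P(s, L)")
    plt.legend()
    plt.show()
```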
In Fig. 3(a) we show the frequency of the events $`P(T)`$ as a function of their time duration $`T`$, for different system sizes. The parameters are the same as in Fig. 2(b). Again, we see a power-law distribution, except for the smallest events. A data collapse for the function $`P(T,L)=T^{-y}f_T(T/L^\sigma )`$ is shown in Fig. 3(b), with $`y=1.84`$ and $`\sigma =1.40`$. These are the same exponents found in the Oslo rice pile. From conservation of probability it follows that $`\sigma (y-1)=D(\tau -1)`$, in good agreement with our results. The results shown in Fig. 3 are, again, for parallel update, since with sequential update the time duration and event size coincide, resulting in $`y=\tau `$ and $`\sigma =D`$.
Using the method introduced by Benettin et al., we have calculated the largest Liapunov exponent, $`\lambda _1`$, and the second largest Liapunov exponent, $`\lambda _2`$, of the system. If $`\lambda _1`$ is greater than zero, the system has a strong sensitivity to the initial conditions and, by definition, is called chaotic. To study the Liapunov exponents we have chosen sequential update. The reason for this is that the calculation of the Liapunov exponent assumes, by its very definition, that small changes happen in the system in the time unit, and this is more consistent with sequential rather than parallel update. In Fig. 4(a) and Fig. 4(b) we show $`\lambda _1`$ and $`\lambda _2`$ as a function of $`a`$ for $`\delta f=0.1`$, and as a function of $`\delta f`$ for $`a=4`$, respectively. In both cases $`L=64`$. We have found that for $`a\le 1`$ the system is in continuous motion, and it is therefore impossible to define earthquakes. Consequently, SOC is seen only when $`a>1`$. We see that the Liapunov exponents increase as $`a`$ or $`\delta f`$ increases, with the other parameters kept fixed. Fig. 4(c) shows the largest Liapunov exponent as a function of the system size for $`a=4`$ and $`\delta f=0.1`$ (solid), $`a=4`$ and $`\delta f=0.01`$ (dashed) and $`a=2.5`$ and $`\delta f=0.01`$ (short-dashed). We observe that $`\lambda _1`$ is approximately constant for small $`L`$ and decreases nearly as a power law when $`L`$ is greater than a certain value. The value where this bending occurs seems to be sensitive to both $`\delta f`$ and $`a`$. Since $`\lambda _1\to 0`$ in the thermodynamic limit ($`L\to \mathrm{\infty }`$), we conclude that chaos exists only at a microscopic level and is negligible on any macroscopic time or space scale of the system. We have studied the system using slower time scales, such as measuring time by the updates in the first block. Still we find that $`\lambda _1\to 0`$ when $`L\to \mathrm{\infty }`$. In the train model governed by ODE's and pulled with a constant finite velocity we had found that the largest Liapunov exponent tends to a constant as the system size increases. However, our new unpublished results show that the Liapunov exponents in that system start to decrease for system sizes greater than a given value, as happens in the system we introduce here.
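For reference, the Benettin procedure for $`\lambda _1`$ amounts to evolving a reference and a slightly perturbed trajectory together and renormalizing their separation after every step; $`\lambda _2`$ additionally requires a second perturbation vector kept orthonormal by Gram-Schmidt. A minimal sketch for $`\lambda _1`$, where the routine `step` (one sequential update of the automaton) is a placeholder:

```python
import numpy as np

def largest_liapunov(step, f0, n_steps=100000, d0=1e-8, seed=1):
    """Largest Liapunov exponent by the Benettin two-trajectory method.
    `step(f)` is assumed to advance the state by one update and return it."""
    rng = np.random.default_rng(seed)
    f = f0.copy()
    pert = rng.normal(size=f0.size)
    g = f0 + d0 * pert / np.linalg.norm(pert)   # perturbed copy at distance d0
    acc = 0.0
    for _ in range(n_steps):
        f, g = step(f), step(g)
        d = np.linalg.norm(g - f)
        acc += np.log(d / d0)                   # accumulate growth of separation
        g = f + (d0 / d) * (g - f)              # rescale back to distance d0
    return acc / n_steps                        # exponent per update step
```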
# Operation characteristics of piezoelectric quartz tuning forks in high magnetic fields at liquid Helium temperatures
## Abstract
Piezoelectric quartz tuning forks are investigated in view of their use as force sensors in dynamic mode scanning probe microscopy at temperatures down to 1.5 K and in magnetic fields up to 8 T. The mechanical properties of the forks are extracted from the frequency dependent admittance and simultaneous interferometric measurements. The performance of the forks in a cryogenic environment is investigated. Force-distance studies performed with these sensors at low temperatures are presented.
Piezoelectric quartz tuning forks were introduced in scanning probe microscopy by Günther, Fischer and Dransfeld in Ref. for use in scanning near-field acoustic microscopy, and later by Karrai and Grober in Ref. as the distance control for a scanning near-field optical microscope (SNOM). Several other implementations of tuning forks have been reported, e.g. in SNOMs, scanning force microscopes (SFMs), magnetic force microscopes and in the acoustic near-field microscope. Operation in a cryogenic environment was reported by Karrai and Grober in their pioneering work in Ref. . To our knowledge, operating characteristics at temperatures below 10 K have not been reported to date. In this paper we present results on piezoelectric tuning fork sensors in our low-temperature SFM, which operates in the sample space of a <sup>4</sup>He cryostat.
In our studies we utilized commercially available tuning forks (see inset of Fig. 1a) which are usually employed in watches with a standard frequency of $`2^{15}`$Hz. These forks are fabricated from wafers of $`\alpha `$-quartz with the optical axis oriented approximately normal to the wafer plane.
The tuning fork can either be mechanically driven by an additional piezo element or electrically excited through the tuning fork electrodes. Similar to Ref. , we drive the oscillation electrically by applying an AC voltage $`U`$ of typically 0.01 - 10 mV to the tuning fork contacts. For the investigation of the tuning fork behavior we measure the complex admittance of the fork with a two-channel lock-in amplifier. When employed as the sensor for dynamic force measurements, the tuning fork is part of a phase-locked loop described in Ref. .
Figure 1b shows a typical resonance in the admittance of a tuning fork measured at room temperature at a pressure of $`6\times 10^{-7}`$ mbar. The admittance exhibits an asymmetric resonance at 32768 Hz and a sharp minimum about 30 Hz above this resonance. The current through the fork consists of two parts: $`I_p`$ is the current created by the mechanical (harmonic) oscillation of the fork arms through the piezoelectric effect of the quartz, and $`I_0`$ is the capacitive current through the fork. The behavior of the admittance can therefore be modeled with the equivalent circuit shown in the inset of Fig. 1b.
The LRC series resonator, with a resonance frequency $`f_0=1/(2\pi \sqrt{LC})`$ around $`2^{15}`$Hz and a quality factor $`Q=\sqrt{L/(CR^2)}`$ which is typically of the order of $`10^4`$, allows the current $`I_p`$ to pass. Using a mechanical model one can relate $`L`$, $`R`$ and $`C`$ to the effective mass of one arm $`m`$, the damping constant $`\gamma `$, the spring constant $`k`$ and the driving force $`\alpha U`$ via $`L=m/\left(2\alpha ^2\right)`$, $`C=2\alpha ^2/k`$, $`R=m\gamma /\left(2\alpha ^2\right)`$. The capacitance $`C_0`$ is mainly determined by the geometrical arrangement of the contacts on the crystal, the dielectric properties of the quartz and by cable capacitances. The fit to the measured admittance in Fig. 1b (which could not be distinguished in the plot from the measured curve) yields $`C_0=1.2129`$pF, $`C=2.9`$fF, $`L=8.1\times 10^3`$H, $`R=27.1`$k$`\mathrm{\Omega }`$, $`f_0=32765.58`$Hz and $`Q=61730`$.
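The model admittance behind this fit is simply the series RLC branch in parallel with $`C_0`$; a short sketch, using the rounded fit values above, reproduces $`f_0`$, $`Q`$ and the position of the sharp minimum (the parallel resonance) to within a fraction of a per cent:

```python
import numpy as np

def admittance(f, R, L, C, C0):
    """Complex admittance of the equivalent circuit in the inset of Fig. 1b:
    a series RLC branch (piezoelectric current I_p) in parallel with the
    shunt capacitance C0 (capacitive current I_0)."""
    w = 2.0 * np.pi * f
    y_series = 1.0 / (R + 1j * w * L + 1.0 / (1j * w * C))
    return y_series + 1j * w * C0

# rounded fit values quoted in the text
R, L, C, C0 = 27.1e3, 8.1e3, 2.9e-15, 1.2129e-12
f0 = 1.0 / (2.0 * np.pi * np.sqrt(L * C))   # series resonance, ~32.8 kHz
Q = np.sqrt(L / (C * R**2))                 # ~6.2e4
f_anti = f0 * np.sqrt(1.0 + C / C0)         # parallel resonance: the sharp
                                            # minimum, a few tens of Hz above f0
```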
In addition to the electrical resonance, we measured the mechanical resonance amplitude $`x`$ of one of the tuning fork arms (see Fig. 1a) utilizing the interferometer setup usually used for optical cantilever deflection detection in a scanning force microscope. From a combination of both measurements (Figs. 1a and 1b) and using the relation $`I_p=4\pi f\alpha x`$ we determined the effective mass $`m=0.332`$ mg, the quality factor $`Q=61734`$, the spring constant $`k=14066.4`$ N/m and the piezoelectric coupling constant $`\alpha =4.26\mu `$C/m. The effective mass calculated from the density of quartz and the dimensions of a tuning fork arm according to Ref. turns out to be 0.36 mg, in good agreement with our measured value. A linear relation between the driving voltage and the oscillation amplitude was found in the interferometer measurement down to amplitudes of 1 nm, as well as in large-amplitude measurements performed under an optical microscope up to amplitudes of about 100 $`\mu `$m.
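These mechanical constants follow from the equivalent-circuit relations above together with $`\alpha `$ from $`I_p=4\pi f\alpha x`$. Note that, with the rounded values quoted in the text, the loop closes only to within roughly ten per cent, as the following sketch illustrates:

```python
# Mechanical constants from the electrical fit, using
# L = m/(2 alpha^2), C = 2 alpha^2/k, R = m gamma/(2 alpha^2).
alpha = 4.26e-6                      # C/m, from I_p = 4*pi*f*alpha*x
L_eq, C_eq, R_eq = 8.1e3, 2.9e-15, 27.1e3
m = 2.0 * alpha**2 * L_eq            # effective mass of one arm [kg], ~0.29 mg
k = 2.0 * alpha**2 / C_eq            # spring constant [N/m], ~1.3e4
gamma = 2.0 * alpha**2 * R_eq / m    # damping constant [1/s], = R/L ~ 3.3
```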
For use in our SFM we remove the tuning forks from their casing and glue a thin metallic wire (diameter 10$`\mu `$m - 50$`\mu `$m) to the end of one prong, in the direction of the oscillatory motion. The wire is then etched electrochemically to form a sharp tip. If the wire is electrically connected to one of the tuning fork contacts, its length is about 500$`\mu `$m. In cases where we connect the wire to a separate contact pad, the wire can be up to 3 mm long. The additional weight $`\mathrm{\Delta }m`$ fixed to the tuning fork arm is in the range between 1.5$`\mu `$g and 50$`\mu `$g. In order to obtain the most sensitive force gradient detection it is important to keep the relative mass increase as small as possible. After this modification the resonances of the tuning forks are always shifted to lower frequency, in most cases by less than 100 Hz, and typical quality factors $`Q=10000`$ are reached under ambient conditions.
In our SFM we operate the tuning forks in the gas flow of a variable-temperature <sup>4</sup>He cryostat. Alternatively, the sample space can be flooded with liquid He, and the microscope is then operated either in normal-fluid <sup>4</sup>He or in a mixed normal-superfluid phase (below 2.2 K). Table I shows tuning fork resonance characteristics obtained under these different conditions. Compared to operation in the gas, the resonance frequency of the fork is shifted by more than 500 Hz to lower frequencies in the normal-fluid liquid. At the same time the $`Q`$-value of the resonator decreases due to the significantly increased friction in the liquid. At temperatures below 2.2 K the quality factor increases again as a result of the formation of superfluid <sup>4</sup>He, which tends to suppress friction effects. The explanation of the reduction of the resonance frequency in the superfluid is an open question. These measurements demonstrate the robustness of the tuning fork properties when external conditions are severely changed. With a conventional cantilever for scanning force microscopy it is difficult to achieve $`Q`$-values of this order in liquid <sup>4</sup>He. However, even with the tuning forks, operation in liquid He is cumbersome because the resonance frequency fluctuates strongly. During scanning the tuning fork is typically operated at a constant frequency shift of 100 mHz, and deviations of more than a few mHz induced by the environment are intolerable.
The temperature coefficient of the resonance frequency below 5 K was determined to be $`260\mathrm{m}\mathrm{H}\mathrm{z}/\mathrm{K}`$ at a constant pressure of 10 mbar (see Fig. 2). The pressure coefficient was $`50\mathrm{m}\mathrm{H}\mathrm{z}/\mathrm{mbar}`$ at 5 K. This means that frequency shifts of the order of 10 mHz are produced by temperature instabilities of about 50 mK or pressure instabilities of about 0.2 mbar. In addition, we measured the dependence of the resonance frequency on an external magnetic field in the range between 0 and 8 T. The detected frequency shift was smaller than 100 mHz.
In order to demonstrate the power of piezoelectric tuning fork sensing of tip-sample interactions, we show in Fig. 3 a set of measurements of the frequency shift versus distance of the tip from the Au surface, measured at 2.5 K and zero tip-sample voltage. From the quality of topographical images taken with this tip just before these measurements, we deduce a tip radius of several hundred nanometers.
The oscillation amplitude was varied by a factor of 10 from one curve to the next, starting from a sub-nm value. A larger oscillation amplitude averages over a larger $`z`$-range in the repulsive as well as in the attractive region of the interaction, such that the frequency shift at a given distance decreases with increasing amplitude. Such a behavior was quantitatively described by Giessibl for an attractive van der Waals potential of the form $`V\propto z^{-n}`$, where $`n`$ is a positive integer.
In the following we discuss several aspects of the utilization of tuning forks in SFMs at cryogenic temperatures. The typical values of the spring constants of tuning forks are much higher than those of conventional SFM cantilevers. This has several implications for their use in a dynamic mode SFM. First, a given force gradient leads to a smaller shift of the resonance frequency than in conventional cantilevers, since typically $`\delta f/f_0\propto k^{-1}`$, no matter whether small or large oscillation amplitudes are used. Care has to be taken in the design of the control electronics to make sure that frequency shifts of at least 10 mHz can be measured to compensate for this disadvantage. Second, there is no danger of the tip snapping into contact with the sample, since the condition $`k>\partial F/\partial z`$ is met for all tip-sample spacings. This makes the tuning fork an ideal tool to investigate tip-sample interactions as a function of distance. And last but not least, the high spring constant makes tuning forks ideal for specific applications, e.g. for nanolithography in the non-contact mode or as carriers for all kinds of scanning nano-sensors which may be harder to implement on conventional SFM cantilevers. These issues will be discussed in future publications.
The $`Q`$-values obtained with our tuning forks at pressures around 1 mbar are generally of the same order as those of the best cantilevers operated under UHV conditions. The robustness of $`Q`$ against pressure changes is also significantly higher than that of conventional cantilevers. Compared to piezoresistive cantilevers, which tend to heat low-temperature systems with powers in the 1 mW range, tuning forks do not produce any significant amount of heating power and are therefore ideal for future applications in <sup>3</sup>He systems or dilution refrigerators.
In conclusion, we have demonstrated the operation of piezoelectric tuning forks as sensors for dynamic mode scanning force microscopy at cryogenic temperatures and discussed their performance. The robustness of this sensor allows very high quality factors to be achieved even under the otherwise problematic conditions of non-UHV environments. The force gradient detection method is well suited for force-distance studies.
Financial support by ETH Zürich is gratefully acknowledged.
# Specific-heat evidence for strong electron correlations in the thermoelectric material (Na,Ca)Co2O4
## Abstract
The specific heat of (Na,Ca)Co<sub>2</sub>O<sub>4</sub> is measured at low temperatures to determine the magnitude of the electronic specific-heat coefficient $`\gamma `$, in an attempt to gain insight into the origin of the unusually large thermoelectric power of this compound. It is found that $`\gamma `$ is as large as ≈48 mJ/molK<sup>2</sup>, which is an order of magnitude larger than the $`\gamma `$ of simple metals. This indicates that (Na,Ca)Co<sub>2</sub>O<sub>4</sub> is a strongly-correlated electron system, where the strong correlation probably comes from the low dimensionality and the frustrated spin structure. We discuss how the large thermopower and its dependence on Ca doping can be understood in terms of the strong electron correlations.
Recently, the coexistence of a large thermopower (≈100 $`\mu `$V/K at 300 K) and a low resistivity was found in the transition-metal oxide NaCo<sub>2</sub>O<sub>4</sub>, which made this compound an attractive candidate for thermoelectric (TE) applications. Normally, large thermopower is associated with materials with low carrier densities, and the thermoelectric properties are optimized for systems with a typical carrier concentration of 10<sup>19</sup> cm<sup>-3</sup>; on the other hand, NaCo<sub>2</sub>O<sub>4</sub> has a two-orders-of-magnitude larger carrier density (≈10<sup>21</sup> cm<sup>-3</sup>) and yet shows a thermopower comparable to that of the usual low-carrier-density TE materials. The origin of the large thermopower in NaCo<sub>2</sub>O<sub>4</sub> is yet to be understood.
In NaCo<sub>2</sub>O<sub>4</sub>, the Co ion has a mixed valence between 3+ and 4+. Since NaCo<sub>2</sub>O<sub>4</sub> is a layered system with a triangular lattice and Co<sup>4+</sup> has spins, it is expected that the interplay between charges and spins plays a major role in producing the peculiar electronic properties of this compound, as in the case of the high-$`T_c`$ cuprates. In those systems where Coulomb interactions or spin fluctuations are important, it is often found that the electrons become strongly correlated, and thus the simple band picture is not well applicable. In fact, magnetotransport studies of NaCo<sub>2</sub>O<sub>4</sub> found that the Hall coefficient has the opposite sign to the thermopower and is strongly temperature dependent, which suggests the presence of a strong correlation in this system. Therefore, to elucidate the origin of the large thermopower in NaCo<sub>2</sub>O<sub>4</sub>, it would be illuminating to determine the strength of the electron correlations in NaCo<sub>2</sub>O<sub>4</sub> by measuring the electronic specific heat.
In this paper, we report our specific-heat measurement of NaCo<sub>2</sub>O<sub>4</sub> at low temperatures, which determines the electronic specific-heat coefficient $`\gamma `$ of this system for the first time. Since it has been reported that partial replacement of Na with Ca systematically increases the thermopower, we measured a series of (Na<sub>1-x</sub>Ca<sub>x</sub>)Co<sub>2</sub>O<sub>4</sub> samples and investigated the change of $`\gamma `$ with Ca substitution. Our results show that this system is indeed a strongly-correlated system, with $`\gamma `$≈48 mJ/molK<sup>2</sup>. It is also found that the Ca substitution does not change the $`\gamma `$ value appreciably, while the Ca substitution reduces the carrier concentration and increases the thermopower. Based on these observations, and by invoking a simple Drude picture, we discuss how the large thermopower of NaCo<sub>2</sub>O<sub>4</sub> can be understood as a result of a large electronic specific heat.
The samples used in this study are polycrystals prepared by a conventional solid-state reaction. Starting powders of Na<sub>2</sub>CO<sub>3</sub>, CaCO<sub>3</sub>, and Co<sub>3</sub>O<sub>4</sub> are mixed and calcined first at 860 °C for 12 hours, and then at 800 °C for 6 hours. Since it is known that Na tends to evaporate during the calcination, which produces impurity phases in samples with (nominally) stoichiometric composition, we used samples with the composition Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub>. The measurements were done on samples with three different Ca contents, $`x`$=0.0, 0.05, and 0.10. The specific heat is measured using a standard quasi-adiabatic method with a mechanical heat switch. The mass of the samples used for the measurements is typically 1000 mg, and the heat capacity of the samples is always more than two orders of magnitude larger than the addenda heat capacity.
Figure 1 shows the specific heat $`C`$ of Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> over one decade of temperature, from 2 K to 26 K. One may immediately notice two features: (i) the magnitude of $`C/T`$, about 80 mJ/molK<sup>2</sup> at 10 K, is large compared to simple metals (for example, pure Cu has $`C/T`$ ≈ 6 mJ/molK<sup>2</sup> at 10 K); (ii) an unusual increase is observed at low temperatures for all $`x`$ values.
As a first approximation, let us neglect the low-temperature increase in $`C`$ for the moment and analyze the data for $`T>`$7 K with the Debye formula. Since the temperature range to be analyzed is not quite low enough, we should include higher-order terms for the phonon specific heat and use the formula
$$C/T=\gamma +\beta T^2+\beta _5T^4+\beta _7T^6.$$
(1)
Figure 2 shows the result of the analysis which neglects $`\beta _5`$ and $`\beta _7`$ (whereby the fit becomes a straight line in the plot of $`C/T`$ vs $`T^2`$), showing that the simple Debye formula without the higher-order lattice terms describes the data in the temperature region 7 - 12 K moderately well. It is clear from Fig. 2 that $`\gamma `$ does not change appreciably with $`x`$. The values of $`\gamma `$ and $`\beta `$ obtained from the straight-line fits in Fig. 2 are listed in Table I. The result of the analysis which uses the full formula of Eq. (1) is shown in Fig. 3. Clearly, Eq. (1) describes the data above 7 K very well, and we obtained good fits in the temperature range 7 - 26 K for all three data sets. The values of $`\gamma `$, $`\beta `$, $`\beta _5`$, and $`\beta _7`$ obtained from the fits in Fig. 3 are also listed in Table I. The electronic specific-heat coefficient $`\gamma `$ obtained from this analysis is relatively large, 52 - 54 mJ/molK<sup>2</sup>, compared to simple metals where $`\gamma `$ is usually a few mJ/molK<sup>2</sup>. Interestingly, $`\gamma `$ does not change appreciably with $`x`$ within our resolution.
Perhaps as a better approximation, we next analyze our data using the formula which includes the Schottky term:
$$C/T=\gamma +\beta T^2+\beta _5T^4+\beta _7T^6+c_0(T_0/T)^2\frac{\mathrm{exp}(T_0/T)}{(\mathrm{exp}(T_0/T)+1)^2},$$
(2)
where $`T_0`$ is the characteristic temperature of the Schottky anomaly. Figure 4 shows the result of the fit of the data to Eq. (2). Apparently, the data in the whole measured temperature range (2 - 26 K) are well fitted by Eq. (2). The fitting parameters are listed in Table II. In both Tables I and II, the Debye temperatures calculated from $`\beta `$ are also listed. Although the $`\gamma `$ values obtained with Eq. (2) tend to be smaller than the result of the simpler analysis using Eq. (1), the changes are only about 10%. The values of $`\gamma `$ from this analysis are about 48 mJ/molK<sup>2</sup> and, again, do not seem to be systematically correlated with $`x`$. We note that the low-temperature limit used for fitting the Schottky term is 2 K; extending the measurement to lower temperature is desirable for a better determination of both the Schottky anomaly and the $`\gamma `$ value. It is possible that $`\gamma `$ becomes smaller than ≈48 mJ/molK<sup>2</sup> when the temperature range is extended.
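Such a fit is a standard nonlinear least-squares problem; a minimal sketch (the data arrays and the starting guess `p0` are illustrative placeholders, not numbers from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def c_over_t(T, gamma, beta, beta5, beta7, c0, T0):
    """C/T of Eq. (2).  The Schottky factor exp(T0/T)/(exp(T0/T)+1)^2 is
    rewritten as 1/(4 cosh^2(T0/2T)) for numerical stability at low T."""
    schottky = c0 * (T0 / T)**2 / (4.0 * np.cosh(T0 / (2.0 * T))**2)
    return gamma + beta * T**2 + beta5 * T**4 + beta7 * T**6 + schottky

# T in K, C/T in mJ/(mol K^2); data_T and data_CT stand for the measured arrays:
# popt, pcov = curve_fit(c_over_t, data_T, data_CT,
#                        p0=(48.0, 0.1, 1e-3, -1e-6, 100.0, 1.0))
```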
The above results indicate that the magnitude of the enhancement of the density of states, which is reflected in the magnitude of $`\gamma `$, does not show a clear change with Ca substitution. This is not a trivial result, because Ca substitution is expected to reduce the carrier density $`n`$. As mentioned in the introduction, the origin of the strong correlation in this system is probably the frustration of the antiferromagnetically interacting spins on the two-dimensional triangular lattice. The magnetic susceptibility of NaCo<sub>2</sub>O<sub>4</sub> shows a Curie-Weiss-like temperature dependence, which suggests that spin fluctuations are actually appreciable in the magnetic properties. Also, a negative magnetoresistance has been observed in the temperature region where the resistivity does not show any localization behavior, suggesting that scattering from spin fluctuations plays a major role in the charge transport. If the spin fluctuations are indeed the source of the strong correlation, one would expect the electron correlation to become stronger as the carrier concentration $`n`$ is decreased, because mobile carriers tend to destroy spin correlations. Since we expect Ca doping to reduce the free-electron density of states (DOS) through the reduction in $`n`$, the effects of the increasing correlation and the decreasing free-electron DOS upon Ca doping would tend to cancel each other in determining the electronic specific heat. This might explain the apparent insensitivity of the observed $`\gamma `$ to the Ca doping.
Now let us briefly discuss the implications of our specific-heat result for the large thermopower, employing the Drude picture for the thermoelectric transport. Although the simple Drude picture cannot explain all aspects of the complicated charge transport in NaCo<sub>2</sub>O<sub>4</sub>, it may help us to capture the basic physics of the enhancement of the thermopower. The Seebeck coefficient $`S`$ of NaCo<sub>2</sub>O<sub>4</sub> increases monotonically with $`T`$, which suggests that the charge transport is Fermi-liquid-like and that the Drude model can be used as a first approximation (as opposed to many other strongly-correlated systems, which show non-Fermi-liquid behavior). In the simple Drude picture, the Seebeck coefficient $`S`$ is proportional to $`c_e/n`$, where $`c_e`$ is the electronic specific heat. Thus, when the strong correlation enhances $`c_e`$, the Drude picture predicts a large thermopower. More interestingly, the increase in $`S`$ of Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> with the Ca concentration $`x`$ is in qualitative agreement with the Drude picture, because our result shows $`\gamma `$ (and thus $`c_e`$) to be almost unchanged upon Ca doping while $`n`$ decreases with increasing $`x`$. This suggests that the simple Drude picture captures the basic physics of the thermoelectric transport in NaCo<sub>2</sub>O<sub>4</sub> and thus that the large thermopower is actually a result of the strong electron correlation. We note that an enhancement of the thermopower due to a large effective mass has recently been discussed theoretically for strongly-correlated systems.
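As a rough consistency check of the Drude estimate $`S`$ ∝ $`c_e/n`$ (the electronic heat content per carrier divided by the carrier charge): taking $`\gamma `$ from this work and an assumed carrier count per formula unit, which is not specified in the text, one lands at O(100 $`\mu `$V/K) at room temperature, the same order as the thermopower quoted in the introduction:

```python
# Order-of-magnitude estimate S ~ c_e / (n e); all inputs per mole of
# formula units.  carriers_per_fu is a placeholder assumption.
gamma = 48e-3            # J/(mol K^2), this work
T = 300.0                # K
N_A, e = 6.022e23, 1.602e-19
carriers_per_fu = 0.5    # assumed, for illustration only
c_e = gamma * T                          # ~14.4 J/(mol K)
S = c_e / (carriers_per_fu * N_A * e)    # ~3e-4 V/K, i.e. O(100 uV/K)
```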
It should be mentioned that the rather large change in $`S`$ with Ca doping ($`S`$ increases by about 20% upon 0.1 of Ca substitution) cannot be quantitatively explained by the simple Drude picture; apparently, a more sophisticated model should be employed for a full understanding of the large thermopower. A semiclassical Boltzmann approach gives a formula for $`S`$ which includes the energy dependence of the scattering time $`\tau `$, $`\frac{d\tau }{d\epsilon }`$. Since the temperature dependence of $`S`$ changes with Ca doping in Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub>, one can expect that $`\frac{d\tau }{d\epsilon }`$ actually changes with Ca doping, which introduces an additional factor in determining the Ca-doping dependence of $`S`$.
In summary, we found that the electronic specific-heat coefficient $`\gamma `$ of NaCo<sub>2</sub>O<sub>4</sub> is about 48 mJ/molK<sup>2</sup>, which indicates that NaCo<sub>2</sub>O<sub>4</sub> is a strongly-correlated system. No apparent correlation was found between $`\gamma `$ and the $`x`$ value in Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub>, in which increasing $`x`$ reduces the carrier concentration $`n`$. The increase in the Seebeck coefficient $`S`$ with increasing $`x`$ and the apparent insensitivity of $`\gamma `$ to the change in $`x`$ together suggest that the simple Drude picture, which gives $`S`$ ∝ $`c_e/n`$, captures the basic physics of the enhancement of the thermopower, although quantitatively the simple Drude picture is insufficient. Therefore, it may be concluded that the large thermopower of NaCo<sub>2</sub>O<sub>4</sub> is a result of the strong electron correlation.
# The Linear Polarization of Sagittarius A* II. VLA and BIMA Polarimetry at 22, 43 and 86 GHz
## 1 Introduction
The compact non-thermal radio source Sgr A\* is recognized as one of the most convincing massive black hole candidates (Maoz 1998). Recent results from stellar proper motion studies indicate that there is a dark mass of $`2.6\times 10^6M_{\odot }`$ enclosed within 0.01 pc (Genzel et al. 1997, Ghez et al. 1998). Very long baseline interferometry studies at millimeter wavelengths have shown that the intrinsic radio source coincident with the dark mass has a size of less than 1 AU and a brightness temperature greater than $`10^9`$ K (Rogers et al. 1994, Bower & Backer 1998, Lo et al. 1998, Krichbaum et al. 1998). Together these points are compelling evidence that Sgr A\* is a cyclo-synchrotron emitting region surrounding a massive black hole. Nevertheless, specific details of the excitation of high energy electrons, their distribution and the accretion of infalling matter onto Sgr A\* are unknown (e.g., Falcke, Mannheim & Biermann 1993, Melia 1994, Narayan et al. 1998, Mahadevan 1998).
We have recently demonstrated that Sgr A\* is not linearly polarized at a level of 0.2% at 4.8 and 8.4 GHz (Bower et al. 1999, hereafter Paper I). This spectro-polarimetric result excludes rotation measures up to $`10^7\mathrm{rad}\mathrm{m}^{-2}`$. Interstellar depolarization in the scattering region (Frail et al. 1994, Yusef-Zadeh et al. 1994, Lazio & Cordes 1998) is unlikely but not completely excluded by these observations. Interstellar depolarization can occur if the scale of turbulent fluctuations in the scattering medium is on the order of $`10^{-4}\mathrm{pc}`$. Although this scale is probably too large, it is not fully excluded by observations. The millimeter polarimetry that we describe in this paper directly addresses the significance of interstellar depolarization on these scales.
Our recent detection of circular polarization in Sgr A\* gives particular relevance to the question of the level of intrinsic polarization (Bower, Falcke & Backer 1999). Typically, AGN display integrated circular polarization that is an order of magnitude or more less than the integrated linear polarization (Weiler & de Pater 1983). This is not only the consequence of beam dilution. In the case of the VLBI detection of circular polarization in a compact knot in 3C 279, the circular polarization is less than the co-spatial linear polarization by a factor of $`10`$ (Wardle et al. 1998). That is, there are no known regions in jets with high circular polarization and low linear polarization. Therefore, the presence of a large circular to linear polarization ratio in Sgr A\* is an unsolved and intriguing radiative transfer problem. We discuss later some of the models that may account for this ratio.
In §2 we present VLA<sup>1</sup><sup>1</sup>1The VLA is an instrument of the National Radio Astronomy Observatory. The NRAO is a facility of the National Science Foundation, operated under cooperative agreement with Associated Universities, Inc. and BIMA<sup>2</sup><sup>2</sup>2The BIMA array is operated by the Berkeley-Illinois-Maryland Association under funding from the National Science Foundation array polarimetry. There is no detected polarization for Sgr A\* at 22, 43 and 86 GHz. In §3 we demonstrate that interstellar depolarization at these frequencies is extremely unlikely. We consider the consequences of an intrinsically unpolarized Sgr A\* in §4.
## 2 Observations and Data Reduction
### 2.1 VLA Observations at 22 GHz and 43 GHz
We observed Sgr A\* on 3 February 1997 at 22 GHz and 43 GHz using the VLA. The array was in the BnA configuration. Data were obtained in two 50 MHz wide intermediate frequency (IF) bands at 22.435 and 22.485 GHz, and 43.315 and 43.365 GHz, respectively. The 27-element array was divided into two sub-arrays that observed simultaneously at 22 GHz and 43 GHz. The flux density scale was set by assuming standard flux densities for 3C 286. Hourly observations of B1730-130 were used to measure antenna-based gain amplitude fluctuations and to determine the antenna-based polarization leakage terms, following standard practices. Absolute position angle calibration was not possible due to errors in the cross-correlation data for 3C 286. All measured position angles were rotated so that the position angle for B1730-130 was set to 0.
Sgr A\* and the compact source B1741-312 were each observed twice an hour for 7 hours. The compact source B1921-293 was observed at 43 GHz once an hour for 4 hours. Total and polarized intensities in each IF band were measured as the best-fit Gaussian in the $`I`$ and $`P`$ images (Table 1). The quoted errors are rms errors from the fit. We also report the off-source maximum value in the polarized image, $`P_{lim}`$ in flux units and $`p_{lim}`$ as a fraction of the total intensity. A real detection must be more than twice this value to be believable.
The measured polarizations for Sgr A\* are many times the rms image noise, which is on the order of 0.2 mJy. However, there is a significant contribution from multiplicative errors. These errors principally derive from variations in the polarization leakage terms (Holdaway, Carilli & Owen 1992). The effect of the $`D`$-term errors is to scatter a fraction of the total intensity into the polarized intensity map. Typically, at centimeter wavelengths the VLA can achieve a fractional error of $`0.1\%`$ (e.g., Boweret al. 1999a). The smaller number of antennas and poorer performance of the array at 22 GHz and 43 GHz will lead to larger fractional polarization errors.
Comparing results between IF bands is not a reliable method for determining fractional errors. The dominant sources of $`D`$-term errors are common to both antennas. Hence, we see variations between IFs for bright sources that are fully consistent with the thermal noise.
Two factors indicate that the measured polarization for Sgr A\* is an upper limit rather than a detection. We show in Figure 1 a 43 GHz image of Sgr A\* with polarization vectors overlaid. First, there is large variation in the polarization position angles over the source. This is also true in the 22 GHz images. Second, the sidelobes and noise peaks are polarized at a level comparable to the central source. Off-source peaks in the $`P`$ maps are as large as the measured polarization. This implies fractional polarization errors of 0.2% and 0.4% at 22 GHz and 43 GHz, respectively.
### 2.2 BIMA Observations at 86 and 90 GHz
Polarimetric observations of Sgr A\* were obtained with the BIMA array (Welch et al. 1996) on three dates, 10 March 1998, 14 March 1998 and 19 December 1998. The array was in the A configuration producing projected baselines for Sgr A\* in the range 20 to 520 $`k\lambda `$. Continuum bandwidths were 800 MHz in lower and upper IF sidebands centered at 86.582 GHz and 90.028 GHz. Standard antenna amplitude gains were applied.
Each receiver is sensitive to linear polarization. Quarter-wave plates were installed on all antennas such that the receivers can be switched between linear, right circular (RCP) or left circular (LCP) polarization. One antenna observed linear polarization continuously, while the other antennas were switched between RCP and LCP using a Walsh-function pattern to optimize the visibility coverage in parallel- and cross-hand correlations (Wright 1995, Wright 1996). The data were self-calibrated for both RCP and LCP with respect to the antenna observing linear polarization. Because RCP and LCP are detected with the same receiver in each antenna, there is no phase offset between the parallel-hand visibilities. Hence, the absolute position angle is correctly determined without any further calibration.
For all three observations instrumental leakage was calibrated from observations of strong unresolved sources. The instrumental leakage is stable to about 0.4% rms. This implies that the minimum error in the polarization maps will be 0.4%. If variations in the $`D`$-terms are correlated, the error could be over 1% (Holdaway, Carilli & Owen 1992). For the March observations, we used $`D`$-term solutions from spectral-line observations of the Orion SiO maser on 28 January 1998 and 25 February 1998 (Rao et al. 1998). The average difference per antenna between the Orion maser $`D`$-term solutions is 1.3%, implying a minimum error in the polarization of $`0.4\%`$ if the variations between antennas are uncorrelated. The average difference between the two Orion maser and calibrator $`D`$-term solutions is similar. This implies that we are not strongly affected by variations in the $`D`$-term solutions over the bandpass. Because solutions were found for a spectral line, they were available only at a single IF frequency. For the 19 December 1998 observations, we used solutions found for 3C 273 observed on 21 November 1998 in the C array. These data showed better agreement between the two IF bands than the solutions found from interleaved observations of B1730-130. A similar level of variation in the $`D`$-term solutions was found for these observations.
We summarize the total and polarized intensity in Table 2. The reported errors are estimated from fits to the corrected parallel- and cross-hand visibility data. As is the case with the VLA data, these are underestimates because they do not take into account amplitude calibration and polarization leakage term errors. We estimate the total error by the level of off-source peaks in the polarization maps. These are on the order of 20 mJy, or 1%, for Sgr A\*. This is consistent with the results of Rao et al., in which the linear polarization limit is 1.5%. Therefore, we consider the measured polarization for Sgr A\* to be an upper limit of 1%.
In Figure 2 we summarize all upper limits to the polarization of Sgr A\* from Paper I and from this paper.
## 3 Interstellar Depolarization
A very large rotation measure (RM) will rotate the position angle of linear polarization through the observing band. However, bandwidth depolarization is unlikely to occur in these observations. The maximum rotation measure detectable in the continuum band of these experiments is $`1.3\times 10^6\mathrm{rad}\mathrm{m}^{-2}`$, $`8.4\times 10^6\mathrm{rad}\mathrm{m}^{-2}`$ and $`4.8\times 10^6\mathrm{rad}\mathrm{m}^{-2}`$ at 22 GHz, 43 GHz and 86 GHz, respectively. The spectro-polarimetric observations in Paper I would have detected a signal at these RMs if they were present.
We argued in Paper I that the scattering medium will depolarize the source if variations in the RM lead to a phase change of $`\pi `$ radians. The required RM variations at 22 GHz, 43 GHz and 86 GHz are $`1.8\times 10^4\mathrm{rad}\mathrm{m}^{-2}`$, $`6.4\times 10^4\mathrm{rad}\mathrm{m}^{-2}`$ and $`2.7\times 10^5\mathrm{rad}\mathrm{m}^{-2}`$. The known variations in the RM in the Galactic Center region (Yusef-Zadeh, Wardle & Parastaran 1997) are not sufficient to depolarize Sgr A\* at 4.8 GHz and 8.4 GHz (Paper I). Therefore, we must only consider whether the depolarization conditions could arise in the scattering medium around Sgr A\*.
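The required RM variations quoted above follow directly from the $`\pi `$-radian criterion, $`\mathrm{\Delta }\mathrm{RM}=\pi /\lambda ^2`$. A quick numerical check (the band-center frequencies are taken from the observing setups of §2):

```python
import numpy as np

c = 2.998e8                       # speed of light, m/s
for nu_GHz in (22.46, 43.34, 86.58):
    lam = c / (nu_GHz * 1e9)      # observing wavelength in m
    print(f"{nu_GHz:6.2f} GHz : Delta RM = {np.pi / lam**2:.2e} rad/m^2")
# -> 1.8e4, 6.6e4, 2.7e5 rad/m^2, matching the quoted values to within rounding
```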
The angular broadening of images of masers near the Galactic Center and Sgr A\* is most likely associated with the ionized skins of molecular clouds. The ionization mechanism is either photo-ionization by hot stars (Yusef-Zadeh et al. 1994) or contact with diffuse, hot gas (Lazio & Cordes 1998). There are two relevant length scales for the structure of these scattering screens: the thickness of the ionized skins, $`l_{skin}\sim 10^{-4}\mathrm{pc}`$, which was derived by Yusef-Zadeh et al. (1994); and the outer scale of the turbulent spectrum of electron density fluctuations within these skins, $`l_0\sim 10^{-7}\mathrm{pc}`$, which was derived by Lazio & Cordes (1998). The small outer scale in relation to the skin depth suggests that these layers may contain many independent turbulent cells. The small angular scale of these cells, $`l_0/8`$ kpc $`\sim 0.02`$ mas, means that they can depolarize a linearly polarized signal owing to their random Faraday rotations. The rms RM along independent lines of sight through a single skin will depend on $`l_0\sqrt{l_{skin}/l_0}`$. This rms will be about some mean if the magnetic field is uniform in the skin or about zero if the field is random. If our line of sight traverses $`N`$ skins, then the equivalent path length for the rms RM estimation is $`L=\sqrt{Nl_{skin}l_0}`$. This path length is less than $`10^{-5}\mathrm{pc}`$ for $`N<10`$ scattering screens.
The constancy of maser image anisotropy over $`\sim 10\mathrm{arcsec}`$ angular scales suggests that the average magnetic field perpendicular to the line of sight embedded in these skins is uniform over physical scales of $`\sim 1`$ pc (Yusef-Zadeh et al. 1999). This scale is a significant fraction of the size of molecular clouds in the Galactic Center region. Hence, the variations on greater scales may be the result of scattering by physically distinct regions. This uniformity then requires the rms RM to be about some mean RM (with contributions from density alone) rather than about zero (with contributions from density and field).
We show now that for $`L`$ as large as $`10^{-4}\mathrm{pc}`$, depolarization in the scattering medium and energy equipartition between the magnetic field and particle energy require that either or both the electron density and magnetic field strength exceed the peak values measured in the Galactic Center region. These two conditions require
$$n_e=7.3\times 10^4\mathrm{cm}^{-3}\mathrm{RM}_4^{2/3}L_4^{-2/3}T_4^{-1/3}$$
(1)
and
$$B=1.6\mathrm{mG}\mathrm{RM}_4^{1/3}L_4^{-1/3}T_4^{1/3},$$
(2)
where $`\mathrm{RM}_4`$ is the rotation measure in units of $`10^4\mathrm{rad}\mathrm{m}^{-2}`$, $`L_4`$ is the length scale in units of $`10^{-4}\mathrm{pc}`$ and $`T_4`$ is the electron temperature in units of $`10^4`$ K. Mehringer et al. (1993) showed that ionized densities in H II regions are significantly less than $`10^5\mathrm{cm}^{-3}`$ on arcsecond scales. Magnetic field strengths measured with OH masers in dense molecular regions are on the order of a few milliGauss (Yusef-Zadeh et al. 1999).
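Equations (1) and (2) follow from combining the rotation-measure relation $`\mathrm{RM}=0.81n_eBL`$ (with $`n_e`$ in cm<sup>-3</sup>, $`B`$ in μG and $`L`$ in pc) with a pressure balance of the form $`B^2/8\pi =n_ek_BT`$; that particular equipartition form is our reading of the quoted coefficients, so the sketch below is a consistency check rather than a derivation.

```python
import numpy as np

k_B = 1.38e-16                      # Boltzmann constant, erg/K

def depol_requirements(RM, L_pc, T=1e4):
    """Solve RM = 0.81 n B_uG L and B^2/(8 pi) = n k T for n_e [cm^-3], B [G]."""
    # eliminating B gives RM = 0.81e6 * sqrt(8 pi k T) * n**1.5 * L
    n = (RM / (0.81e6 * np.sqrt(8 * np.pi * k_B * T) * L_pc)) ** (2.0 / 3.0)
    B = np.sqrt(8 * np.pi * n * k_B * T)
    return n, B

for nu, RM in ((22, 1.8e4), (43, 6.4e4), (86, 2.7e5)):
    n, B = depol_requirements(RM, 1e-4)
    print(f"{nu} GHz: n_e ~ {n:.1e} cm^-3, B ~ {B*1e3:.1f} mG")
# 22 GHz -> ~1e5 cm^-3 and ~2 mG; 86 GHz -> ~7e5 cm^-3 and ~5 mG, as in the text
```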
At 22 GHz and assuming $`T_4=1`$, we find $`B\approx 2\mathrm{mG}`$ and $`n_e\approx 10^5\mathrm{cm}^{-3}`$, which exceeds the observed upper limit on electron density. At 86 GHz, $`B\approx 5\mathrm{mG}`$ and $`n_e\approx 7\times 10^5\mathrm{cm}^{-3}`$. For the case of $`L\sim 10^{-7}\mathrm{pc}`$, depolarization of the 22 GHz radiation requires $`B\approx 15\mathrm{mG}`$ and $`n_e\approx 10^7\mathrm{cm}^{-3}`$. The case is much worse at 86 GHz. Increasing the electron temperature does not allow depolarization: it leads to lower electron densities but higher magnetic fields. Therefore, we consider it extremely unlikely that Sgr A\* is depolarized by the interstellar medium.
These electron densities correspond more closely to what we expect from a sub-parsec accretion flow onto Sgr A\* (Melia 1994, Melia & Coker 1999, Quataert, Narayan & Reid 1999). As Melia and Coker show, densities in excess of $`10^5\mathrm{cm}^{-3}`$ appear at radii less than $`0.01`$ pc. We demonstrated in Paper I that this can easily lead to very high RMs and that depolarization will occur if the accretion region is sufficiently turbulent. However, the detailed character of the accretion region is not well known. The geometry, volume filling factor and degree of turbulence are poorly constrained.
## 4 An Intrinsically Weakly Polarized Sgr A\*
The degree of linear polarization in AGN typically rises with frequency. Aller, Aller & Hughes (1992) showed that in their flux-limited sample $`40\%`$ of AGN have polarization fractions less than 1% at 4.8 GHz while $`10\%`$ of the same sample have polarization fractions less than 1% at 14.5 GHz. All sources in the sample have detected polarization fractions greater than 0.2% at 14.5 GHz. This includes 3C 84 which has an average polarization fraction at 4.8 GHz of $`0.03\pm 0.01\%`$. A polarization increase with frequency can be explained by the high RMs present in some radio cores (Taylor 1998), the increased prominence of shocked regions and the decreased synchrotron opacity (Stevens, Robson & Holland 1996). We note that a flux-limited sample of this kind is biased towards powerful, beamed sources which may have different polarization properties than weaker unbeamed sources. The polarization properties of these weaker sources are not well-studied due to their low flux densities. There is no high-frequency polarization study of a volume-limited sample for weaker sources. However, Rudnick, Jones & Fiedler (1986) did observe a sample of “weak” cores with flat spectra. They found that even at 15 GHz many of these sources were unpolarized at a level of $`1\%`$.
The absence of linear polarization in Sgr A\* from 4.8 GHz to 86 GHz can be explained with the presence of thermal electrons or with significant magnetic field cancellation. The thermal electrons may be outside the emission region (in the accretion flow, as discussed above, but not in the scattering medium) or may be coincident with the emission region. This latter case is appealing because it may be able to account for the presence of circular polarization through the conversion of linear polarization to circular polarization (Bower, Falcke & Backer 1999, Pacholczyk 1977, Jones & O’Dell 1977).
Magnetic field cancellation could occur as the result of a tangled field or a circularly symmetric field orientation. The former is typically assumed to depolarize radio jets. This requires for Sgr A\* that the emission region consist of $`\left(70/0.2\right)^2\sim 10^5`$ independent B-field cells, where 70% is the maximum fractional polarization of optically thin synchrotron radiation and 0.2% is the observed upper limit. The latter case may arise if the emission originates in a quasi-spherical inflow (e.g., an ADAF model). Magnetic field cancellation is an unlikely depolarization mechanism if the circular polarization is intrinsic to the source (e.g., Wilson & Weiler 1997). However, if the circular polarization arises from interstellar propagation effects (Macquart & Melrose 1999), then magnetic field cancellation is a possible explanation. In this case, the absence of linear polarization argues against a strong shock origin for the total flux variability in Sgr A\* (Wright & Backer 1993, Falcke 1999). Total flux variability in AGN comes about from the presence of shocks which order the magnetic field and accelerate particles in the relativistic jet leading to linearly polarized emission (Marscher & Gear 1985).
We have shown here that Sgr A\* is not linearly polarized to the current limits of instrumental sensitivity at 22, 43 and 86 GHz. The possibility is remote that Sgr A\* is externally depolarized. However, the linear and circular polarizations are unique to Sgr A\*. Explaining that relationship may reveal significant details for the emission region and environment of Sgr A\*.
This work was partially supported by NSF Grant AST-9613998 to the University of California, Berkeley. HF is supported by DFG grant Fa 358/1-1&2.
# Spectral decomposition for the Dirac system associated to the DSII equation
## 1 Introduction
Gravity-capillary surface wave packets are described by the Davey–Stewartson (DS) system, which is integrable by the inverse scattering transform in the limit of shallow water. In this paper, we study the focusing DSII equation which can be written in a complex form,
$`iu_t+u_{zz}+u_{\overline{z}\overline{z}}+4(g+\overline{g})u`$ $`=`$ $`0,`$
$`2g_{\overline{z}}-\left(|u|^2\right)_z`$ $`=`$ $`0,`$ (1.1)
where $`z=x+iy`$, $`\overline{z}=xiy`$, $`u(z,\overline{z},t)`$ and $`g(z,\overline{z},t)`$ are complex functions. This equation appears as the compatibility condition for the two-dimensional Dirac system,
$$\phi _{1\overline{z}}=u\phi _2,\phi _{2z}=\overline{u}\phi _1,$$
(1.2)
coupled to the equations for the time evolution of the eigenfunctions,
$`i\phi _{1t}+\phi _{1zz}+u\phi _{2\overline{z}}-u_{\overline{z}}\phi _2+4g\phi _1`$ $`=`$ $`0,`$
$`i\phi _{2t}+\phi _{2\overline{z}\overline{z}}+\overline{u}_z\phi _1-\overline{u}\phi _{1z}+4\overline{g}\phi _2`$ $`=`$ $`0.`$ (1.3)
The DSII equation was solved formally through the $`\overline{\partial }`$ problem of complex analysis by Fokas and Ablowitz and by Beals and Coifman. Rigorous results on existence and uniqueness of solutions of the initial-value problem were established under a small-norm assumption. The small-norm assumption was used to eliminate homogeneous solutions of equations of the inverse scattering which correspond to bound states and radially symmetric localized waves (lumps) of the DSII equation. When the potential in the linear system becomes weakly localized (in $`L^2`$ but not in $`L^1`$), homogeneous solutions may exist and the analysis developed in that work is not applicable.
The lump solutions were included formally in the inverse scattering scheme, where their weak decay rate was found, $`u\sim \mathrm{O}(R^{-1})`$ as $`R=\sqrt{x^2+y^2}\to \infty `$. This result is only valid for complexified solutions of the DSII equation (when $`|u|^2`$ is considered to be complex). The reality conditions were incorporated in the work of Arkadiev et al., where lumps were shown to decay like $`u\sim \mathrm{O}(R^{-2})`$. Multi-lump solutions were expressed as a ratio of two determinants, or, in a special case, as a ratio of two polynomials, but their dynamical role was left out of consideration.
Recently, structural instability of a single lump of the DSII equation was reported by Gadyl’shin and Kiselev. The authors used methods of perturbation theory based on completeness of squared eigenfunctions of the Dirac system. A similar conclusion was announced by Yurov, who studied the Darboux transformation of the Dirac system.
In this paper, we present an alternative solution of the problem of stability of multi-lump solutions of the DSII equation. The approach generalizes our recent work on spectral decomposition of a linear time-dependent Schrödinger equation with weakly localized (not in $`L^1`$) potentials. We find a new spectral decomposition in terms of single eigenfunctions of the Dirac system. Surprisingly enough, the two-component Dirac system in two dimensions has a scalar spectral decomposition. In contrast, we recall that the Dirac system in one dimension (the so-called AKNS system) has a well-known $`2\times 2`$ matrix spectral decomposition.
Using the scalar spectral decomposition, we associate the multi-lump potentials with eigenvalues embedded into a two-dimensional essential spectrum of the Dirac system. Eigenvalues embedded into a one-dimensional essential spectrum occur for instance for the time-dependent Schrödinger problem. They were found to be structurally unstable under a small variation of the potential. Depending on the sign of the variation, they either disappear or become resonant poles in the complex spectral plane which correspond to lump solutions of the KPI equation.
For the Dirac system in two dimensions, the multi-lump potentials and embedded eigenvalues are more exotic. The discrete spectrum of the Dirac system is separated from the continuous spectrum contribution in the sense that the spectral data satisfy certain constraints near the embedded eigenvalues. These constraints are met for special solutions of the DSII equation such as lumps, but may not be satisfied for a generic combination of lumps and radiative waves. As a result, embedded eigenvalues of the Dirac system generally disappear under a local disturbance of the initial data. Physically, this implies that a localized initial data of the DSII equation decays into radiation except for the cases where the data reduce to special solutions such as lumps.
The paper is organized as follows. Elements of inverse scattering for the Dirac system are reviewed in Section 2, where we find that the discrete spectrum of the Dirac system is prescribed by certain constraints on the spectral data. Spectral decomposition is described in Section 3 with the proof of orthogonality and completeness relations through a proper adjoint problem. The perturbation theory for lumps is developed in Section 4 where some of previous results are recovered. Section 5 contains concluding remarks. Appendix A provides a summary of formulas of the complex $`\overline{\partial }`$-analysis used in proofs of Section 3.
## 2 Spectral Data and Inverse Scattering
Here we review some results on the Dirac system (1.2) and discard henceforth the time dependence of $`u`$, $`g`$ and $`𝝋`$. The potential $`u(z,\overline{z})`$ is assumed to be non-integrable ($`u\notin L^1`$) with the boundary conditions, $`u\sim \mathrm{O}(|z|^{-2})`$ as $`|z|\to \infty `$.
### 2.1 Essential Spectrum of the Dirac System
We define the fundamental matrix solution of Eq. (1.2) in the form,
$$𝝋=[𝝁(z,\overline{z},k,\overline{k})e^{ikz},𝝌(z,\overline{z},k,\overline{k})e^{i\overline{k}\overline{z}}],$$
(2.1)
where $`k`$ is a spectral parameter, $`𝝁(z,\overline{z},k,\overline{k})`$ and $`𝝌(z,\overline{z},k,\overline{k})`$ satisfy the system,
$`\mu _{1\overline{z}}`$ $`=`$ $`u\mu _2,\mu _{2z}=ik\mu _2+\overline{u}\mu _1,`$ (2.2)
$`\chi _{1\overline{z}}`$ $`=`$ $`i\overline{k}\chi _1-u\chi _2,\chi _{2z}=\overline{u}\chi _1.`$ (2.3)
It follows from Eqs. (2.2) and (2.3) that $`𝝁`$ and $`𝝌`$ are related by the symmetry constraint,
$$𝝌(z,\overline{z},k,\overline{k})=𝝈\overline{𝝁}(z,\overline{z},k,\overline{k}),𝝈=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right).$$
(2.4)
We impose the boundary conditions for $`𝝁(z,\overline{z},k,\overline{k})`$,
$$\underset{|k|\to \infty }{lim}𝝁(z,\overline{z},k,\overline{k})=𝐞_1=\left(\begin{array}{c}1\\ 0\end{array}\right).$$
(2.5)
Solutions of Eq. (2.2) with boundary conditions (2.5) can be expressed through the Green’s functions as Fredholm inhomogeneous integral equations,
$`\mu _1(z,\overline{z},k,\overline{k})`$ $`=`$ $`1-{\displaystyle \frac{1}{2\pi i}}{\displaystyle \frac{dz^{\prime }d\overline{z}^{\prime }}{z^{\prime }-z}(u\mu _2)(z^{\prime },\overline{z}^{\prime })},`$ (2.6)
$`\mu _2(z,\overline{z},k,\overline{k})`$ $`=`$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle \frac{dz^{\prime }d\overline{z}^{\prime }}{\overline{z}^{\prime }-\overline{z}}(\overline{u}\mu _1)(z^{\prime },\overline{z}^{\prime })e^{-ik(z-z^{\prime })-i\overline{k}(\overline{z}-\overline{z}^{\prime })}}.`$ (2.7)
Values of $`k`$ for which the homogeneous system associated to Eqs. (2.6) and (2.7) has bounded solutions are called eigenvalues of the discrete spectrum of the Dirac system. Let us suppose that the homogeneous solutions (eigenvalues) are not supported by the potential $`u(z,\overline{z})`$. We evaluate the departure from analyticity of $`𝝁`$ in the $`k`$ plane by calculating the derivative $`\partial 𝝁/\partial \overline{k}`$ directly from the system (2.6)-(2.7) as
$$\frac{\partial 𝝁}{\partial \overline{k}}=b(k,\overline{k})𝐍_\mu (z,\overline{z},k,\overline{k}).$$
(2.8)
Here $`b(k,\overline{k})`$ is the spectral data,
$$b(k,\overline{k})=\frac{1}{2\pi }𝑑zd\overline{z}\left(\overline{u}\mu _1\right)(z,\overline{z})e^{i(kz+\overline{k}\overline{z})},$$
(2.9)
and $`𝐍_\mu (z,\overline{z},k,\overline{k})`$ is a solution of Eq. (2.2) which is linearly independent of $`𝝁(z,\overline{z},k,\overline{k})`$ and satisfies the boundary condition,
$$\underset{|k|\to \infty }{lim}𝐍_\mu (z,\overline{z},k,\overline{k})e^{i(kz+\overline{k}\overline{z})}=𝐞_2=\left(\begin{array}{c}0\\ 1\end{array}\right).$$
(2.10)
This solution can be expressed through the Fredholm’s inhomogeneous equations,
$`N_{1\mu }(z,\overline{z},k,\overline{k})`$ $`=`$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle \frac{dz^{\prime }d\overline{z}^{\prime }}{z^{\prime }-z}(uN_{2\mu })(z^{\prime },\overline{z}^{\prime })},`$ (2.11)
$`N_{2\mu }(z,\overline{z},k,\overline{k})`$ $`=`$ $`e^{-i(kz+\overline{k}\overline{z})}+{\displaystyle \frac{1}{2\pi i}}{\displaystyle \frac{dz^{\prime }d\overline{z}^{\prime }}{\overline{z}^{\prime }-\overline{z}}(\overline{u}N_{1\mu })(z^{\prime },\overline{z}^{\prime })e^{-ik(z-z^{\prime })-i\overline{k}(\overline{z}-\overline{z}^{\prime })}}.`$ (2.12)
The following reduction formula connects $`𝐍_\mu (z,\overline{z},k,\overline{k})`$ and $`𝝁(z,\overline{z},k,\overline{k})`$,
$$𝐍_\mu (z,\overline{z},k,\overline{k})=𝝌(z,\overline{z},k,\overline{k})e^{-i(kz+\overline{k}\overline{z})}=𝝈\overline{𝝁}(z,\overline{z},k,\overline{k})e^{-i(kz+\overline{k}\overline{z})}.$$
(2.13)
If the potential $`u(z,\overline{z})`$ has the boundary values $`u\sim \mathrm{O}(|z|^{-2})`$ as $`|z|\to \infty `$ ($`u\notin L^1`$), then the integral kernel in Eq. (2.9) is not absolutely integrable, while Eqs. (2.6) and (2.7) are still well-defined. We specify complex integration in the $`z`$-plane of a non-absolutely integrable function $`f(z,\overline{z})`$ according to the formula,
$$𝑑zd\overline{z}f(z,\overline{z})=\underset{R\to \infty }{lim}_{|z|\le R}𝑑zd\overline{z}f(z,\overline{z}).$$
(2.14)
The same formula is valid for integrating eigenfunctions of the Dirac system in the $`k`$ plane as well. In Section 3, we use (2.14) when computing the inner products and completeness relations for the Dirac system and its adjoint.
### 2.2 The Discrete Spectrum
Suppose here that integral equations (2.6) and (2.7) have homogeneous solutions at an eigenvalue $`k=k_j`$. The discrete spectrum associated to multi-lump potentials was introduced in earlier work; here we review that approach and give a new result (Proposition 2.1) which clarifies the role of the discrete spectrum in the spectral problem (2.2).
For the discrete spectrum associated to the multi-lump potentials, an isolated eigenvalue $`k=k_j`$ has double multiplicity with the corresponding two bound states $`𝚽_j(z,\overline{z})`$ and $`𝚽_j^{\prime }(z,\overline{z})`$. The bound state $`𝚽_j(z,\overline{z})`$ is a solution of the homogeneous equations,
$`\mathrm{\Phi }_{1j}(z,\overline{z})`$ $`=`$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle \frac{dz^{\prime }d\overline{z}^{\prime }}{z^{\prime }-z}(u\mathrm{\Phi }_{2j})(z^{\prime },\overline{z}^{\prime })},`$ (2.15)
$`\mathrm{\Phi }_{2j}(z,\overline{z})`$ $`=`$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle \frac{dz^{\prime }d\overline{z}^{\prime }}{\overline{z}^{\prime }-\overline{z}}(\overline{u}\mathrm{\Phi }_{1j})(z^{\prime },\overline{z}^{\prime })e^{-ik_j(z-z^{\prime })-i\overline{k}_j(\overline{z}-\overline{z}^{\prime })}},`$ (2.16)
with the boundary conditions as $`|z|\to \infty `$,
$$𝚽_j(z,\overline{z})\sim \frac{𝐞_1}{z}.$$
(2.17)
Equivalently, this boundary condition can be written as renormalization conditions for Eqs. (2.15) and (2.16),
$`{\displaystyle \frac{1}{2\pi i}}{\displaystyle 𝑑z}d\overline{z}(u\mathrm{\Phi }_{2j})(z,\overline{z})`$ $`=`$ $`1,`$ (2.18)
$`{\displaystyle \frac{1}{2\pi i}}{\displaystyle 𝑑z}d\overline{z}(\overline{u}\mathrm{\Phi }_{1j})(z,\overline{z})e^{i(k_jz+\overline{k}_j\overline{z})}`$ $`=`$ $`0.`$ (2.19)
The other (degenerate) bound state $`𝚽_j^{\prime }(z,\overline{z})`$ can be expressed in terms of $`𝚽_j(z,\overline{z})`$ using Eq. (2.13),
$$𝚽_j^{\prime }(z,\overline{z})=𝝈\overline{𝚽}_j(z,\overline{z})e^{-i(k_jz+\overline{k}_j\overline{z})}.$$
(2.20)
The behaviour of the eigenfunction $`𝝁(z,\overline{z},k,\overline{k})`$ near the eigenvalue $`k=k_j`$ becomes complicated due to the fact that the double eigenvalue is embedded into the two-dimensional essential spectrum of the Dirac system (2.2). We prove the following result.
Proposition 2.1. For smooth data $`b(k,\overline{k})\in C^1`$ near $`k=k_j`$, the eigenfunction $`𝝁(z,\overline{z},k,\overline{k})`$ has a pole singularity at $`k=k_j`$ only if
$$b_0=\frac{1}{2\pi }𝑑zd\overline{z}\left(\overline{z}\overline{u}\mathrm{\Phi }_{1j}\right)(z,\overline{z})e^{i(k_jz+\overline{k}_j\overline{z})}=0.$$
(2.21)
Proof. Suppose $`𝝁(z,\overline{z},k,\overline{k})`$ has a pole singularity at $`k=k_j`$. Then, it can be shown from Eq. (2.2) that the meromorphic continuation of $`𝝁(z,\overline{z},k,\overline{k})`$ is given by the limiting relation,
$$\underset{k\to k_j}{lim}\left[𝝁(z,\overline{z},k,\overline{k})-\frac{i𝚽_j(z,\overline{z})}{k-k_j}\right]=(z+z_j)𝚽_j(z,\overline{z})+c_j𝚽_j^{\prime }(z,\overline{z}),$$
(2.22)
where $`z_j`$, $`c_j`$ are some constants. Using Eqs. (2.8), (2.9), and (2.13), we find the differential relation for $`b(k,\overline{k})`$,
$$\frac{\partial b}{\partial \overline{k}}=\frac{b(k,\overline{k})}{2\pi }𝑑zd\overline{z}\left(\overline{u}\overline{\mu }_2\right)(z,\overline{z})-\frac{1}{2\pi i}𝑑zd\overline{z}\left(\overline{z}\overline{u}\mu _1\right)(z,\overline{z})e^{i(kz+\overline{k}\overline{z})}.$$
In the limit $`k\to k_j`$, this equation reduces with the help of Eqs. (2.18) and (2.22) to the form,
$$\frac{\partial b}{\partial \overline{k}}=\frac{b(k,\overline{k})}{\overline{k}-\overline{k}_j}-\frac{b_0}{k-k_j},$$
where $`b_0`$ is given in Eq. (2.21). The reduced equation exhibits the limiting behavior of $`b(k,\overline{k})`$ as $`k\to k_j`$,
$$b(k,\overline{k})\sim b_0\frac{\overline{k}-\overline{k}_j}{k-k_j}\mathrm{ln}|\overline{k}-\overline{k}_j|.$$
(2.23)
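The logarithmic profile can be checked directly; up to the constant prefactor (which depends on the branch conventions for $`\mathrm{ln}|\overline{k}-\overline{k}_j|`$ and is immaterial for the argument), the ansatz reproduces both terms of the reduced equation:
$$\frac{\partial }{\partial \overline{k}}\left[\frac{\overline{k}-\overline{k}_j}{k-k_j}\mathrm{ln}|\overline{k}-\overline{k}_j|\right]=\frac{\mathrm{ln}|\overline{k}-\overline{k}_j|}{k-k_j}+\frac{1}{2(k-k_j)},$$
so the first term matches $`b/(\overline{k}-\overline{k}_j)`$ while the second supplies the source proportional to $`b_0/(k-k_j)`$. In particular, $`b(k,\overline{k})`$ blows up logarithmically near $`k_j`$ unless $`b_0=0`$.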
On the other hand, it follows from Eqs. (2.13), (2.20), and (2.22) that $`𝐍_\mu (z,\overline{z},k,\overline{k})`$ has the limiting behavior,
$$𝐍_\mu (z,\overline{z},k,\overline{k})\sim \frac{i𝚽_j^{\prime }(z,\overline{z})}{\overline{k}-\overline{k}_j}.$$
(2.24)
According to Eqs. (2.23) and (2.24), the right-hand-side of Eq. (2.8) is of order $`\mathrm{O}(b_0|k-k_j|^{-1}\mathrm{ln}|k-k_j|)`$ as $`k\to k_j`$. On the other hand, the left-hand-side of Eq. (2.8) must be of order $`O(1)`$ in the limit $`k\to k_j`$ according to Eq. (2.22). Therefore, the eigenfunction $`𝝁(z,\overline{z},k,\overline{k})`$ has a pole at $`k=k_j`$ only if the constraint $`b_0=0`$ holds. $`\square `$
The limiting relation (2.22) was introduced by Arkadiev et al. However, the authors did not notice that the discrete spectrum is supported only by potentials which satisfy the additional constraint (2.21). In particular, such potentials include the multi-lump solutions for which $`b(k,\overline{k})=0`$ everywhere in the $`k`$-plane.
### 2.3 Expansion Formulas for Inverse Scattering
Combining Eq. (2.8) for the essential spectrum and Eq. (2.22) for the discrete spectrum, we reconstruct the eigenfunction $`𝝁(z,\overline{z},k,\overline{k})`$,
$$𝝁(z,\overline{z},k,\overline{k})=𝐞_1+\underset{j=1}{\overset{n}{\sum }}\frac{i𝚽_j(z,\overline{z})}{k-k_j}+\frac{1}{2\pi i}\frac{dk^{\prime }d\overline{k}^{\prime }}{k^{\prime }-k}b(k^{\prime },\overline{k}^{\prime })𝐍_\mu (z,\overline{z},k^{\prime },\overline{k}^{\prime }),$$
(2.25)
where $`n`$ is the number of distinct eigenvalues $`k_j`$ of double multiplicity. At $`k=k_j`$, this system is coupled with the algebraic system for the bound states,
$$(z+z_j)𝚽_j(z,\overline{z})+c_j𝚽_j^{\prime }(z,\overline{z})=𝐞_1+\underset{l\ne j}{\sum }\frac{i𝚽_l(z,\overline{z})}{k_j-k_l}+\frac{1}{2\pi i}\frac{dkd\overline{k}}{k-k_j}b(k,\overline{k})𝐍_\mu (z,\overline{z},k,\overline{k}).$$
(2.26)
Expansion (2.25) can be related to the inverse scattering transform for the potential $`u(z,\overline{z})`$. It follows from Eq. (2.2) that the eigenfunction $`𝝁(z,\overline{z},k,\overline{k})`$ has the asymptotic expansion as $`|k|\to \infty `$,
$$𝝁(z,\overline{z},k,\overline{k})=𝐞_1+\frac{1}{ik}𝝁_{\infty }(z,\overline{z})+\mathrm{O}(|k|^{-2}),$$
(2.27)
where $`\mu _{2\infty }(z,\overline{z})=\overline{u}(z,\overline{z})`$ and
$$\mu _{1\infty }(z,\overline{z})=\frac{1}{2\pi i}\frac{dz^{\prime }d\overline{z}^{\prime }}{z^{\prime }-z}(|u|^2)(z^{\prime },\overline{z}^{\prime }).$$
We deduce from Eqs. (2.25) and (2.27) that the potential $`\overline{u}(z,\overline{z})`$ is expressed through the eigenfunctions of the Dirac system in the form,
$$\overline{u}(z,\overline{z})=\underset{j=1}{\overset{n}{\sum }}\mathrm{\Phi }_{2j}(z,\overline{z})-\frac{1}{2\pi }𝑑kd\overline{k}b(k,\overline{k})N_{2\mu }(z,\overline{z},k,\overline{k}).$$
(2.28)
Formulas (2.6) to (2.28) constitute a standard framework for the inverse scattering transform of the DSII equation with the new relation (2.21). The existence and uniqueness of solutions of the Fredholm integral equations (2.6) and (2.7) and of the $`\overline{\partial }`$ problem (2.8) and (2.25) were proved under the small-norm assumption for the potential $`u(x,y)`$,
$$\left(\underset{(x,y)\in 𝐑^2}{sup}|u|(x,y)\right)\left(|u(x,y)|𝑑x𝑑y\right)<\frac{\pi }{8}.$$
In this case $`n=0`$ and $`b(k,\overline{k})\ne 0`$. The nonlinear two-dimensional Fourier transform associated to this scheme was discussed in Examples 8-10 of Chapter 7.7 of Ref. . Indeed, the connection formula (2.28) implies that there is a scalar spectral decomposition of $`\overline{u}(z,\overline{z})`$ through $`N_{2\mu }(z,\overline{z},k,\overline{k})`$ for $`n=0`$. In order to close the decomposition, one could use Eqs. (2.9) and (2.13) to construct a “completeness relation” for the expansion of $`\delta (z^{\prime }-z)`$ in the form,
$$\delta (z^{\prime }-z)=\frac{1}{2\pi ^2i}𝑑kd\overline{k}\overline{N}_{2\mu }(z^{\prime },\overline{z}^{\prime },k,\overline{k})N_{2\mu }(z,\overline{z},k,\overline{k}).$$
However, we show in Proposition 3.4 below that a completeness theorem for Eq. (2.2) is different and is based on the set of eigenfunctions of the adjoint Dirac system.
## 3 Basis for a Scalar Spectral Decomposition
In this section, we specify the adjoint problem for the Dirac system (2.2) and establish orthogonality and completeness relations.
### 3.1 The Adjoint System
The adjoint system for Eq. (2.2) is
$$\mu _{1z}^a=ik\mu _1^a-u\mu _2^a,\mu _{2\overline{z}}^a=\overline{u}\mu _1^a,$$
(3.1)
which provides the balance equation,
$$i(k^{\prime }-k)\mu _1^a(k^{\prime })\mu _2(k)=\frac{\partial }{\partial z}\left[\mu _1^a(k^{\prime })\mu _2(k)\right]-\frac{\partial }{\partial \overline{z}}\left[\mu _2^a(k^{\prime })\mu _1(k)\right].$$
(3.2)
The system (3.1) admits plane solutions $`𝝁^a(z,\overline{z},k,\overline{k})`$ and oscillatory-type solutions $`𝐍_\mu ^a(z,\overline{z},k,\overline{k})`$ with the boundary conditions,
$`\underset{|k|\to \infty }{lim}𝝁^a(z,\overline{z},k,\overline{k})`$ $`=`$ $`𝐞_2,`$ (3.3)
$`\underset{|k|\to \infty }{lim}𝐍_\mu ^a(z,\overline{z},k,\overline{k})e^{-i(kz+\overline{k}\overline{z})}`$ $`=`$ $`𝐞_1.`$ (3.4)
The adjoint eigenfunctions $`𝐍_\mu ^a(z,\overline{z},k,\overline{k})`$ can be expressed through the Green functions,
$`N_{1\mu }^a(z,\overline{z})`$ $`=`$ $`e^{i(kz+\overline{k}\overline{z})}-{\displaystyle \frac{1}{2\pi i}}{\displaystyle \frac{dz^{\prime }d\overline{z}^{\prime }}{\overline{z}^{\prime }-\overline{z}}(uN_{2\mu }^a)(z^{\prime },\overline{z}^{\prime })e^{ik(z-z^{\prime })+i\overline{k}(\overline{z}-\overline{z}^{\prime })}},`$ (3.5)
$`N_{2\mu }^a(z,\overline{z})`$ $`=`$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle \frac{dz^{\prime }d\overline{z}^{\prime }}{z^{\prime }-z}(\overline{u}N_{1\mu }^a)(z^{\prime },\overline{z}^{\prime })}.`$ (3.6)
They are related to the adjoint eigenfunctions $`𝝁^a(z,\overline{z},k,\overline{k})`$ by the formula,
$$𝐍_\mu ^a(z,\overline{z},k,\overline{k})=𝝈\overline{𝝁}^a(z,\overline{z},k,\overline{k})e^{i(kz+\overline{k}\overline{z})}.$$
(3.7)
Using this representation, we prove the following result.
Lemma 3.1. The spectral data $`b(k,\overline{k})`$ is expressed in terms of the adjoint eigenfunctions as
$$b(k,\overline{k})=\frac{1}{2\pi }𝑑zd\overline{z}(\overline{u}N_{1\mu }^a)(z,\overline{z}).$$
(3.8)
Proof. Multiplying Eq. (3.5) by $`\overline{u}\mu _1(k)`$, integrating over $`dzd\overline{z}`$ and using Eq. (2.7), we express $`b(k,\overline{k})`$ defined in Eq. (2.9) in the form,
$$b(k,\overline{k})=\frac{1}{2\pi }𝑑zd\overline{z}\left[\overline{u}\mu _1(k)N_{1\mu }^a(k)-u\mu _2(k)N_{2\mu }^a(k)\right].$$
(3.9)
On the other hand, multiplying Eq. (2.6) by $`uN_{1\mu }^a(k)`$, integrating over $`dzd\overline{z}`$, and using Eqs. (3.6) and (3.9), we get Eq. (3.8). $`\square `$
Suppose now that $`k=k_j`$ is an isolated double eigenvalue of Eq. (2.2) with the bound states $`𝚽_j(z,\overline{z})`$ and $`𝚽_j^{\prime }(z,\overline{z})`$ given by Eqs. (2.15) – (2.20). Suppose also that $`k=k_j^a`$ is an eigenvalue of the adjoint system (3.1) with the adjoint bound states $`𝚽_j^a(z,\overline{z})`$ and $`𝚽_j^{a\prime }(z,\overline{z})`$.
Lemma 3.2. If $`k_j`$ is a double eigenvalue of the Dirac system (2.2), then $`k_j`$ is also a double eigenvalue of the adjoint system (3.1).
Proof. We use Eq. (3.2) with $`𝝁=𝚽_j(z,\overline{z})`$ and $`𝝁^a=𝚽_j^a(z,\overline{z})`$ at $`k=k_j`$ and $`k^{\prime }=k_j^a`$ and integrate over $`dzd\overline{z}`$ with the help of Eq. (A.3) of Appendix A. The contour contribution of the integral vanishes due to the boundary conditions (2.17) and (3.12) and the resulting expression is
$$(k_j^a-k_j)𝑑zd\overline{z}(\mathrm{\Phi }_{1j}^a\mathrm{\Phi }_{2j})(z,\overline{z})=0.$$
The relation $`k_j^a=k_j`$ follows from this formula if the integral is non-zero at $`k_j^a=k_j`$ (which is proved below in Eq. (3.22)). The other possibility is when $`k_j^a\ne k_j`$ but $`\mathrm{\Phi }_{1j}^a`$ is orthogonal to $`\mathrm{\Phi }_{2j}`$. We do not consider such a non-generic situation. The other bound state $`𝚽_j^{a\prime }`$ at $`k_j^a=k_j`$ can be defined using the symmetry relation (see Eq. (3.15) below). $`\square `$
The adjoint bound state $`𝚽_j^a(z,\overline{z})`$ solves the homogeneous equations,
$`\mathrm{\Phi }_{1j}^a(z,\overline{z})`$ $`=`$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle \frac{dz^{\prime }d\overline{z}^{\prime }}{\overline{z}^{\prime }-\overline{z}}(u\mathrm{\Phi }_{2j}^a)(z^{\prime },\overline{z}^{\prime })e^{ik_j(z-z^{\prime })+i\overline{k}_j(\overline{z}-\overline{z}^{\prime })}},`$ (3.10)
$`\mathrm{\Phi }_{2j}^a(z,\overline{z})`$ $`=`$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle \frac{dz^{\prime }d\overline{z}^{\prime }}{z^{\prime }-z}(\overline{u}\mathrm{\Phi }_{1j}^a)(z^{\prime },\overline{z}^{\prime })}`$ (3.11)
with the boundary condition as $`|z|\to \infty `$,
$$𝚽_j^a(z,\overline{z})\sim \frac{𝐞_2}{z},$$
(3.12)
and the normalization conditions,
$`{\displaystyle \frac{1}{2\pi i}}{\displaystyle 𝑑z}d\overline{z}(\overline{u}\mathrm{\Phi }_{1j}^a)(z,\overline{z})`$ $`=`$ $`1,`$ (3.13)
$`{\displaystyle \frac{1}{2\pi i}}{\displaystyle 𝑑z}d\overline{z}(u\mathrm{\Phi }_{2j}^a)(z,\overline{z})e^{i(k_jz+\overline{k}_j\overline{z})}`$ $`=`$ $`0.`$ (3.14)
In addition, the bound state $`𝚽_j^{a\prime }(z,\overline{z})`$ is related to $`𝚽_j^a(z,\overline{z})`$ according to the symmetry formula,
$$𝚽_j^{a\prime }(z,\overline{z})=𝝈\overline{𝚽}_j^a(z,\overline{z})e^{i(k_jz+\overline{k}_j\overline{z})}.$$
(3.15)
Using Eqs. (3.1)–(3.15), we see that the adjoint eigenfunction $`𝝁^a(z,\overline{z},k,\overline{k})`$ satisfies relations similar to those for $`𝝁(z,\overline{z},k,\overline{k})`$,
$$\frac{\partial 𝝁^a}{\partial \overline{k}}=\overline{b}(k,\overline{k})𝐍_\mu ^a(z,\overline{z},k,\overline{k})$$
(3.16)
and
$$\underset{k\to k_j}{lim}\left[𝝁^a(z,\overline{z},k,\overline{k})+\frac{i𝚽_j^a(z,\overline{z})}{k-k_j}\right]=(z+z_j)𝚽_j^a(z,\overline{z})-\overline{c}_j𝚽_j^{a\prime }(z,\overline{z}).$$
(3.17)
The expansions for inverse scattering transform of the adjoint eigenfunctions can be found in the form,
$$𝝁^a(z,\overline{z},k,\overline{k})=𝐞_2-\underset{j=1}{\overset{n}{\sum }}\frac{i𝚽_j^a(z,\overline{z})}{k-k_j}-\frac{1}{2\pi i}\frac{dk^{\prime }d\overline{k}^{\prime }}{k^{\prime }-k}\overline{b}(k^{\prime },\overline{k}^{\prime })𝐍_\mu ^a(z,\overline{z},k^{\prime },\overline{k}^{\prime })$$
(3.18)
and
$$(z^{\prime }+z_j)𝚽_j^a(z^{\prime },\overline{z}^{\prime })-\overline{c}_j𝚽_j^{a\prime }(z^{\prime },\overline{z}^{\prime })=𝐞_2-\underset{l\ne j}{\sum }\frac{i𝚽_l^a(z^{\prime },\overline{z}^{\prime })}{k_j-k_l}-\frac{1}{2\pi i}\frac{dkd\overline{k}}{k-k_j}\overline{b}(k,\overline{k})𝐍_\mu ^a(z^{\prime },\overline{z}^{\prime },k,\overline{k}).$$
(3.19)
### 3.2 Orthogonality and Completeness Relations
Using the Dirac system (2.2) and its adjoint system (3.1), we prove the orthogonality and completeness relations for the set of eigenfunctions $`S=[N_{2\mu }(k,\overline{k}),\{\mathrm{\Phi }_{2j}\}_{j=1}^n]`$ and its adjoint set $`S^a=[N_{1\mu }^a(k,\overline{k}),\{\mathrm{\Phi }_{1j}^a\}_{j=1}^n]`$.
Proposition 3.3. The eigenfunctions $`N_{2\mu }(z,\overline{z},k,\overline{k})`$ and $`\mathrm{\Phi }_{2j}(z,\overline{z})`$ are orthogonal to the eigenfunctions $`N_{1\mu }^a(z,\overline{z},k,\overline{k})`$ and $`\mathrm{\Phi }_{1j}^a(z,\overline{z})`$ as follows
$$⟨N_{1\mu }^a(k^{\prime })|N_{2\mu }(k)⟩_z=2\pi ^2i\delta (k^{\prime }-k),$$
(3.20)
$$⟨N_{1\mu }^a(k)|\mathrm{\Phi }_{2j}⟩_z=⟨\mathrm{\Phi }_{1j}^a|N_{2\mu }(k)⟩_z=0,$$
(3.21)
$$⟨\mathrm{\Phi }_{1l}^a|\mathrm{\Phi }_{2j}⟩_z=2\pi i\delta _{jl},$$
(3.22)
where the inner product is defined as
$$⟨g(k^{\prime })|f(k)⟩_z=𝑑zd\overline{z}g(z,\overline{z},k^{\prime },\overline{k}^{\prime })f(z,\overline{z},k,\overline{k}).$$
Proof. Using Eqs. (2.12) and (3.5), we expand the inner product in Eq. (3.20) as
$$⟨N_{1\mu }^a(k^{\prime })|N_{2\mu }(k)⟩_z=I_0$$
$$+𝑑zd\overline{z}\left(uN_{2\mu }^a\right)(z,k^{\prime })e^{i(k^{\prime }z+\overline{k}^{\prime }\overline{z})}I_1(z)-𝑑zd\overline{z}\left(\overline{u}N_{1\mu }\right)(z,k)e^{-i(kz+\overline{k}\overline{z})}I_1(z)$$
$$+\frac{1}{2\pi i}𝑑zd\overline{z}\left(uN_{2\mu }^a\right)(z,k^{\prime })e^{i(k^{\prime }z+\overline{k}^{\prime }\overline{z})}\frac{dz^{\prime }d\overline{z}^{\prime }}{\overline{z}^{\prime }-\overline{z}}\left(\overline{u}N_{1\mu }\right)(z^{\prime },k)e^{-i(kz^{\prime }+\overline{k}\overline{z}^{\prime })}\left[I_1(z)-I_1(z^{\prime })\right],$$
where
$$I_0=𝑑zd\overline{z}e^{i(k^{\prime }-k)z+i(\overline{k}^{\prime }-\overline{k})\overline{z}}=2\pi ^2i\delta (k^{\prime }-k)$$
(3.23)
and
$$I_1(z)=\frac{1}{2\pi i}\frac{dz^{\prime }d\overline{z}^{\prime }}{\overline{z}^{\prime }-\overline{z}}e^{i(k^{\prime }-k)z^{\prime }+i(\overline{k}^{\prime }-\overline{k})\overline{z}^{\prime }}=\frac{1}{i(k^{\prime }-k)}e^{i(k^{\prime }-k)z+i(\overline{k}^{\prime }-\overline{k})\overline{z}}.$$
(3.24)
The integrals $`I_0`$ and $`I_1(z)`$ are computed in Appendix A. Using these formulas, we find the inner product in Eq. (3.20) in the form,
$$⟨N_{1\mu }^a(k^{\prime })|N_{2\mu }(k)⟩_z=2\pi ^2i\delta (k^{\prime }-k)+\frac{1}{i(k^{\prime }-k)}ℛ(k,k^{\prime }),$$
where the residual term $`ℛ(k,k^{\prime })`$ is expressed in the form,
$$ℛ(k,k^{\prime })=𝑑zd\overline{z}\left[uN_{2\mu }^a(k^{\prime })N_{2\mu }(k)-\overline{u}N_{1\mu }^a(k^{\prime })N_{1\mu }(k)\right],$$
with the help of Eqs. (2.12) and (3.5). We show that $`ℛ(k,k^{\prime })=0`$ by multiplying Eq. (3.6) by $`\overline{u}N_{1\mu }(k)`$, integrating over $`dzd\overline{z}`$ and using Eq. (2.11).
The zero inner products in Eqs. (3.21) and (3.22) for $`j\ne l`$ are obtained in a similar way with the help of the Fredholm equations for eigenfunctions $`𝚽_j`$, $`𝚽_j^a`$, $`𝐍_\mu `$, and $`𝐍_\mu ^a`$. In order to find the non-zero inner product (3.22) for $`j=l`$ we evaluate the following integral by using the same integral equations,
$`{\displaystyle 𝑑z}d\overline{z}\mathrm{\Phi }_{1j}^a\mu _2(k)`$ $`=`$ $`{\displaystyle \frac{1}{i(k_j-k)}}{\displaystyle 𝑑z}d\overline{z}\left[u\mathrm{\Phi }_{2j}^a\mu _2(k)-\overline{u}\mathrm{\Phi }_{1j}^a\mu _1(k)\right]`$
$`=`$ $`{\displaystyle \frac{1}{i(k-k_j)}}{\displaystyle 𝑑z}d\overline{z}\overline{u}\mathrm{\Phi }_{1j}^a.`$
Using Eq. (3.13), the right-hand-side identifies to $`\frac{1}{i(k-k_j)}`$. Substituting Eq. (2.25) in the left-hand-side and using the zero inner products (3.21) and (3.22), we find Eq. (3.22) for $`j=l`$. $`\square `$
Proposition 3.4. The eigenfunctions $`N_{2\mu }(z,\overline{z},k,\overline{k})`$ and $`\mathrm{\Phi }_{2j}(z,\overline{z})`$ are complete with respect to the adjoint eigenfunctions $`N_{1\mu }^a(z,\overline{z},k,\overline{k})`$ and $`\mathrm{\Phi }_{1j}^a(z,\overline{z})`$ according to the identity,
$$\delta (z^{\prime }-z)=\frac{1}{2\pi ^2i}𝑑kd\overline{k}N_{1\mu }^a(z^{\prime },\overline{z}^{\prime },k,\overline{k})N_{2\mu }(z,\overline{z},k,\overline{k})-\frac{1}{\pi }\underset{j=1}{\overset{n}{\sum }}\mathrm{\Phi }_{1j}^a(z^{\prime },\overline{z}^{\prime })\mathrm{\Phi }_{2j}(z,\overline{z}).$$
(3.25)
Proof. Using the symmetry relations (2.13) and (3.7), we express the integral in Eq. (3.25) as
$$⟨N_{1\mu }^a(z^{\prime })|N_{2\mu }(z)⟩_k=𝑑kd\overline{k}\overline{\mu }_2^a(z^{\prime },\overline{z}^{\prime },k,\overline{k})\overline{\mu }_1(z,\overline{z},k,\overline{k})e^{ik(z^{\prime }-z)+i\overline{k}(\overline{z}^{\prime }-\overline{z})}.$$
(3.26)
We use Eqs. (2.25), (2.26), (3.18), and (3.19) and find the pole decomposition for the integrand in Eq. (3.26),
$$\overline{\mu }_2^a(z^{\prime })\overline{\mu }_1(z)=1+\underset{j=1}{\overset{n}{\sum }}\frac{i}{\overline{k}-\overline{k}_j}\left[\overline{c}_j\overline{\mathrm{\Phi }}_{2j}^a(z^{\prime },\overline{z}^{\prime })\overline{\mathrm{\Phi }}_{1j}^{\prime }(z,\overline{z})+c_j\overline{\mathrm{\Phi }}_{2j}^a(z^{\prime },\overline{z}^{\prime })\overline{\mathrm{\Phi }}_{1j}(z,\overline{z})\right]$$
$$+\underset{j=1}{\overset{n}{\sum }}\frac{i}{\overline{k}-\overline{k}_j}(\overline{z}-\overline{z}^{\prime })\overline{\mathrm{\Phi }}_{2j}^a(z^{\prime },\overline{z}^{\prime })\overline{\mathrm{\Phi }}_{1j}(z,\overline{z})+\underset{j=1}{\overset{n}{\sum }}\frac{\overline{\mathrm{\Phi }}_{2j}^a(z^{\prime },\overline{z}^{\prime })\overline{\mathrm{\Phi }}_{1j}(z,\overline{z})}{(\overline{k}-\overline{k}_j)^2}$$
$$+\frac{1}{2\pi i}\frac{dk^{\prime }d\overline{k}^{\prime }}{\overline{k}^{\prime }-\overline{k}}\left[\overline{\mu }_2^a(z^{\prime })\overline{b}\overline{N}_{1\mu }(z)-\overline{\mu }_1(z)b\overline{N}_{2\mu }^a(z^{\prime })\right](k^{\prime },\overline{k}^{\prime }).$$
(3.27)
We substitute (3.27) into Eq. (3.26) and reduce the integral to the form,
$$⟨N_{1\mu }^a(z^{\prime })|N_{2\mu }(z)⟩_k=I_0-2\pi \underset{j=1}{\overset{n}{\sum }}\left[\overline{c}_j\overline{\mathrm{\Phi }}_{2j}^a(z^{\prime },\overline{z}^{\prime })\overline{\mathrm{\Phi }}_{1j}^{\prime }(z,\overline{z})+c_j\overline{\mathrm{\Phi }}_{2j}^a(z^{\prime },\overline{z}^{\prime })\overline{\mathrm{\Phi }}_{1j}(z,\overline{z})\right]I_1(k_j)$$
$$+\underset{j=1}{\overset{n}{\sum }}\overline{\mathrm{\Phi }}_{2j}^a(z^{\prime },\overline{z}^{\prime })\overline{\mathrm{\Phi }}_{1j}(z,\overline{z})\left[2\pi (\overline{z}^{\prime }-\overline{z})I_1(k_j)+I_2(k_j)\right]$$
$$-𝑑kd\overline{k}I_1(k)\left[\overline{\mu }_2^a(z^{\prime })\overline{b}\overline{N}_{1\mu }(z)-\overline{\mu }_1(z)b\overline{N}_{2\mu }^a(z^{\prime })\right](k,\overline{k})e^{ik(z^{\prime }-z)+i\overline{k}(\overline{z}^{\prime }-\overline{z})},$$
(3.28)
where the integrals $`I_0`$ and $`I_1(k)`$ are given in Eqs. (3.23) and (3.24) respectively, with $`z`$ and $`k`$ interchanged, while the integral $`I_2(k_j)`$ is defined by
$$I_2(k_j)=\underset{ϵ\to 0}{lim}_{|k-k_j|\ge ϵ}\frac{dkd\overline{k}}{(\overline{k}-\overline{k}_j)^2}e^{ik(z^{\prime }-z)+i\overline{k}(\overline{z}^{\prime }-\overline{z})}.$$
(3.29)
The integral $`I_2(k_j)`$ is found in Appendix A in the form, $`I_2(k_j)=-2\pi (\overline{z}^{\prime }-\overline{z})I_1(k_j)`$, such that the third term in Eq. (3.28) vanishes. In order to express the second term in Eq. (3.28) we use Eqs. (2.26), (3.19), (2.20), and (3.15) and derive the relation,
$$\underset{j=1}{\overset{n}{\sum }}\left[\overline{c}_j\overline{\mathrm{\Phi }}_{2j}^a(z^{\prime },\overline{z}^{\prime })\overline{\mathrm{\Phi }}_{1j}^{\prime }(z,\overline{z})+c_j\overline{\mathrm{\Phi }}_{2j}^a(z^{\prime },\overline{z}^{\prime })\overline{\mathrm{\Phi }}_{1j}(z,\overline{z})\right]e^{ik_j(z^{\prime }-z)+i\overline{k}_j(\overline{z}^{\prime }-\overline{z})}=$$
$$\underset{j=1}{\overset{n}{\sum }}\left[\overline{c}_j\mathrm{\Phi }_{1j}^a(z^{\prime },\overline{z}^{\prime })\mathrm{\Phi }_{2j}(z,\overline{z})+c_j\mathrm{\Phi }_{1j}^a(z^{\prime },\overline{z}^{\prime })\mathrm{\Phi }_{2j}^{\prime }(z,\overline{z})\right]=$$
$$(z^{\prime }-z)\underset{j=1}{\overset{n}{\sum }}\mathrm{\Phi }_{1j}^a(z^{\prime },\overline{z}^{\prime })\mathrm{\Phi }_{2j}(z,\overline{z})+\frac{1}{2\pi i}\underset{j=1}{\overset{n}{\sum }}\frac{dkd\overline{k}}{k-k_j}\left[\mathrm{\Phi }_{2j}(z)\overline{b}N_{1\mu }^a(z^{\prime })+\mathrm{\Phi }_{1j}^a(z^{\prime })bN_{2\mu }(z)\right](k,\overline{k}).$$
(3.30)
Using this expression and Eqs. (3.23), (3.24), and (3.29), we rewrite Eq. (3.28) in the form,
$$⟨N_{1\mu }^a(z^{\prime })|N_{2\mu }(z)⟩_k=2\pi ^2i\delta (z^{\prime }-z)-2\pi i\underset{j=1}{\overset{n}{\sum }}\mathrm{\Phi }_{1j}^a(z^{\prime },\overline{z}^{\prime })\mathrm{\Phi }_{2j}(z,\overline{z})$$
$$+\frac{i}{z^{\prime }-z}𝑑kd\overline{k}\frac{\partial }{\partial \overline{k}}\left[\left(\mu _1^a(z^{\prime })+\underset{j=1}{\overset{n}{\sum }}\frac{i\mathrm{\Phi }_{1j}^a(z^{\prime })}{k-k_j}\right)\left(\mu _2(z)-\underset{j=1}{\overset{n}{\sum }}\frac{i\mathrm{\Phi }_{2j}(z)}{k-k_j}\right)\right].$$
The last integral vanishes according to Eq. (A.3) of Appendix A and the boundary conditions (2.5) and (3.3). $`\square `$
Our main result for the spectral decomposition associated to the Dirac system (2.2) follows from the above orthogonality and completeness relations.
Proposition 3.5. An arbitrary scalar function $`f(z,\overline{z})`$ satisfying the condition $`f(z,\overline{z})\sim \mathrm{O}(|z|^{-2})`$ as $`|z|\to \infty `$ can be decomposed through the set $`S=[N_{2\mu }(z,\overline{z},k,\overline{k}),\{\mathrm{\Phi }_{2j}(z,\overline{z})\}_{j=1}^n]`$.
Proof. The spectral decomposition is defined through the orthogonality relations (3.20)–(3.22) as
$$f(z,\overline{z})=𝑑kd\overline{k}\alpha (k,\overline{k})N_{2\mu }(z,\overline{z},k,\overline{k})+\underset{j=1}{\overset{n}{\sum }}\alpha _j\mathrm{\Phi }_{2j}(z,\overline{z}),$$
(3.31)
where
$$\alpha (k,\overline{k})=\frac{1}{4\pi ^2}⟨N_{1\mu }^a(k)|f⟩_z,\alpha _j=\frac{1}{2\pi i}⟨\mathrm{\Phi }_{1j}^a|f⟩_z.$$
(3.32)
Provided the condition on $`f(z,\overline{z})`$ is satisfied, we interchange integration with respect to $`dzd\overline{z}`$ and $`dkd\overline{k}`$ and use the completeness formula (3.25). $`\square `$
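As a quick consistency check of Proposition 3.5 (stated here under the sign conventions of Eqs. (2.28), (3.8) and (3.32), so the overall signs are indicative), apply the decomposition to $`f=\overline{u}`$. The normalization (3.13) immediately gives the bound-state coefficients,
$$\alpha _j=\frac{1}{2\pi i}⟨\mathrm{\Phi }_{1j}^a|\overline{u}⟩_z=\frac{1}{2\pi i}𝑑zd\overline{z}(\overline{u}\mathrm{\Phi }_{1j}^a)(z,\overline{z})=1,$$
while Lemma 3.1 identifies the continuous coefficient $`\alpha (k,\overline{k})`$ with a multiple of the spectral data $`b(k,\overline{k})`$. The expansion (3.31) then reproduces the reconstruction formula (2.28), as it must: the bound-state part of $`\overline{u}`$ is carried entirely by the eigenfunctions $`\mathrm{\Phi }_{2j}`$.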
The spectral decomposition presented here is different from that of Kiselev. In the latter approach, the function $`f(z,\overline{z})`$ is spanned by squared eigenfunctions of the original problem (1.2) defined according to oscillatory-type behaviour at infinity. In our approach, we transformed the system (1.2) to the form (2.2) and defined the oscillatory-type eigenfunctions according to the single eigenfunctions $`𝐍_\mu (z,\overline{z},k,\overline{k})`$. We also notice that the (degenerate) bound states $`𝚽_j^{\prime }(z,\overline{z})`$ are not relevant for the spectral decomposition, although they appear implicitly through the meromorphic contributions of the eigenfunctions $`𝐍_\mu (z,\overline{z},k,\overline{k})`$ at $`k=k_j`$ (see Section 4).
## 4 Perturbation Theory for a Single Lump
We use the scalar spectral decomposition based on Eq. (3.31) and develop a perturbation theory for multi-lump solutions of the DSII equation. We present formulas in the case of a single lump ($`n=1`$); the case of multi-lump potentials can be obtained by summing over the indices $`j`$, $`l`$ occurring in the expressions below.
The single-lump potential $`u(z,\overline{z})`$ has the form,
$$u(z,\overline{z})=\frac{c_j}{|z+z_j|^2+|c_j|^2}e^{i(k_jz+\overline{k}_j\overline{z})},$$
(4.1)
where $`c_j`$, $`z_j`$ are complex parameters. The associated bound states follow from Eqs. (2.26) and (3.19) as
$`𝚽_j(z,\overline{z})`$ $`=`$ $`{\displaystyle \frac{1}{|z+z_j|^2+|c_j|^2}}\left[\begin{array}{c}\overline{z}+\overline{z}_j\\ \overline{c}_je^{-i(k_jz+\overline{k}_j\overline{z})}\end{array}\right],`$ (4.4)
$`𝚽_j^a(z,\overline{z})`$ $`=`$ $`{\displaystyle \frac{1}{|z+z_j|^2+|c_j|^2}}\left[\begin{array}{c}c_je^{i(k_jz+\overline{k}_j\overline{z})}\\ \overline{z}+\overline{z}_j\end{array}\right].`$ (4.7)
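The renormalization condition (2.18) can be checked directly on this pair: with the exponent reconstructed as $`e^{-i(k_jz+\overline{k}_j\overline{z})}`$ in (4.4), the oscillatory factors in $`u\mathrm{\Phi }_{2j}`$ cancel, and with $`dzd\overline{z}=2idxdy`$ the condition reduces to a radially symmetric integral. A minimal numerical sketch (the value of $`|c_j|^2`$ is an arbitrary test choice):

```python
import numpy as np
from scipy.integrate import quad

c2 = 2.5   # |c_j|^2, arbitrary positive test value
# (1/2 pi i) int dz dzbar (u Phi_2j) = (1/pi) int dx dy |c_j|^2 / (r^2 + |c_j|^2)^2,
# which after the angular integration is 2 * int_0^inf r c2 / (r^2 + c2)^2 dr
val, err = quad(lambda r: 2.0 * r * c2 / (r**2 + c2)**2, 0.0, np.inf)
print(val)   # -> 1.0, independent of c2; the same integral gives the lump energy N_0 = pi
```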
We first consider a general perturbation to the single lump subject to the localization condition, $`\mathrm{\Delta }u\sim \mathrm{O}(|z|^{-2})`$ as $`|z|\to \infty `$. We then derive explicit formulas for a special form of the perturbation term $`\mathrm{\Delta }u(z,\overline{z})`$.
### 4.1 General Perturbation of a Single Lump
Suppose the potential is specified as $`u^ϵ=u(z,\overline{z})+ϵ\mathrm{\Delta }u(z,\overline{z})`$, where $`u(z,\overline{z})`$ is given by Eq. (4.1) and $`\mathrm{\Delta }u(z,\overline{z})`$ is a perturbation term. Two bound states $`𝚽_j(z,\overline{z})`$ and $`𝚽_j^{\prime }(z,\overline{z})`$ are supported by a single-lump potential $`u(z,\overline{z})`$ at a single point $`k=k_j`$. The spectral decomposition given by Eq. (3.31) provides a basis for expansion of $`\mu _2^ϵ(z,\overline{z},\kappa ,\overline{\kappa })`$ at $`k=\kappa `$,
$$\mu _2^ϵ(z,\overline{z},\kappa ,\overline{\kappa })=𝑑kd\overline{k}\alpha (k,\overline{k})N_{2\mu }(z,\overline{z},k,\overline{k})+\alpha _j\mathrm{\Phi }_{2j}(z,\overline{z})$$
(4.8)
where $`\alpha (k,\overline{k})`$ and $`\alpha _j`$ are defined by Eq. (3.32) and depend on the parameter $`\kappa `$. The other component $`\mu _1^ϵ(z,\overline{z},\kappa ,\overline{\kappa })`$ can be expressed from Eq. (2.2) as
$$\mu _1^ϵ(z,\overline{z},\kappa ,\overline{\kappa })=𝑑kd\overline{k}\alpha (k,\overline{k})N_{1\mu }(z,\overline{z},k,\overline{k})+\alpha _j\mathrm{\Phi }_{1j}(z,\overline{z})+ϵ\mathrm{\Delta }\mu _1(z,\overline{z}),$$
(4.9)
where the remainder term $`\mathrm{\Delta }\mu _1(z,\overline{z})`$ solves the equation,
$$\left(\mathrm{\Delta }\mu _1\right)_{\overline{z}}=\mathrm{\Delta }u\mu _2^ϵ.$$
We write the solution of this equation in the form,
$$\mathrm{\Delta }\mu _1(z,\overline{z})=A-\frac{1}{2\pi i}\frac{dz^{\prime }d\overline{z}^{\prime }}{z^{\prime }-z}\left(\mathrm{\Delta }u\mu _2^ϵ\right)(z^{\prime },\overline{z}^{\prime }),$$
(4.10)
subject to the boundary condition as $`|z|\to \infty `$,
$$\mathrm{\Delta }\mu _1(z,\overline{z})\sim A+\mathrm{O}(z^{-1}),$$
where $`A`$ is an arbitrary constant. Using the explicit representation (4.10), we transform Eq. (2.2) into the system of integral equations for $`\alpha (k,\overline{k})`$ and $`\alpha _j`$,
$$\alpha (k,\overline{k})=\frac{ϵ}{4\pi ^2i(k-\kappa )}\left[𝑑k^{\prime }d\overline{k}^{\prime }K(k,\overline{k},k^{\prime },\overline{k}^{\prime })\alpha (k^{\prime },\overline{k}^{\prime })+K_j(k,\overline{k})\alpha _j+R(k,\overline{k})A\right]+\mathrm{O}(ϵ^2),$$
(4.11)
$$\alpha _j=\frac{ϵ}{2\pi (k_j-\kappa )}\left[𝑑kd\overline{k}P_j(k,\overline{k})\alpha (k,\overline{k})+K_{jl}\alpha _l+R_jA\right]+\mathrm{O}(ϵ^2),$$
(4.12)
where
$$K(k,\overline{k},k^{\prime },\overline{k}^{\prime })=⟨𝐍_\mu ^a(k)|𝐍_\mu (k^{\prime })⟩_{\mathrm{\Delta }u},K_j(k,\overline{k})=⟨𝐍_\mu ^a(k)|𝚽_j⟩_{\mathrm{\Delta }u},$$
$$P_j(k,\overline{k})=⟨𝚽_j^a|𝐍_\mu (k)⟩_{\mathrm{\Delta }u},K_{jl}=⟨𝚽_j^a|𝚽_l⟩_{\mathrm{\Delta }u},$$
and the scalar product for squared eigenfunctions is defined as
$$⟨𝐟(k)|𝐠(k^{\prime })⟩_h=𝑑zd\overline{z}\left[\overline{h}(z,\overline{z})f_1(z,\overline{z},k,\overline{k})g_1(z,\overline{z},k^{\prime },\overline{k}^{\prime })+h(z,\overline{z})f_2(z,\overline{z},k,\overline{k})g_2(z,\overline{z},k^{\prime },\overline{k}^{\prime })\right].$$
The non-homogeneous terms $`R(k,\overline{k})`$ and $`R_j`$ can be computed exactly as
$`R(k,\overline{k})`$ $`=`$ $`{\displaystyle 𝑑z}d\overline{z}\left(\overline{u}N_{1\mu }^a\right)(z,\overline{z})=2\pi b(k,\overline{k}),`$
$`R_j`$ $`=`$ $`{\displaystyle 𝑑z}d\overline{z}\left(\overline{u}\mathrm{\Phi }_{1j}^a\right)(z,\overline{z})=2\pi i,`$
where $`b(k,\overline{k})=0`$ if $`n\ne 0`$. We solve the system of equations (4.11) and (4.12) asymptotically for $`\kappa =k_j+ϵ\mathrm{\Delta }\kappa `$ and $`\mathrm{\Delta }\kappa \sim \mathrm{O}(1)`$. The leading order behaviour of the integral kernels follows from the asymptotic representation (2.24) as $`k\to k_j`$,
$$K(k,\overline{k},k^{\prime },\overline{k}^{\prime })\sim \frac{\overline{K}_{jj}}{(\overline{k}-\overline{k}_j)(\overline{k}^{\prime }-\overline{k}_j)},K_j(k,\overline{k})\sim \frac{iP_{jj}}{\overline{k}-\overline{k}_j},P_j(k,\overline{k})\sim \frac{i\overline{P}_{jj}}{\overline{k}-\overline{k}_j},$$
(4.13)
where
$$\overline{K}_{jj}=⟨𝚽_j^a|𝚽_j^{\prime }⟩_{\mathrm{\Delta }u},P_{jj}=⟨𝚽_j^a|𝚽_j⟩_{\mathrm{\Delta }u},\overline{P}_{jj}=⟨𝚽_j^{a\prime }|𝚽_j^{\prime }⟩_{\mathrm{\Delta }u}.$$
(4.14)
Here we have used the symmetry constraints (2.20) and (3.15). The leading order of $`\alpha (k,\overline{k})`$ as $`k\to k_j`$ follows from Eq. (4.11) as
$$\alpha (k,\overline{k})\sim \frac{ϵ\mathrm{\Delta }\overline{\kappa }\beta _j}{2\pi (k-\kappa )(\overline{k}-\overline{k}_j)},$$
where $`\beta _j`$ is not yet defined. We use Eq. (A.4) of Appendix A to compute the integral term,
$$\frac{dkd\overline{k}}{(k-\kappa )(\overline{k}-\overline{k}_j)^2}=\frac{2\pi i}{\overline{k}_j-\overline{\kappa }},$$
and reduce the system of integral equations (4.11) and (4.12) to an algebraic system as $`k\to k_j`$,
$`-2\pi \mathrm{\Delta }\kappa \alpha _j`$ $`=`$ $`K_{jj}\alpha _j-\overline{P}_{jj}\beta _j-2\pi iA,`$ (4.15)
$`2\pi \mathrm{\Delta }\overline{\kappa }\beta _j`$ $`=`$ $`-P_{jj}\alpha _j-\overline{K}_{jj}\beta _j.`$ (4.16)
If $`P_{jj}\ne 0`$, the determinant of the above system is strictly positive. Therefore, homogeneous solutions at $`A=0`$ (bound states) are absent for $`ϵ\ne 0`$. This result indicates that the double eigenvalue at $`k=k_j`$ disappears under a generic perturbation of the potential $`u(z,\overline{z})`$ with $`P_{jj}\ne 0`$ (see also Ref. ).
For $`A\ne 0`$, we find inhomogeneous solutions of Eqs. (4.15) and (4.16),
$$\alpha _j=\frac{2\pi iA\left(\overline{K}_{jj}+2\pi \mathrm{\Delta }\overline{\kappa }\right)}{|K_{jj}+2\pi \mathrm{\Delta }\kappa |^2+|P_{jj}|^2},\beta _j=-\frac{2\pi iAP_{jj}}{|K_{jj}+2\pi \mathrm{\Delta }\kappa |^2+|P_{jj}|^2}.$$
(4.17)
The eigenfunction $`𝝁^ϵ(z,\overline{z},\kappa ,\overline{\kappa })`$ given by Eqs. (4.8) and (4.9) satisfies the boundary condition (2.5) if $`A=ϵ^{-1}`$ and has the following asymptotic representation,
$`𝝁^ϵ(z,\overline{z},\kappa ,\overline{\kappa })=𝐞_1+{\displaystyle \frac{2\pi i\left[2\pi (\overline{\kappa }-\overline{k}_j)+ϵ\overline{K}_{jj}\right]𝚽_j(z,\overline{z})}{|2\pi (\kappa -k_j)+ϵK_{jj}|^2+|ϵP_{jj}|^2}}`$ $`-`$ $`{\displaystyle \frac{2\pi iϵP_{jj}𝚽_j^{\prime }(z,\overline{z})}{|2\pi (\kappa -k_j)+ϵK_{jj}|^2+|ϵP_{jj}|^2}}`$ (4.18)
$`+`$ $`\mathrm{\Delta }𝝁^ϵ(z,\overline{z}),`$
where the term $`\mathrm{\Delta }𝝁^ϵ(z,\overline{z})`$ is not singular in the limit $`ϵ\to 0`$ and $`\kappa \to k_j`$.
In the limit $`ϵ\to 0`$, $`\kappa \to k_j`$, we find a meromorphic expansion for $`𝝁^ϵ(z,\overline{z},\kappa ,\overline{\kappa })`$ as
$$𝝁^ϵ(z,\overline{z},\kappa ,\overline{\kappa })=𝐞_1+\frac{i𝚽_j(z,\overline{z})}{\kappa -k_j}+ϵ\left[\frac{K_{jj}𝚽_j(z,\overline{z})}{2\pi i(\kappa -k_j)^2}+\frac{P_{jj}𝚽_j^{\prime }(z,\overline{z})}{2\pi i|\kappa -k_j|^2}\right]+\mathrm{O}(ϵ^2).$$
(4.19)
It is clear that the double pole can be incorporated by shifting the eigenvalue $`k_j`$ to
$$k_j^ϵ=k_j-\frac{ϵK_{jj}}{2\pi }.$$
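Indeed, expanding the shifted pole to first order in $`ϵ`$ recovers the analytic double-pole term of (4.19):
$$\frac{i𝚽_j}{\kappa -k_j^ϵ}=\frac{i𝚽_j}{\kappa -k_j}-\frac{iϵK_{jj}𝚽_j}{2\pi (\kappa -k_j)^2}+\mathrm{O}(ϵ^2),$$
and since $`-i/2\pi =1/2\pi i`$, the second term is exactly the $`(\kappa -k_j)^{-2}`$ contribution in (4.19). The shifted eigenvalue $`k_j^ϵ`$ therefore absorbs the analytic double pole, leaving only the non-analytic $`|\kappa -k_j|^{-2}`$ term.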
The other double-pole term in the expansion (4.19) has a non-analytic behaviour in the $`k`$-plane and leads to the appearance of the spectral data $`b^ϵ(\kappa ,\overline{\kappa })=ϵ\mathrm{\Delta }b(\kappa ,\overline{\kappa })`$ which measures the departure of $`𝝁^ϵ(z,\overline{z},\kappa ,\overline{\kappa })`$ from analyticity according to Eq. (2.8). We find from Eqs. (2.9) and (4.19) that the spectral data $`\mathrm{\Delta }b(\kappa ,\overline{\kappa })`$ has the following singular behaviour as $`\kappa \to k_j`$,
$$\mathrm{\Delta }b(\kappa ,\overline{\kappa })\sim \frac{P_{jj}}{2\pi |\kappa -k_j|^2}.$$
(4.20)
Thus, if $`P_{jj}\ne 0`$ the analyticity of $`𝝁^ϵ(z,\overline{z},\kappa ,\overline{\kappa })`$ is destroyed and the lump disappears. This conclusion as well as the analytical solution (4.17) agree with the results of Gadyl’shin and Kiselev, where the transformation of a single lump into a decaying wave packet was also studied.
In the other limit $`ϵ\to 0`$ and $`\kappa \to k_j^ϵ`$ we find another expansion from Eq. (4.18),
$$𝝁^ϵ(z,\overline{z},\kappa ,\overline{\kappa })=𝐞_1-\frac{2\pi i}{ϵ\overline{P}_{jj}}𝚽_j^{\prime }(z,\overline{z})+\mathrm{O}(\kappa -k_j^ϵ).$$
(4.21)
We conclude that the eigenfunction $`𝝁^ϵ(z,\overline{z},\kappa ,\overline{\kappa })`$ is now free of pole singularities. We summarize the main result in the form of a proposition.
Proposition 4.1. Suppose $`u(z,\overline{z})`$ is given by Eq. (4.1) and $`\mathrm{\Delta }u(z,\overline{z})`$ satisfies the constraint,
$$P_{jj}=⟨𝚽_j^a|𝚽_j⟩_{\mathrm{\Delta }u}\ne 0.$$
Then, the potential $`u^ϵ=u(z,\overline{z})+ϵ\mathrm{\Delta }u(z,\overline{z})`$ does not support embedded eigenvalues of the Dirac system (2.2) for $`ϵ\ne 0`$.
### 4.2 Explicit Solution for a Particular Perturbation
Here we specify $`c_j=ce^{i\theta }`$, where $`c`$ and $`\theta `$ are real, and consider a particular perturbation $`\mathrm{\Delta }u(z,\overline{z})`$ to the lump $`u(z,\overline{z})`$ (4.1) in the form,
$$\mathrm{\Delta }u(z,\overline{z})=Q(z,\overline{z})e^{i(k_jz+\overline{k}_j\overline{z}+\theta )},$$
where $`Q(z,\overline{z})`$ is a real function. Using Eqs. (4.14), (4.4), and (4.7), we find explicitly the matrix elements $`K_{jj}`$ and $`P_{jj}`$,
$$K_{jj}=𝑑zd\overline{z}\frac{c(\overline{z}+\overline{z}_j)[Q(z,\overline{z})-\overline{Q}(z,\overline{z})]}{[|z+z_j|^2+c^2]^2}=0,$$
$$P_{jj}=𝑑zd\overline{z}\frac{|z+z_j|^2\overline{Q}(z,\overline{z})+c^2Q(z,\overline{z})}{[|z+z_j|^2+c^2]^2}=\frac{1}{2c}𝑑zd\overline{z}\left(u\mathrm{\Delta }\overline{u}+\overline{u}\mathrm{\Delta }u\right).$$
The element $`P_{jj}`$ can be seen as a correction to the field energy,
$$N=\frac{i}{2}𝑑zd\overline{z}|u^ϵ|^2(z,\overline{z})=N_0+iϵcP_{jj}+\mathrm{O}(ϵ^2),$$
where $`N_0=\pi `$ is the energy of the single lump solution (independent of the lump parameters $`k_j`$ and $`c_j`$). Thus, a perturbation which leads to the destruction of a single lump, that is with $`P_{jj}\ne 0`$, necessarily changes the value of the lump energy $`N_0`$.
## 5 Concluding Remarks
The main result of our paper is the prediction of structural instability of multi-lump potentials in the Dirac system associated to the DSII equation. The multi-lump potentials correspond to eigenvalues embedded into a two-dimensional continuous spectrum with the spectral data $`b(k,\overline{k})`$ satisfying the additional constraint (2.21). In this case, there is no interaction between lumps and continuous radiation. However, a generic initial perturbation induces coupling between the lumps and radiation and, as a result of their interaction, the embedded eigenvalues disappear. This result indicates that the localized multi-lump solutions decay into continuous wave packets in the nonlinear dynamics of the DSII equation (see also Refs. ).
This scenario is different from the two types of bifurcations of embedded eigenvalues discussed in our previous paper . The type I bifurcation arises from the edge of the essential spectrum when the limiting bounded (non-localized) eigenfunction is transformed into a localized bound state. The type II bifurcation occurs when an embedded eigenvalue splits off the essential spectrum. Both situations persist in the spectral plane when the essential spectrum is one-dimensional and covers either a half-axis or the whole axis. However, in the case of the DSII equation, the essential spectrum is the whole spectral plane and embedded eigenvalues cannot split off the essential spectrum. As a result, they disappear due to their structural instability.
## Acknowledgements
We benefited from stimulating discussions with M. Ablowitz, A. Fokas, D. Kaup, O. Kiselev, and A. Yurov. D.P. acknowledges support from a NATO fellowship provided by NSERC and C.S. acknowledges support from NSERC Operating grant OGP0046179.
## Appendix A. Formulas of the $`\overline{\partial }`$-analysis
Here we reproduce some formulas of the complex $`\overline{\partial }`$-analysis to compute the integrals $`I_0`$, $`I_1(z)`$ and $`I_2(z)`$ defined in Eqs. (3.23), (3.24), and (3.29). We define the complex integration in the $`z`$-plane by
$$\int dz\,d\overline{z}\,f(z,\overline{z})=-\int d\overline{z}\,dz\,f(z,\overline{z}),$$
where $`dzd\overline{z}=2idxdy`$. The complex $`\delta (z)`$ distribution is defined by
$$\int dz\,d\overline{z}\,f(z)\delta (z-z_0)=2if(z_0),$$
$`(A.1)`$
where $`\delta (z)=\delta (x)\delta (y)`$. In particular, the $`\delta `$-distribution appears in the $`\overline{\partial }`$-analysis according to the relation ,
$$\frac{\partial }{\partial \overline{z}}\left[\frac{1}{z-z_0}\right]=\pi \delta (z-z_0).$$
$`(A.2)`$
Computing the integral $`I_0`$, we get the formula,
$$I_0=\int dz\,d\overline{z}\,e^{ikz+i\overline{k}\overline{z}}=2i\int dx\,dy\,e^{2i\mathrm{Re}(k)x-2i\mathrm{Im}(k)y}=2\pi ^2i\delta (k),$$
which proves the identity (3.23).
Using Green’s theorem , one has the integration identity,
$$\int _Ddz\,d\overline{z}\left(\frac{\partial f_1}{\partial z}-\frac{\partial f_2}{\partial \overline{z}}\right)=\oint _C\left(f_1d\overline{z}+f_2dz\right),$$
$`(A.3)`$
where $`D`$ is a domain of the complex plane and $`C`$ its boundary. The generalized Cauchy’s formula has the form ,
$$f(z,\overline{z})=\frac{1}{2\pi i}\oint _C\frac{f(z^{},\overline{z}^{})dz^{}}{z^{}-z}+\frac{1}{2\pi i}\int _D\frac{dz^{}d\overline{z}^{}}{z^{}-z}\frac{\partial f}{\partial \overline{z}^{}},$$
$`(A.4)`$
or, equivalently,
$$f(z,\overline{z})=-\frac{1}{2\pi i}\oint _C\frac{f(z^{},\overline{z}^{})d\overline{z}^{}}{\overline{z}^{}-\overline{z}}+\frac{1}{2\pi i}\int _D\frac{dz^{}d\overline{z}^{}}{\overline{z}^{}-\overline{z}}\frac{\partial f}{\partial z^{}}.$$
$`(A.5)`$
In order to find the integral $`I_1(z)`$ we use Eq. (A.5) with
$$f(z,\overline{z})=\frac{1}{ik}e^{i(kz+\overline{k}\overline{z})},\qquad k\ne 0$$
and choose the domain $`D`$ to be a large ball of radius $`R`$ (see Eq. (2.14)). The boundary value integral vanishes since
$$\underset{R\to \mathrm{\infty }}{lim}\oint _{|z|=R}\frac{d\overline{z}}{\overline{z}}e^{i(kz+\overline{k}\overline{z})}=2\pi i\underset{R\to \mathrm{\infty }}{lim}J_0(2|k|R)=0,$$
$`(A.6)`$
where $`J_0(z)`$ is the Bessel function. Equation (A.5) for the function $`f(z,\overline{z})`$ then reduces to Eq. (3.24).
In order to compute the integral $`I_2(z_0)`$, we apply Eq. (A.3) with $`f_1=0`$ and
$$f_2(z,\overline{z})=\frac{1}{\overline{z}-\overline{z}_0}e^{i(kz+\overline{k}\overline{z})}.$$
The domain $`D`$ is chosen as above. The boundary value integral vanishes again,
$$\underset{R\to \mathrm{\infty }}{lim}\oint _{|z|=R}\frac{dz}{\overline{z}}e^{i(kz+\overline{k}\overline{z})}=2\pi i\underset{R\to \mathrm{\infty }}{lim}J_2(2|k|R)=0,$$
$`(A.7)`$
where $`J_2(z)`$ is the Bessel function. Equation (A.3) for the function $`f_2(z,\overline{z})`$ reduces to Eq. (3.29).
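The decay of the boundary terms (A.6) and (A.7) is easy to check numerically. The sketch below is our illustration, not part of the original derivation; the value of $`k`$ is an arbitrary assumption. Parametrizing the circle $`|z|=R`$ as $`z=Re^{it}`$ gives $`d\overline{z}/\overline{z}=-i\,dt`$, $`dz/\overline{z}=ie^{2it}dt`$ and $`kz+\overline{k}\overline{z}=2|k|R\mathrm{cos}(t+\mathrm{arg}\,k)`$, so both integrals reduce to Bessel functions of $`2|k|R`$.

```python
# Numerical sanity check of (A.6) and (A.7): a sketch, not part of the paper.
# Both boundary integrals reduce to Bessel functions of 2|k|R, which decay
# like 1/sqrt(R); we compare magnitudes only, since overall phases depend on
# the orientation convention.
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import jv

k = 0.7 + 0.4j                       # arbitrary nonzero spectral parameter
t = np.linspace(0.0, 2.0 * np.pi, 200001)

for R in (5.0, 50.0, 500.0):
    x = 2.0 * abs(k) * R
    phase = np.exp(1j * x * np.cos(t + np.angle(k)))
    I6 = trapezoid(-1j * phase, t)                      # integral in (A.6)
    I7 = trapezoid(1j * np.exp(2j * t) * phase, t)      # integral in (A.7)
    print(f"R={R:5.0f}  |I6|={abs(I6):.5f} vs 2pi|J0|={2*np.pi*abs(jv(0, x)):.5f}"
          f"   |I7|={abs(I7):.5f} vs 2pi|J2|={2*np.pi*abs(jv(2, x)):.5f}")
```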
# Setting Parameters by Example
## 1 Introduction
Many cars now come equipped with route planning software that suggests a path from the current location to a desired destination. Similar services are also available on the internet (e.g., from http://maps.yahoo.com/). But although these routes may be found by computing shortest paths in a graph representing the local road system, the “distance” may be a weighted sum of several values other than actual mileage: expected travel time, scenic value, number of turns, tolls, etc. Different drivers may have different preferences among these values, and may not be able to clearly articulate these preferences. Can we automatically infer the appropriate weights to use in the sum by observing the routes actually chosen by a driver?
More abstractly, we define an inverse parametric optimization problem as follows: we are given as input both a parametric optimization problem (that is, a combinatorial optimization problem such as shortest paths, but with the element weights being linear combinations of certain parameters rather than fixed numbers), and also a desired optimal solution for the problem.<sup>1</sup><sup>1</sup>1One could more generally allow as input a set of problem-solution pairs, but for most of the problems we consider any such set can be represented equally well by a single larger problem. Our task is to determine parameter values such that the given solution is optimal for those values.
Along with the path planning problem described above, one can find many other applications in which one must tune the parameters to an optimization problem:
* In many online services such as web page hosting, data is sent in a star topology from a central server to each user. But in multicast routing of video and other high-bandwidth information, network resources are conserved by sending the data along the edges of a tree, in which some users receive copies of the data from other users rather than from the central server. Natural measures of the quality of each edge in this routing tree include the edge’s bandwidth, congestion, delay, packet loss, and possibly monetary charges for use of that link. Since one can find minimum spanning trees efficiently in the distributed setting , it is natural to try to model this routing problem using minimum spanning trees. Given one or more networks with these parameters, and examples of desired routing trees, how can we set the weights of each quality measure so that the desired trees are the minimum spanning trees of their networks?
* Bipartite matching, or the assignment problem, is a common formalism for grouping indivisible resources with resource consumers. For instance, the first example given for matching by Ahuja et al. is to assign recently hired workers to jobs, using weights based on such values as aptitude test scores and college grades. One might set the weight of an edge from worker $`i`$ to job $`j`$ to be $`a_i\cdot p_j`$, where $`a_i`$ is the (known) set of aptitudes of the worker, and $`p_j`$ is the (unknown) set of parameters describing the combination of aptitudes best fitting the job. Again, it is natural to ask for a way to automatically set the parameters of each job, based on experience assigning previously hired workers to those jobs.
* Many board games, such as chess, checkers, or Othello, can be played well by programs based on relatively simple alpha-beta searching algorithms. However, these programs use relatively complex evaluation functions in which the evaluation of a given position can be the sum of hundreds or thousands of terms. Some of these terms may represent the gross material balance of a game (e.g., in chess, one usually normalizes the score so that a pawn is worth 1 point, while a knight may be worth 2.5-3 points) while others represent more subtle features of piece placement, king safety, advanced pawns, etc. The weight of each of these terms may be individually adjusted in order to improve the quality of play. Although there have been some preliminary experiments in using evolutionary learning techniques to tune these weights , they are currently usually set by hand. The true test of a game program is in actual play, but programs are also often tuned by using test suites, large collections of positions for which the correct move is known. If we are given a test suite, can we automatically set evaluation weights in such a way that a shallow alpha-beta search can find each correct move?
### 1.1 New Results
We show the following theoretical results:
* For the inverse parametric minimum spanning tree problem, in the case that the number of parameters is a fixed constant, we provide a randomized algorithm with linear expected running time, and a deterministic algorithm with worst case running time $`O(mlog^\mathrm{2}n)`$.
* For the minimum spanning tree, shortest path, matching, and other “optimal subgraph” problems for which the optimization problem can be solved in polynomial time, we show that the inverse optimization problem can also be solved in polynomial time by means of the ellipsoid method from linear programming, even when the number of parameters is large.
In addition, although we do not provide theoretical results for this case, we discuss the game tree search problem and describe how to fit it into the same inverse parametric optimization framework.
In cases where the initial problem is infeasible (there is no parameter setting leading to the desired optimum), our techniques provide a witness for infeasibility: a small number of alternative solutions, one of which must be better than the given solution for any parameter setting. One can then examine these solutions to determine whether the initial solution is suboptimal or whether additional parameters should be added to better model the users’ utility functions.
### 1.2 Relation to Previous Work
Although there has been considerable work on parametric versions of optimization problems such as minimum spanning trees and shortest paths , we are not aware of any prior work in inverting such problems to produce parameter values that match given solutions. One could compute the set of solutions available over the range of parameter values, and compare these solutions to the given one, but the number of different solutions would typically grow exponentially with the number of parameters.
The inverse parametric optimization problems considered here are most closely related to parametric search, which describes a general class of problems in which one sets the parameters of a parametric problem in order to optimize some criterion. However in most applications of parametric search, the criterion being optimized is a numeric function of the solution (e.g. the ratio between two linear weights) rather than the solution structure itself. Megiddo describes a very general technique for solving parametric search problems, in which one simulates the steps of an optimization algorithm, at each conditional step using the algorithm itself as an oracle to determine which conditional branch to take. However this technique does not seem to apply to our problems, because the given optimal structure (e.g. a single shortest path) does not give enough information to deduce the conditional branches followed by a shortest path algorithm.
The vehicle routing problem discussed in the introduction was introduced by Rogers and Langley . However, they used a weaker model of optimization (a hill-climbing procedure) and a stronger model of user interaction requiring the user to specify preferences in a sequence of choices between pairs of routes.
## 2 Minimum Spanning Trees
In this section, we consider the inverse parametric minimum spanning tree problem, in which we are given a fixed tree $`T`$ in a network in which the weight of each edge $`e`$ is a linear function $`w(e)=𝐜_e\cdot 𝐩`$ (where $`𝐩`$ represents the unknown vector of parameter settings and $`𝐜_e`$ represents the known value of edge $`e`$ according to each parameter). Our task is to find a value of $`𝐩`$ such that $`T`$ is the unique minimum spanning tree for the weights $`w(e)`$.
If we fix a given spanning tree $`T`$ in a network, a pair of edges $`(e,f)`$ is defined to be a swap if $`T\cup \{f\}\setminus \{e\}`$ is also a spanning tree; that is, if $`e`$ is an edge in $`T`$, $`f`$ is not an edge in $`T`$, and $`e`$ belongs to the cycle induced in $`T`$ by $`f`$. $`T`$ is the unique minimum spanning tree if and only if for every swap $`(e,f)`$, the weight of $`f`$ is greater than the weight of $`e`$.
Thus we can solve the inverse parametric minimum spanning tree problem as a linear program, in which we have one variable per parameter, and one constraint $`(𝐜_f-𝐜_e)\cdot 𝐩>0`$ per swap. If the number of variables is a fixed constant, a linear program may be solved in time linear in the number of constraints ; however here the number of constraints may be $`\mathrm{\Theta }(mn)`$.
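Before describing the faster algorithms, it may help to see this naive linear program spelled out. The sketch below is our illustration (the paper contains no code); the networkx graph interface and the edge attribute name `c` are assumptions, and the strict inequalities are closed by maximizing a separation margin $`\delta `$, anticipating the device used in Section 3.

```python
# Naive Theta(mn)-constraint LP for inverse parametric MST (a hedged sketch).
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def inverse_mst_parameters(G, T, d):
    """G: nx.Graph whose edges carry a length-d cost vector in attribute 'c';
    T: spanning tree of G (nx.Graph) that should become the unique MST.
    Returns a parameter vector p with -1 <= p_i <= 1, or None if infeasible."""
    A_ub, b_ub = [], []
    for f in G.edges:
        if T.has_edge(*f):
            continue
        u, v = f
        cf = np.asarray(G.edges[f]['c'], dtype=float)
        path = nx.shortest_path(T, u, v)          # cycle that f induces in T
        for e in zip(path, path[1:]):             # every swap (e, f)
            ce = np.asarray(G.edges[e]['c'], dtype=float)
            # want (c_f - c_e) . p > 0; close it as (c_e - c_f) . p + delta <= 0
            A_ub.append(np.append(ce - cf, 1.0))
            b_ub.append(0.0)
    # variables (p_1, ..., p_d, delta); maximize delta subject to all swaps
    res = linprog(c=[0.0] * d + [-1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-1.0, 1.0)] * d + [(None, None)])
    return res.x[:d] if res.success and res.x[-1] > 0.0 else None
```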
We show how to improve this by a randomized algorithm which takes linear time and a deterministic algorithm which takes time $`O(mlog^\mathrm{2}n)`$. Both algorithms are based on (different) random sampling schemes for low dimensional linear programming, due to Clarkson .
### 2.1 Randomized Spanning Tree Algorithm
Clarkson showed that, if one randomly samples $`k`$ constraints from a $`d`$-dimensional linear program with $`n`$ constraints, and computes the optimum for the subprogram consisting only of the sampled constraints, then the expected number of the remaining constraints violated by this optimum is at most $`d(n-k)/(k+1)`$. Further, if any constraint is violated, at least one of the $`d`$ constraints involved in any base (minimal subset of constraints having the same solution as the overall problem) belongs to the set of violated constraints. If no constraint is violated, the problem is solved.
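Clarkson’s bound is easy to check empirically in one dimension, where for the program “minimize $`x`$ subject to $`x\ge a_i`$” the optimum of a sample is its maximum, and a remaining constraint is violated exactly when it exceeds that maximum. The small simulation below is ours; the values of $`n`$ and $`k`$ are arbitrary.

```python
# Empirical check of the sampling bound for d = 1 (a sketch, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 10000, 100, 2000            # arbitrary sizes
violated = 0
for _ in range(trials):
    a = rng.random(n)                      # constraints x >= a_i
    opt = a[rng.choice(n, size=k, replace=False)].max()   # optimum of the sample
    violated += (a > opt).sum()            # sampled entries are never > opt
print("measured:", violated / trials, "  bound d(n-k)/(k+1):", (n - k) / (k + 1))
```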
This suggests the following randomized algorithm for the inverse parametric minimum spanning tree problem, where $`d=O(\mathrm{1})`$ is a fixed constant. We define a potential swap for the given tree $`T`$ to be a pair $`(e,f)`$ where $`e`$ belongs to $`T`$ and $`f`$ does not, regardless of whether $`(e,f)`$ is actually a swap. For technical reasons, we need to define a unique optimal parameter setting $`𝐩`$ for any subset of constraints, which we achieve by introducing an arbitrary linear objective function.
1. Let set $`S`$ be initialized to empty.
2. Repeat $`d`$ times:
1. Let set $`R`$ be a random sample of $`d\sqrt{mn}`$ potential swaps.
2. Find the optimal parameter setting $`𝐩`$ for constraints from $`R\cup S`$.
3. Add the constraints violated by $`𝐩`$ to $`S`$.
3. Find the optimal parameter setting $`𝐩`$ for constraints from $`S`$.
Each iteration increases the size of the intersection of $`S`$ with the optimal base, so the loop terminates with a correct solution. The expected number of edges added to $`S`$ in each iteration is $`O(\sqrt{mn})`$, so the expected size of $`S`$ is $`O(d\sqrt{mn})=O(m)`$. If $`d=O(\mathrm{1})`$, the step in which we find $`𝐩`$ can be performed in time $`O(d\sqrt{mn})=O(m)`$ by fixed dimensional linear programming techniques. It remains to determine how we tell whether a potential swap $`(e,f)`$ is really a swap (so we can determine whether to use it as a constraint or ignore it in step (b)), and how to find the set of violated constraints (step (c)).
To test a potential swap, we simply build a least common ancestor data structure on the given tree $`T`$ (with an arbitrary choice of root). The pair $`(e,f)`$ is a swap if both endpoints of $`e`$ are on the path from one of the endpoints of $`f`$ to the common ancestor of the two endpoints.
To find the violated constraints for $`𝐩`$, we also use least common ancestors, on an auxiliary tree in which internal nodes represent edges and leaves represent vertices of $`T`$ (Figure 1). We build this auxiliary tree by choosing the root to be the maximum weight edge $`e`$ (according to $`𝐩`$) in $`T`$, with the two children of the root being auxiliary trees constructed recursively on the two components of $`T\setminus \{e\}`$. This construction takes time $`O(nlogn)`$. The least common ancestor of two leaves in this auxiliary tree represents the maximum weight edge on the path between the corresponding vertices of $`T`$. Therefore, if $`f`$ is a given non-tree edge, we can find a swap $`(e,f)`$ giving a violated constraint (if one exists) by using this auxiliary tree to find the maximum weight edge on the path between $`f`$’s endpoints. If this gives us a violated swap, we continue recursively on the subpaths between $`f`$ and the endpoints of $`e`$, until all swaps involving $`f`$ have been listed. Each swap is found in $`O(\mathrm{1})`$ time, and the expected number of swaps corresponding to violated constraints is $`O(\sqrt{mn})`$, so the total expected time for this procedure (including the time to construct the auxiliary tree) is $`O(m+nlogn)`$.
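A direct, if unoptimized, construction of this auxiliary tree is easy to sketch. The recursion below is our illustration using networkx (the weight lookup keyed by frozenset is an assumed convention); pairing the resulting nested tuples with any off-the-shelf least-common-ancestor structure then answers the maximum-edge-on-path queries.

```python
# Sketch of the auxiliary tree of Figure 1 (ours, not the paper's code).
# This naive recursion is O(n^2) worst case; the paper's construction is
# O(n log n). The LCA of two leaves is the maximum-weight edge on the tree
# path between the corresponding vertices.
import networkx as nx

def build_aux_tree(T, weight):
    """T: an nx.Graph (sub)tree; weight: dict frozenset({u, v}) -> w(e) at p.
    Returns a leaf vertex or a nested tuple (edge, left_subtree, right_subtree)."""
    if T.number_of_edges() == 0:
        return next(iter(T.nodes))
    e = max(T.edges, key=lambda uv: weight[frozenset(uv)])  # heaviest edge is root
    rest = nx.Graph(T)
    rest.remove_edge(*e)                  # splits the tree into two components
    left, right = (rest.subgraph(c) for c in nx.connected_components(rest))
    return (e, build_aux_tree(left, weight), build_aux_tree(right, weight))
```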
###### Lemma 1
We can solve the inverse parametric minimum spanning tree problem, for any constant number of parameters, in randomized expected time $`O(m+nlogn)`$.
In order to remove the unnecessary logarithmic factor from this bound, we resort to another round of sampling. However this time we sample tree edges rather than swaps.
###### Lemma 2
Let $`S`$ be a randomly chosen sample of $`k`$ edges from tree $`T`$, let graph $`G^{}`$ and tree $`T^{}`$ be formed from $`G`$ and $`T`$ respectively by contracting the edges in $`T\setminus S`$, and let $`𝐩`$ be the optimal parameter setting for the inverse parametric minimum spanning tree problem defined by $`G^{}`$ and $`T^{}`$. Then the expected number of the remaining edges of $`T`$ that take part in a constraint violated by this optimum is at most $`d(n-k-1)/(k+1)`$.
Proof: Consider selecting $`S`$ in the following way: choose a random permutation on the edges of $`T`$, and let $`S`$ be the first $`k`$ edges in the permutation. Let $`e`$ be the $`(k+1)`$st edge in the permutation. Then since $`e`$ is equally likely to be any remaining edge, the expected number of edges that take part in a violated constraint is just $`n-k-1`$ times the probability that $`e`$ takes part in a violated constraint. But this can only happen if $`e`$ is one of the at most $`d`$ edges involved in the optimal base for $`S\cup \{e\}`$. Since this subset is just the first $`k+1`$ edges in the permutation, and any permutation of this subset is equally likely, this probability is at most $`d/(k+1)`$.
Thus we can apply the following algorithm:
1. Let set $`S`$ be initialized to empty.
2. Repeat $`d`$ times:
1. Let set $`R`$ be a random sample of $`d\sqrt{n}`$ edges of $`T`$.
2. Contract the edges in $`T\setminus (R\cup S)`$ to produce $`T^{}`$ and $`G^{}`$.
3. Find the optimal parameter setting $`𝐩`$ for $`T^{}`$ and $`G^{}`$ using the algorithm of Lemma 1.
4. Add to $`S`$ the tree edges that take part in a constraint violated by $`𝐩`$.
3. Contract the edges in $`T\setminus S`$ to produce $`T^{}`$ and $`G^{}`$.
4. Find the optimal parameter setting $`𝐩`$ for $`T^{}`$ and $`G^{}`$ using the algorithm of Lemma 1.
The arguments for termination and correctness are the same as before. It remains to explain how we find the set of violated tree edges. This can be done in time $`O(m\alpha (m,n))`$ by an algorithm of Tarjan , but using this algorithm directly would lead to a nonlinear overall time bound. More recent minimum spanning tree verification algorithms can be used to find the violated non-tree edges, but not the tree edges. However, in our case we can perform this verification task efficiently due to the expected small number of differences between $`T`$ and the minimum spanning tree for $`𝐩`$.
###### Lemma 3
In the algorithm above, the tree edges that take part in a violated constraint can be found in expected linear time.
Proof: We use the linear time randomized minimum spanning tree algorithm of Karger et al. , and let $`X`$ denote the set of edges that are in the MST and not in $`T`$. Note that $`X`$ has exactly as many edges as are in $`T`$ and not in the MST; since each edge in the latter set takes part in a violated swap constraint, the expectation of $`|X|`$ is $`O(\sqrt{n})`$ by Lemma 2. Then it is easy to see that, if tree edge $`e`$ takes part in any violated constraints, at least one must be the constraint corresponding to swap $`(e,f)`$, where $`f`$ is the minimum weight edge in $`X`$ forming a swap with $`e`$.
To find this minimum weight swap for each tree edge, we contract $`T`$ as follows. While $`T`$ has a degree-one vertex that is not adjacent to any edge in $`X`$, we remove it and its incident edge; that edge cannot take part in any swaps with $`X`$. While $`T`$ has a degree-two vertex that is not adjacent to any edge in $`X`$, we remove it and merge its two incident edges into a single edge; these two edges share the same minimum swap edge.
After this contraction process, the contracted tree $`T^{}`$ has $`O(|X|)`$ vertices with degree less than three, and therefore $`O(|X|)`$ total vertices. We apply Tarjan’s nonlinear minimum spanning tree verification algorithm to this contracted tree to find the best swap in $`X`$ for each contracted tree edge. We then undo the contraction process and propagate the best swap information to the original tree edges. Finally, once we have computed the best swap $`(e,f)`$ for each tree edge $`e`$, we simply compute $`w(e)`$ and $`w(f)`$ and compare the two weights to determine whether this swap leads to a violated constraint.
###### Theorem 1
We can solve the inverse parametric minimum spanning tree problem, for any constant number of parameters, in randomized linear expected time.
Proof: The problem is solved by the algorithm above. In each iteration the expected size of the set added to $`S`$ is $`O(\sqrt{n})`$, so the total size of $`RS`$ is $`O(d\sqrt{n})=O(\sqrt{n})`$. In each iteration we add one more member of the optimal base to $`S`$, so the algorithm terminates with the correct solution. The steps in which we find the optimal parameter setting for $`T^{}`$ and $`G^{}`$ can be performed by applying Lemma 1; since $`T^{}`$ has $`O(\sqrt{n})`$ edges, the time for these steps is $`O(m+\sqrt{n}logn)=O(m)`$. The step in which we find the edges that take part in a violated constraint can be performed in linear expected time by Lemma 3.
### 2.2 Deterministic Spanning Tree Algorithm
To solve the inverse parametric minimum spanning tree problem deterministically, we derandomize a different sampling technique also based on a method of Clarkson . However, as in our randomized algorithm, we modify this technique somewhat by sampling edges instead of constraints.
We begin by applying the multi-level restricted partition technique of Frederickson to the given tree $`T`$.
By introducing dummy edges, we can assume without loss of generality that $`T`$ is binary and that the root $`t`$ of $`T`$ has indegree one. These dummy edges will only be used to form the partition and will not take part in the eventual optimization procedure.
###### Definition 1
A restricted partition of order $`z`$ with respect to a rooted binary tree $`T`$ is a partition of the vertices of $`V`$ such that:
1. Each set in the partition contains at most $`z`$ vertices.
2. Each set in the partition induces a connected subtree of $`T`$.
3. For each set $`S`$ in the partition, if $`S`$ contains more than one vertex, then there are at most two tree edges having one endpoint in $`S`$.
4. No two sets can be combined and still satisfy the other conditions.
Such a partition (for $`z=\mathrm{2}`$) is depicted in Figure 2(a). In general such a partition can easily be found in linear time by merging sets until we get stuck. Alternatively, by working bottom up we can find an optimal partition in linear time. We will defer until later choosing a value for $`z`$; for now we leave it as a free parameter.
###### Lemma 4 (Frederickson )
Any order-$`z`$ partition of a binary tree $`T`$ has $`O(n/z)`$ sets in the partition. For $`z=\mathrm{2}`$ we can find a partition with at most $`\mathrm{5}n/\mathrm{6}`$ sets.
Contracting each set in a restricted partition gives again a binary tree. We form a multi-level partition by recursively partitioning this contracted binary tree (Figure 2(b)).
We now use these partitions to construct a set $`\mathrm{\Pi }`$ of paths in $`T`$. We include in $`\mathrm{\Pi }`$ the path in $`T`$ between any two vertices that are in the same set at some level of the partition. Note that, although the vertices at higher levels of the partition correspond to contracted subtrees of $`T`$, the path in $`T`$ between two such subtrees can still be unambiguously defined.
###### Lemma 5
The set of paths defined above has the following properties:
* There are $`O(nz)`$ paths.
* Each edge in $`T`$ belongs to $`O(z^\mathrm{2}log_zn)`$ paths.
* Any path in $`T`$ can be decomposed into the disjoint union of $`O(log_zn)`$ paths.
Proof: The first property follows immediately from Lemma 4, since each set of the partition contributes $`O(z^\mathrm{2})`$ paths, there are $`O(n/z)`$ sets at the bottom level of the partition, and the number of sets decreases at least geometrically at each level. Similarly, the second property follows, since an edge can belong to $`O(z^\mathrm{2})`$ paths per level and there are $`O(log_zn)`$ levels.
Finally, to prove the third property, let $`p`$ be an arbitrary path in $`T`$. We describe a procedure for decomposing $`p`$ into few paths $`\pi _i\in \mathrm{\Pi }`$. More generally, suppose we have a path $`p`$ contained in a set $`S`$ at some level of a multi-level decomposition (note that the whole tree is the set at the highest level of the partition). Then $`S`$ can be decomposed into at most $`z`$ sets at the next level of the partition; $`p`$ has endpoints in at most two of these sets, and may pass completely through some other sets. Therefore, $`p`$ can be decomposed into the union of two smaller paths in the sets containing its endpoints, together with a single path $`\pi _i`$ connecting those two sets. By repeating this decomposition recursively at each level of the tree, we obtain a decomposition into at most two paths per level, or $`O(log_zn)`$ paths overall.
We now describe how to use this path decomposition in our inverse optimization problem. For each path $`\pi _i\in \mathrm{\Pi }`$, let $`A_i`$ denote the set of edges in $`T`$ belonging to $`\pi _i`$, and let $`B_i`$ denote the set of edges in $`G\setminus T`$ such that $`\pi _i`$ is part of the decomposition of the tree path between each edge’s endpoints. The total size of all the sets $`A_i`$ and $`B_i`$ is $`O((m+nz^\mathrm{2})log_zn)`$, and all sets can be constructed in time linear in their total size.
A pair $`(e,f)`$ is a swap if and only if there is some $`i`$ for which $`e\in A_i`$ and $`f\in B_i`$. With this decomposition, the inverse parametric minimum spanning tree problem becomes equivalent to asking for a parameter $`𝐩`$ such that, for each $`i`$, the weight of every member of $`A_i`$ is less than the weight of every member of $`B_i`$.
For a single value of $`i`$, one could solve such a problem by a $`(d+1)`$-dimensional linear program in which we augment the parameters by an additional variable that is constrained to be greater than the weight of each $`e\in A_i`$ and less than the weight of each $`f\in B_i`$; however, adding a separate variable for each $`i`$ would make the dimension nonconstant.
Instead, we use a standard derandomization technique from computational geometry, $`ϵ`$-nets. If we graph the weight of each edge in a $`(d+\mathrm{1})`$-dimensional space, where the parameter values are independent variables and the weight is the dependent variable, the result is a hyperplane. For any set $`S`$ of these hyperplanes, and any $`ϵ>\mathrm{0}`$, define an $`ϵ`$-net for vertical line segments to be a subset $`S^{}`$ such that, if any vertical line segment intersects at least $`ϵ|S|`$ hyperplanes in $`S`$, the same segment must intersect at least one hyperplane in $`S^{}`$ (Figure 3). More generally, if the members of $`S`$ are given costs, an $`ϵ`$-net must contain at least one member of any subset that is formed by intersecting the hyperplanes with a vertical segment and that has total cost at least $`ϵ`$ times the total cost of $`S`$. If $`\mathrm{1}/ϵ=O(\mathrm{1})`$, an $`ϵ`$-net of size $`O(\mathrm{1})`$ can be found in time linear in $`|S|`$ .
Our algorithm can then be described as follows. We will use $`ϵ=\mathrm{1}/\mathrm{3}d`$.
1. Use a recursive partition to find the sets $`A_i`$ and $`B_i`$.
2. Assign unit cost to each edge in the graph.
3. Repeat until terminated:
1. Construct $`ϵ`$-nets $`A_i^{}`$ and $`B_i^{}`$ for each $`A_i`$ and $`B_i`$.
2. Let $`S`$ be the set of swaps involving only $`ϵ`$-net members. Find the optimal parameter setting $`𝐩`$ for constraints from $`S`$.
3. Find the maximum weight $`a_i`$ of an edge in each $`A_i`$ and the minimum weight $`b_i`$ of an edge in each $`B_i`$, where weights are measured according to $`𝐩`$. If $`a_i<b_i`$ for each $`i`$, terminate the algorithm.
4. Find the maximum weight $`a_i^{}`$ of an edge in each $`A_i^{}`$ and the minimum weight $`b_i^{}`$ of an edge in each $`B_i^{}`$. Double the cost of each edge in $`A_i`$ with $`w(e)>a_i^{}`$, and each edge in $`B_i`$ with $`w(e)<b_i^{}`$.
The set of edges in $`A_i`$ for which the costs are doubled is defined by the intersection of $`A_i`$ with a vertical line segment: the segment with parameter coordinates $`𝐩`$ and with weight coordinate between $`a_i^{}`$ and $`\mathrm{\infty }`$. It does not contain any member of $`A_i^{}`$, so it must have total cost at most $`ϵ`$ times the cost of $`A_i`$. Therefore each iteration increases the total cost of all the sets $`A_i`$ (and similarly $`B_i`$) by a factor of at most $`\mathrm{1}+ϵ=\mathrm{1}+\mathrm{1}/\mathrm{3}d`$.
If there is any constraint violated by the solution $`𝐩`$, then at least one violated constraint must be a member of the $`d`$-swap base defining the optimal overall solution. Note however that, in any iteration of the loop, $`a_i^{}<b_i^{}`$ because of how we computed $`𝐩`$, so any violated constraint coming from a swap $`(e,f)`$ must have $`w(e)>a_i^{}`$ or $`w(f)<b_i^{}`$. Therefore at least one of the $`\mathrm{2}d`$ edges involved in the optimal base must have its cost doubled, and the cost of the optimal base increases by a factor of at least $`\mathrm{1}+\mathrm{1}/\mathrm{2}d`$.
Since the base’s cost increases at a rate faster than the total cost, it can only continue to do so for $`O(dlogn)`$ iterations before it overtakes the total cost, an impossibility. So at some point within those $`O(dlogn)`$ iterations the algorithm must terminate the loop.
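In outline, the loop just analysed might be coded as follows. This is our structural sketch: `epsilon_net` and `solve_swap_lp` are hypothetical helpers standing in for the linear-time weighted ε-net construction and the fixed-dimensional linear programming solver, and the data layout is an assumption.

```python
# Structural sketch of the deterministic algorithm (ours, not the paper's code).
# epsilon_net and solve_swap_lp are hypothetical stand-ins, not a real API.
def deterministic_inverse_mst(A, B, w, d):
    """A, B: lists of the edge sets A_i, B_i; w(e, p) evaluates the parametric
    weight of edge e at parameter vector p; d is the number of parameters."""
    eps = 1.0 / (3 * d)
    cost = {e: 1.0 for S in A + B for e in S}          # unit costs initially
    while True:
        Anet = [epsilon_net(S, cost, eps) for S in A]  # step (a), assumed helper
        Bnet = [epsilon_net(S, cost, eps) for S in B]
        p = solve_swap_lp(Anet, Bnet, d)               # step (b), assumed helper
        if all(max(w(e, p) for e in A[i]) < min(w(f, p) for f in B[i])
               for i in range(len(A))):                # step (c)
            return p               # T is now the unique minimum spanning tree
        for i in range(len(A)):                        # step (d): double costs
            ai = max(w(e, p) for e in Anet[i])
            bi = min(w(f, p) for f in Bnet[i])
            for e in A[i]:
                if w(e, p) > ai:
                    cost[e] *= 2.0
            for f in B[i]:
                if w(f, p) < bi:
                    cost[f] *= 2.0
```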
###### Theorem 2
We can solve the inverse parametric minimum spanning tree problem, for any constant number of parameters, in worst case time $`O(mlognlog_{m/n}n)`$.
Proof: We use the algorithm described above, setting $`z=max(\mathrm{2},\sqrt{m/n})`$. Therefore, the total size of the sets $`A_i`$ and $`B_i`$ (and the total time to find these sets and to perform each iteration) is $`O(mlog_{m/n}n)`$. Since $`d`$ is constant, there are $`O(logn)`$ iterations, and the total time is $`O(mlognlog_{m/n}n)`$.
## 3 Other Optimal Subgraph Problems
We now describe a method for solving inverse parametric optimization on a more general class of optimal subgraph problems, in which we are given a graph with parametric edge weights and must find the minimum weight suitable subgraph, where suitability is defined according to the particular problem. The minimum spanning tree problem considered earlier has this form, with the suitable subgraphs simply being trees. The shortest path and minimum weight matching problems also have this form. In order to solve these problems, we resort to the ellipsoid method from linear programming. This has the disadvantage of being neither strongly polynomial nor very practical, but its advantages are in its extreme generality – not only can we handle any optimal subgraph problem for which the optimization version is polynomial, but (unlike our MST algorithms) we are not limited to a fixed number of parameters.
A good introduction to the ellipsoid method and its applications in combinatorial optimization can be found in the book by Grötschel, Lovász, and Schrijver .
###### Lemma 6 (Grötschel, Lovász, and Schrijver , p. 158)
For any polyhedron $`P`$ defined by a strong separation oracle, and any rational linear objective function $`f`$, one can find the point in $`P`$ maximizing $`f`$ in time polynomial in the dimension of $`P`$ and in the maximum encoding length of the linear inequalities defining $`P`$.
The strong separation oracle required by this result is a routine that takes as input a $`d`$-dimensional point and either determines that the point is in $`P`$ or returns a closed halfspace containing $`P`$ and not containing the test point. One slight technical difficulty with this approach is that it requires the polyhedron to be closed (else one could not separate it from a point on one of its boundary facets) while our problems are defined by strict inequalities forming open halfspaces. To solve this problem, we introduce an additional parameter $`\delta `$ measuring the separation of the desired optimal subgraph from other subgraphs, and attempt to maximize $`\delta `$.
###### Theorem 3
Let $`(G,X)`$ be an inverse parametric optimization problem in which $`G`$ is a graph with parametric edge weights, $`X`$ is the given solution for an optimal subgraph problem, and there exists a polynomial time algorithm that either determines that $`X`$ is the unique optimal subgraph or finds a different optimal subgraph $`Y`$. Then we can solve the inverse parametric optimization problem for $`(G,X)`$ in time polynomial in the number of parameters, in the size of the graph, and in the maximum encoding length of the linear functions defining the edge weights of $`G`$.
Proof: We define a polyhedron $`P`$ by linear inequalities $`w(X)\le w(Y)-\delta `$ where $`w`$ denotes the weight of a subgraph for the given point $`𝐩`$, $`Y`$ can be any suitable subgraph, and $`\delta `$ is an additional parameter. To avoid problems with unboundedness, we can also introduce additional normalizing inequalities $`-𝟏\le 𝐩\le 𝟏`$. Clearly, there exists a point $`(𝐩,\delta )`$ with $`\delta >\mathrm{0}`$ in $`P`$ if and only if $`𝐩`$ gives a feasible solution to the inverse parametric optimization problem.
Although there can be exponentially many inequalities, we can easily define an oracle that either terminates the entire algorithm successfully or acts as a strong separation oracle: to test a point $`(𝐩,\delta )`$, simply compute the optimal subgraph $`Y`$ for the weights defined by $`𝐩`$. If $`X=Y`$, we have solved the problem. If $`w(X)\le w(Y)-\delta `$, the point is feasible. Otherwise, return the halfspace $`w(X)\le w(Y)-\delta `$.
Therefore, we can apply the ellipsoid method to find the point maximizing $`\delta `$ on $`P`$. If the method returns a point with $`\delta >\mathrm{0}`$ or terminates early with $`X=Y`$, we must have solved the problem, otherwise the problem must be infeasible.
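For the spanning tree case, the oracle in this proof takes a particularly simple form. The sketch below is our illustration (the networkx calls are real, but the edge attribute name `c` and the return convention are assumptions); it returns either success, feasibility, or the coefficients of a violated halfspace over the variables $`(𝐩,\delta )`$.

```python
# Sketch of the strong separation oracle, instantiated for minimum spanning
# trees (ours). Variables are (p_1, ..., p_d, delta); edge attribute 'c'
# holds the cost vector of each edge.
import networkx as nx
import numpy as np

def separation_oracle(G, X_edges, point):
    p, delta = np.asarray(point[:-1], dtype=float), float(point[-1])
    for u, v, data in G.edges(data=True):
        data['w'] = float(np.dot(data['c'], p))        # edge weights at p
    Y = nx.minimum_spanning_tree(G, weight='w')
    if set(map(frozenset, Y.edges)) == set(map(frozenset, X_edges)):
        return 'solved', None                          # X is optimal at p
    wX = sum(G.edges[e]['w'] for e in X_edges)
    wY = sum(d['w'] for _, _, d in Y.edges(data=True))
    if wX <= wY - delta:
        return 'feasible', None
    # violated constraint w(X) <= w(Y) - delta; return coefficients a so that
    # a . (p, delta) <= 0 is a halfspace containing the polyhedron P
    cX = sum(np.asarray(G.edges[e]['c'], dtype=float) for e in X_edges)
    cY = sum(np.asarray(G.edges[e]['c'], dtype=float) for e in Y.edges)
    return 'cut', np.append(cX - cY, 1.0)
```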
###### Corollary 1
We can solve the inverse parametric minimum spanning tree, shortest path, or matching problems in time polynomial in the size of the given graph and in the encoding length of its parametric weight functions.
As a variant of this result, by using an algorithm for finding the second best subgraph, we can complete the ellipsoid method without early termination and find a parameter value for which $`X`$ is optimally separated from other subgraphs. Efficient second-best algorithms are known for minimum spanning trees , shortest paths , and matching ; in general the second-best subgraph is the best subgraph within all graphs formed by deleting one edge of $`X`$ from $`G`$.
## 4 Game Tree Search
As described in the introduction, we would like to be able to tune the weights of a game program’s evaluation function so that a shallow search (to some fixed depth $`D`$) makes the correct move for each position in a given test suite. However, because of the possibility of making the right move for the wrong reasons, this problem seems to be highly nonlinear. So, in order to apply our inverse parametric optimization technique to this problem, we need some further assumptions.
Define an unavoidable set of positions for a given player and depth $`D`$ to be a set of positions, each of which occurs $`D`$ half-moves from the present situation, such that, no matter what the opponent does, the given player can force the game to reach some position in the set. More generally, we can define an unavoidable set for any subset of positions to be a set such that, if the game ends within that subset, the player on move can force it to be in the unavoidable set. For any given position, one can prove that one particular move is best by exhibiting an unavoidable set $`A_i`$ for the positions reachable from that move (from the perspective of the player to move) and an unavoidable set $`B_i`$ (from the perspective of the other player) for the positions reachable from the other moves, such that the minimum evaluation of any position in $`A_i`$ is greater than the maximum evaluation of any position in $`B_i`$. Minimax or alpha-beta search can be interpreted as finding both of these sets.
For a given position in a test suite, we will assume that the position can be solved correctly by searching sufficiently deeply: that is, there exists a depth $`D^{}>D`$ such that, if we search (with some untuned or previously-tuned evaluation function) to depth $`D^{}`$, we will find the correct move, and not only that but we will find a correct depth-$`D`$ strategy: unavoidable sets $`A_i`$ and $`B_i`$ at depth $`D`$ such that any good evaluation function should evaluate all positions in $`A_i`$ greater than all positions in $`B_i`$. We will therefore say that an evaluation function evaluates the position correctly if it evaluates all positions in $`A_i`$ greater than all positions in $`B_i`$. If it does (and it implements a correct minimax search routine), it must make the correct move in the given position.
Thus, the problem of finding an evaluation function that evaluates each test suite position correctly can be cast into the same form used in the deterministic minimum spanning tree algorithm: a family of sets $`A_i`$ and $`B_i`$, and a requirement that the parameter choice correctly sort the members of $`A_i`$ from the members of $`B_i`$. However, there are two problems with using the $`ϵ`$-net based sampling approach of that algorithm. First, the game evaluation problem seems likely to have many more parameters than the minimum spanning tree problem, casting into doubt the requirement that the number of parameters be a fixed constant. And second, doing a deep search to compute and store the unavoidable sets for each test suite position could be very costly.
Instead, we take the same approach used for the other optimal subgraph problems, of using the ellipsoid method for linear programming with a separation oracle. In this case, the separation oracle consists of running a depth-$`D`$ search on each test position, until one is found at which the wrong move is made. Once that happens, we can compute $`A_i`$ and $`B_i`$ for that one position, using a deep search, and compare the values of the evaluation function on those sets. (In fact the unavoidable sets by which the shallow search “proves” that it has the correct move for its evaluation must intersect $`A_i`$ and $`B_i`$ in at least one member, so we can do this comparison by a single shallow search.) If this separation oracle finds an $`a\in A_i`$ and $`b\in B_i`$ that have evaluations in the wrong order, it returns a constraint that the evaluation of $`a`$ should be greater than the evaluation of $`b`$. Otherwise, if it fails to find a separating constraint, we may still not evaluate each position correctly, but we must make the correct move in each position.
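In code, this oracle might look as follows. This is a heavily hedged sketch of ours: `shallow_search`, `deep_unavoidable_sets` and `features` are hypothetical stand-ins for an engine’s search and feature extraction, not a real API, and the evaluation is assumed to be the dot product of the weight and feature vectors.

```python
# Heavily hedged sketch of the separation oracle for evaluation tuning (ours).
# shallow_search, deep_unavoidable_sets and features are hypothetical helpers.
import numpy as np

def evaluation_oracle(suite, weights, D):
    """Returns None if depth-D search plays every suite position correctly,
    else a pair (a, b) of positions encoding the violated constraint
    weights . (features(a) - features(b)) > 0."""
    ev = lambda pos: float(np.dot(weights, features(pos)))
    for position, best_move in suite:
        if shallow_search(position, weights, depth=D) == best_move:
            continue
        A, B = deep_unavoidable_sets(position, best_move)  # one deep search
        a = min(A, key=ev)     # worst position the mover should still reach
        b = max(B, key=ev)     # best position the opponent should still reach
        if ev(a) <= ev(b):
            return a, b
    return None
```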
###### Theorem 4
If there exists a setting of weights for an evaluation function that evaluates each position of a given test suite correctly, then we can find a setting that makes each move correctly. The algorithm for finding this setting performs a polynomial number of iterations, where each iteration makes at most one shallow search on each position of the suite, together with a single deep search on a single suite position.
## 5 Conclusions
We have discussed several problems of inverse parametric optimization, provided general solutions to a wide class of optimal subgraph problems based on the ellipsoid method, and faster combinatorial algorithms for the inverse parametric minimum spanning tree problem.
One difficulty with our approach comes from infeasible inputs: what if there is no linear combination of parameters that leads to the desired solution? Rogers and Langley observe a similar phenomenon in their vehicle routing experiments, and suggest searching for additional parameters to use. This search may be aided by the fact that infeasible linear programs can be witnessed by a small number of mutually inconsistent constraints: in the path planning problem, we can find $`d+\mathrm{1}`$ paths, one of which must be better than the given path for any combination of known parameters. Studying these paths may reveal the nature of the missing parameters. Alternatively, a search for a linear programming solution with few violated constraints may provide a parameter setting for which the user’s chosen solution is near-optimal.
A natural direction for future research is in dealing with nonlinearity. Problems in which the solution weight includes low-degree combinations of element weights (as are used in game programming to represent interactions between positional features) may be dealt with by including additional parameters for each such combination. But what about problems in which the element weights are nonlinear combinations of the parameters? For instance, if the parameters are coordinates of points, any problem involving comparisons of distances will involve quadratic functions of those coordinates. The question of finding coordinates such that a given tree is the Euclidean minimum spanning tree of the points is known to be NP-hard , but if the points’ coordinates depend only on a constant number of parameters one can solve the problem in polynomial time. Can the exponent of this polynomial be made independent of the number of parameters?
It may be possible to extend our spanning tree methods to other matroids. E.g., transversal matroids provide a formulation of bipartite matching in which the weights are on the vertices of one side of the bipartition, rather than the edges. Can we solve inverse parametric transversal matroid optimization efficiently? Are there natural applications of this or other matroidal problems?
Another open question concerns the existence of combinatorial algorithms for the inverse parametric shortest path problem. It is unlikely that a strongly polynomial algorithm exists without restricting the dimension: one can encode any linear programming feasibility problem as an inverse parametric shortest path (or other optimal subgraph) problem, by using a parallel pair of edges for each constraint. But is there a strongly polynomial algorithm for inverse parametric shortest paths when the number of parameters is small?
# DESY–99–054 𝑊 Production and the Search for Events with an Isolated High-Energy Lepton and Missing Transverse Momentum at HERA
## 1 Introduction
This paper reports the results of an investigation into the production of $`W`$ bosons in positron-proton collisions at HERA. The collider operated from 1994 to 1997 with positron and proton beam energies of $`27.5`$ and $`820`$ $`\mathrm{GeV}`$ respectively, resulting in a centre-of-mass energy of $`300`$ $`\mathrm{GeV}`$. During this period the ZEUS detector collected data corresponding to an integrated luminosity of $`47.7`$ $`\mathrm{pb}^{-1}`$.
The Standard Model calculation of the cross section for the production of $`W`$ bosons via the reaction $`e^+p\to e^+W^\pm X`$ yields a value of roughly $`1`$ $`\mathrm{pb}`$ . The $`W`$-production cross section is sensitive to the couplings at the $`WW\gamma `$ vertex, particularly at large hadronic transverse momentum (“hadronic $`P_T`$”). A measurement of the cross section can therefore provide limits on anomalous $`WW\gamma `$ couplings. In addition, the measurement gives a useful constraint on the $`W`$-production background to a variety of searches for non-Standard-Model physics at HERA.
The search for signals for $`W`$ production in both the electron<sup>1</sup><sup>1</sup>1The term electron is used to refer to both electrons and positrons. and muon decay channels was performed by selecting events with large missing transverse momentum (“missing $`P_T`$”) which also contain isolated leptons with high transverse momentum (“high $`P_T`$”). The result of the search in each decay channel is consistent with Standard Model expectations. The integrated luminosity and the signal-to-background ratio in the electron channel search are sufficient to allow a first estimate of the cross section for $`W`$ production in electron-proton interactions. The searches in the two decay channels are combined at large hadronic $`P_T`$ in order to obtain cross-section limits in this region, and limits on the couplings at the $`WW\gamma `$ triple-gauge-boson vertex are calculated.
The H1 collaboration has recently reported the observation of six events containing isolated high-energy leptons and missing transverse momentum . The number of muon events is significantly larger than the Standard Model expectation, to which $`W`$ production forms the major contribution. A search for such events with the ZEUS detector, using similar cuts and with a similar sensitivity to that of the H1 analysis, yields results in good agreement with the Standard Model.
In Section 2 of this article, the $`W`$-production signal and its simulation are discussed in more detail. Background processes are treated in Section 3. Section 4 describes the ZEUS detector. Details of the event reconstruction and pre-selection are given in Section 5. The results of the analysis in the electron and muon channels are presented in Sections 6 and 7, respectively. Section 8 presents upper limits on the cross section for $`W`$ production at large hadronic $`P_T`$, while Section 9 presents the ZEUS analysis of events containing an isolated high-energy charged particle in addition to large missing transverse momentum. The results are summarised in Section 10.
## 2 $`W`$ Production at HERA
The dominant $`W`$-production process in $`e^+p`$ collisions at HERA is the reaction
$$e^+p\to e^+W^\pm X,$$
(1)
in which the scattered beam electron emerges at small angles with respect to the lepton beam direction and is generally not found in the central detector. The observed event topology therefore consists of the hadronic final state $`X`$, which typically carries small transverse momentum, and the $`W`$ decay products at comparatively large laboratory angles.
### 2.1 Cross-Section Calculation
The leading-order diagrams for reaction (1) are shown in Fig. 1. Diagrams (a) and (b) correspond to $`W`$ radiation from the incoming and scattered quark, respectively. Diagrams (f) and (g), in which the $`W`$ couples to the incoming or scattered lepton line, are suppressed by a second heavy propagator. Diagram (c) contains the $`WW\gamma `$ triple-gauge-boson coupling. Diagrams (d) and (e), required to preserve gauge invariance, contain off-shell $`W`$’s which give rise primarily to low-$`P_T`$ charged leptons and lepton-neutrino invariant masses far from the $`W`$ mass.
The contributions of the different diagrams are calculated with the Monte Carlo based program EPVEC . The fermion $`u`$-channel pole of diagram (a) is regularised by splitting the phase space into two regions :
$$\sigma =\sigma (|u|>u_{cut})+\int ^{u_{cut}}\frac{d\sigma }{d|u|}d|u|$$
where $`u=(p_q-p_W)^2`$ and $`p_q`$, $`p_W`$ are the four momenta of the incoming quark and final state $`W`$ boson, respectively. The first term is calculated using helicity amplitudes for the process $`e^+q\to e^+Wq^{}`$, $`W\to f\overline{f}^{}`$. The cross section for small values of $`|u|`$ is calculated by folding the cross section for $`q\overline{q}^{}\to W\to f\overline{f}^{}`$ with the parton densities in the proton and the effective parton densities for the resolved photon emitted by the incoming electron. The resulting total cross section for reaction (1) varies little with $`u_{cut}`$, chosen here to be $`25`$ $`\mathrm{GeV}^2`$.
Using the MRS(G) set of parton densities in the proton evaluated at a scale $`M_W^2`$ and the GRVG-LO set of parton densities in the photon evaluated at a scale $`p_W^2/10`$, the cross sections are $`0.52`$ $`\mathrm{pb}`$ for $`W^+`$ and $`0.42`$ $`\mathrm{pb}`$ for $`W^{}`$ production via reaction (1), giving a total of $`0.95`$ $`\mathrm{pb}`$.
The cross section for the process $`e^+p\to \overline{\nu }W^+X`$, also calculated using EPVEC, is only about $`5\%`$ of that for reaction (1). The $`Z^0`$ production process $`e^+p\to e^+Z^0X`$, with a cross section of around $`0.3`$ $`\mathrm{pb}`$, has been simulated using EPVEC in order to estimate the contribution of $`Z^0\to l^+l^{}`$ and $`Z^0\to \nu \overline{\nu }`$ decays to the high-$`P_T`$ lepton samples considered here.
### 2.2 Cross-Section Uncertainties
The use of different proton and photon parton densities changes the calculated $`W`$-production cross section by up to $`5\%`$ and $`10\%`$, respectively. Large uncertainties also result from the choice of hard scale used to evaluate the structure functions. Added in quadrature, the combined effect of these uncertainties leads to an estimated overall uncertainty in the $`W`$-production cross section of about $`20\%`$.
EPVEC is a leading-order program and includes no QCD radiation. A recent calculation of the cross section for reaction (1) includes a next-to-leading-order (NLO) calculation of the resolved-photon contribution . The result is $`0.97`$ $`\mathrm{pb}`$, close to the EPVEC estimate of $`0.95`$ $`\mathrm{pb}`$. The scale dependence of the NLO cross section is reduced to the $`5`$-$`10\%`$ level, although the structure-function-related uncertainties remain. Changes to the hadronic-$`P_T`$ spectrum due to higher-order effects, which could be important for some acceptance calculations and for setting coupling limits, are currently unknown.
## 3 Background Processes
The most important background to $`W`$ production in the electron decay channel arises from high-$`Q^2`$ charged- and neutral-current deep inelastic scattering. These have both been simulated using the event generator DJANGO6 , an interface to the Monte Carlo programs HERACLES 4.5 and LEPTO 6.5 . Leading-order QCD and electroweak radiative corrections are included and higher-order QCD effects are simulated via parton cascades using both the parton shower and matrix elements approach of LEPTO and the colour-dipole model ARIADNE . The final hadronisation of the partonic final state is performed with JETSET .
Two-photon processes provide an additional source of high-$`P_T`$ leptons which are a significant background in the muon decay channel. The dominant, Bethe-Heitler, process has been simulated using the event generator LPAIR including both elastic and inelastic production at the proton vertex.
Finally, photoproduction has been simulated with the HERWIG Monte Carlo program, including both resolved and direct photon contributions.
## 4 The ZEUS Detector
A detailed description of the ZEUS detector can be found elsewhere . The main components used in this analysis were the compensating uranium-scintillator calorimeter (CAL) and the central tracking detector (CTD).
The CAL is divided into three parts, forward (FCAL) covering the polar angle<sup>2</sup><sup>2</sup>2The ZEUS coordinate system is right-handed with the $`Z`$-axis pointing in the proton beam direction and the horizontal $`X`$-axis pointing towards the centre of HERA. The polar angle, $`\theta `$, is measured with respect to the $`+Z`$-axis and the pseudorapidity, $`\eta `$, is related to the polar angle by $`\eta =-\mathrm{ln}(\mathrm{tan}(\theta /2))`$. interval $`3^{\circ }<\theta <37^{\circ }`$, barrel (BCAL) covering the range $`37^{\circ }<\theta <129^{\circ }`$ and rear (RCAL) covering the range $`129^{\circ }<\theta <176^{\circ }`$, as viewed from the nominal interaction point . Each part is divided into towers approximately $`20\times 20`$ $`\mathrm{cm}`$ in transverse size and segmented longitudinally into an electromagnetic (EMC) section and two hadronic (HAC) sections (one in RCAL). Within the EMC section each tower is further subdivided transversely into four cells (two in RCAL). Each cell is read out by a pair of wavelength shifters and photomultiplier tubes. Calorimeter energy resolutions of $`\sigma _E/E=18\%/\sqrt{E(\mathrm{GeV})}`$ for electrons and $`\sigma _E/E=35\%/\sqrt{E(\mathrm{GeV})}`$ for hadrons have been measured under test-beam conditions. An instrumented-iron backing calorimeter (BAC) measures energy leakages from the central uranium calorimeter .
The CTD is a cylindrical multi-wire drift chamber operating in a $`1.43`$ $`\mathrm{T}`$ solenoidal magnetic field . A momentum measurement, for tracks passing through at least 2 of the 9 radial superlayers, can be made in the polar angle range $`15^{\circ }<\theta <164^{\circ }`$. The transverse-momentum resolution for full length tracks can be parameterised as $`\sigma (P_T)/P_T=0.0058P_T\oplus 0.0065\oplus 0.0014/P_T`$, with $`P_T`$ in $`\mathrm{GeV}`$.
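Reading the $`\oplus `$ symbols as addition in quadrature (our interpretation of the standard convention), the resolution at a given $`P_T`$ can be evaluated as in the short sketch below (ours, not ZEUS code).

```python
# Evaluate the CTD transverse-momentum resolution, combining the three terms
# in quadrature (a sketch of ours).
import numpy as np

def ctd_pt_resolution(pt_gev):
    return np.sqrt((0.0058 * pt_gev) ** 2 + 0.0065 ** 2 + (0.0014 / pt_gev) ** 2)

print(ctd_pt_resolution(10.0))   # about 0.058, i.e. a ~6% resolution at 10 GeV
```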
The luminosity is determined from the rate of high-energy photons produced in the process $`ep\to ep\gamma `$ which are measured in a lead-scintillator calorimeter located at $`Z=-107`$ $`\mathrm{m}`$ .
The ZEUS three-level trigger system efficiently selects events with large missing and total transverse energies . Several triggers at each level are used to tag events used for this analysis, with the relevant energy thresholds generally reduced if a good CTD track is present in addition to large calorimeter energies. Algorithms based on tracking and calorimeter timing information reject non-$`ep`$ backgrounds, consisting mainly of proton beam-gas interactions and cosmic rays.
## 5 Event Reconstruction and Pre-selection
The calorimeter transverse momentum is defined as :
$$\text{calorimeter }P_T=\sqrt{\left(\sum _ip_{X,i}\right)^2+\left(\sum _ip_{Y,i}\right)^2},$$
(2)
where $`p_{X,i}=E_i\mathrm{sin}\theta _i\mathrm{cos}\varphi _i`$ and $`p_{Y,i}=E_i\mathrm{sin}\theta _i\mathrm{sin}\varphi _i`$ are calculated using the energies ($`E_i`$) of individual calorimeter cells that are above noise thresholds of $`80`$ $`\mathrm{MeV}`$ (EMC) and $`140`$ $`\mathrm{MeV}`$ (HAC). The angles $`\theta _i`$ and $`\varphi _i`$ are estimated from the geometric cell centres and the event vertex. Note that in $`W\to e\nu `$ events, calorimeter $`P_T`$ as defined above is an estimate of the missing $`P_T`$ or transverse momentum carried by the neutrino. In muon events, a combination of the calorimeter $`P_T`$ and the transverse momentum of the muon track measured in the CTD is used to calculate the missing $`P_T`$.
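A minimal sketch of Eq. (2), assuming the cell quantities are available as arrays (the record layout is ours, not the ZEUS reconstruction code):

```python
# Minimal sketch of Eq. (2) (ours). E in GeV; theta, phi from the geometric
# cell centres and the event vertex; is_emc flags electromagnetic cells so
# the 80 / 140 MeV noise thresholds can be applied as in the text.
import numpy as np

def calorimeter_pt(E, theta, phi, is_emc):
    keep = E > np.where(is_emc, 0.080, 0.140)
    px = np.sum(E[keep] * np.sin(theta[keep]) * np.cos(phi[keep]))
    py = np.sum(E[keep] * np.sin(theta[keep]) * np.sin(phi[keep]))
    return np.hypot(px, py)
```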
Electron (hadron) transverse momenta are defined as sums over those calorimeter cells that are (are not) assigned to the electron candidate cluster. Longitudinal momentum conservation ensures that $`E-p_Z`$, defined as
$$E-p_Z=\sum _iE_i(1-\mathrm{cos}\theta _i),$$
peaks at $`2E_e`$ for fully contained events, where $`E_e`$ is the electron beam energy. Smaller values of $`E-p_Z`$ indicate energy escaping detection, either in the rear beam pipe or in the form of muons or neutrinos.
The acoplanarity angle $`\mathrm{\Phi }_{\mathrm{ACOP}}`$, illustrated in Fig. 2, is the azimuthal separation of the outgoing lepton and the vector in the $`\{X,Y\}`$-plane that balances the hadronic-$`P_T`$ vector. For well measured neutral-current events the acoplanarity angle is close to zero, while large acoplanarity angles indicate large missing energies. The transverse mass is defined as
$$M_T=\sqrt{2P_T^lP_T^\nu (1-\mathrm{cos}\mathrm{\Phi }^{l\nu })},$$
where $`P_T^l`$ is the lepton transverse momentum, $`P_T^\nu `$ is the magnitude of the missing $`P_T`$ and $`\mathrm{\Phi }^{l\nu }`$ is the azimuthal separation of the lepton and missing-$`P_T`$ vectors, as shown in Fig. 2.
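The remaining two quantities are equally direct to compute; the companion sketch below (ours, using the same assumed cell arrays as the previous sketch and illustrative lepton inputs) evaluates $`E-p_Z`$ and $`M_T`$.

```python
# Companion sketch (ours) for E - p_Z and the transverse mass.
import numpy as np

def e_minus_pz(E, theta):
    # peaks at 2 E_e = 55 GeV for fully contained events
    return np.sum(E * (1.0 - np.cos(theta)))

def transverse_mass(pt_lepton, pt_miss, dphi):
    """dphi: azimuthal separation of the lepton and missing-P_T vectors."""
    return np.sqrt(2.0 * pt_lepton * pt_miss * (1.0 - np.cos(dphi)))
```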
Events that pass the trigger requirements are further required to have a reconstructed calorimeter $`P_T`$ in excess of $`12`$ $`\mathrm{GeV}`$. The transverse momentum, calculated excluding the inner ring of calorimeter cells around the forward beam-pipe hole, must be greater than $`9`$ $`\mathrm{GeV}`$. These offline cuts are more stringent than the corresponding online trigger thresholds in any given year of data taking. Other pre-selection cuts common to both the electron and muon event selections are the requirements that the $`Z`$-coordinate of the tracking vertex be reconstructed within $`50`$ $`\mathrm{cm}`$ of the nominal interaction point and have at least one associated track with transverse momentum greater than $`0.2`$ $`\mathrm{GeV}`$ and a polar angle in the range $`15^{\circ }<\theta <164^{\circ }`$. Cuts on the calorimeter timing and algorithms based on the pattern of tracks in the CTD reject beam-gas, cosmic-ray and halo-muon events.
## 6 Search for $`W`$ Production and Decay $`W\to e\nu `$
Electron-identification criteria are applied to the pre-selected events and the data are subsequently compared to the Monte Carlo simulation. Final results are presented after further cuts designed to optimise the sensitivity to the $`W`$-production signal.
### 6.1 Electron Identification
A neural-network-based algorithm to identify electrons, trained on Monte Carlo events and optimised for maximum electron-finding efficiency and electron-hadron separation, selects candidate electromagnetic clusters in the calorimeter . A cut on the electromagnetic-cluster energy of $`8`$ $`\mathrm{GeV}`$ is made, above which the neural network is fully efficient except at the boundaries between the different calorimeter parts. The impact point of the electron at the face of the calorimeter is determined with a resolution of $`1`$ $`\mathrm{cm}`$ using the pulse-height information from the pairs of photomultipliers reading out each cell. The distance of closest approach of a matching extrapolated CTD track to the electromagnetic cluster is required to be less than $`10`$ $`\mathrm{cm}`$, where only tracks with $`15^{\circ }<\theta <164^{\circ }`$ are considered. The background from fake electrons is reduced by requiring that the energy not associated with the electron in an $`\{\eta ,\varphi \}`$ cone of radius $`0.8`$ around the electron direction be less than $`4`$ $`\mathrm{GeV}`$. Moreover, since most fake electron candidates are misidentified hadrons close to jets, this background is further reduced by requiring that the electron track be separated by at least $`0.5`$ units in $`\{\eta ,\varphi \}`$ space from other tracks associated with the event vertex.
### 6.2 Comparison of Data and Monte Carlo
The data are compared to the expectation from the Monte Carlo simulation in Fig. 3, after requiring that the transverse momentum of the electron, $`P_T^e`$, be greater than $`5`$ $`\mathrm{GeV}`$ and the polar angle of the electron measured in the calorimeter, $`\theta _e`$, be less than $`2.0`$ $`\mathrm{rad}`$. Neutral-current background events dominate the sample at this stage of the selection, as is evident from the steeply falling missing-$`P_T`$ spectrum and the concentration of events at small acoplanarity angles. A Jacobian-peak structure is visible in the transverse-mass distribution for the Monte Carlo simulation of the signal events, shown in Fig. 3(f). Figure 3(g) shows the $`P_T`$ reconstructed in the backing calorimeter and Fig. 3(h) shows the azimuthal separation between the BAC $`P_T`$ and uranium-calorimeter missing-$`P_T`$ directions, for events with BAC energy deposits. All distributions show reasonable agreement between the data and Monte Carlo.
### 6.3 Final Cuts and Results
The neutral-current background is heavily suppressed by requiring the missing $`P_T`$ to be greater than $`20`$ $`\mathrm{GeV}`$ and the acoplanarity angle to be greater than $`0.3`$ $`\mathrm{rad}`$, indicated by the arrows in Figs. 3(d) and (e). The latter cut is only applied to events with a hadronic $`P_T`$ in excess of $`4`$ $`\mathrm{GeV}`$, for which the acoplanarity angle is well defined. Electrons in the final event sample must, in addition, have $`P_T^e>10`$ $`\mathrm{GeV}`$ and $`\theta _e<1.5`$ $`\mathrm{rad}`$. Neutral-current background is further reduced by removing events with energy deposits in the backing calorimeter that are closely aligned with the direction of the missing $`P_T`$. Finally, requiring that the matching electron track have a transverse momentum greater than $`5`$ $`\mathrm{GeV}`$, as measured in the CTD, removes most of the remaining fake electrons.
Three data events, all of which have a final-state $`e^+`$, survive these cuts. The properties of these events are given in Table 1 and compared in Fig. 4 to the $`W`$-production Monte Carlo with all cuts applied.
For these figures and the table, the electron and hadron energies used in calculating the missing $`P_T`$ and transverse mass have been corrected for the effect of the inactive material between the $`ep`$ interaction point and the calorimeter. The corrections are typically a few percent for the electron and $`10\%`$ for the hadron transverse momenta. The momenta of the electrons measured in the CTD agree within errors with the corrected associated calorimeter energies. Given the different cross sections and selection efficiencies for $`W^+`$ and $`W^{}`$ production and decay, roughly $`60\%`$ of signal events are expected to have an $`e^+`$ rather than an $`e^{}`$ in the final state, while background events are expected to contain predominantly $`e^+`$ candidates. The charge composition of the sample is therefore consistent with expectations. The event in Table 1 with the highest transverse mass is illustrated in Fig. 5. Event 2 has a similar topology while event 1 has no visible hadronic jet and consequently a small value for the reconstructed hadronic $`P_T`$.
The Monte Carlo expectation after all cuts is $`2.1`$ signal events and $`1.1\pm 0.3`$ background events. The background consists mainly of charged-current DIS, with smaller contributions from Bethe-Heitler di-lepton production and $`Z^0\to \nu \overline{\nu }`$ events in which the beam electron is scattered at small polar angles. Taking into account the efficiency for selecting $`W\to e\nu `$ events of $`38\%`$ and the small extra sensitivity resulting from a $`2\%`$ efficiency for selecting $`W\to \tau \nu `$ decays, the three events, after subtracting the expected background, correspond to a measured cross section for reaction (1) of:
$$\sigma (e^+p\to e^+W^\pm X)=0.9_{-0.7}^{+1.0}\text{ (stat.)}\pm 0.2\text{ (syst.)}\mathrm{\ pb}.$$
The systematic error is a combination of the uncertainty in the selection efficiency for $`We\nu `$ events, the uncertainty in the estimate of the remaining background to $`W`$ production and decay, and a small contribution from the uncertainty in the total integrated luminosity. The systematic uncertainty in the selection efficiency arises from uncertainties in the electron-finding procedure and the $`3\%`$ uncertainty in the absolute calorimeter energy scale. The error on the background estimate is a combination of statistical errors and uncertainties due to the choice of model for simulating parton cascades in the DIS Monte Carlo.
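For orientation, the arithmetic behind the central value can be sketched as follows. The branching ratio and integrated luminosity are placeholders assumed for illustration (neither is quoted in this section), and the small $`W\to \tau \nu `$ contribution is neglected:

```python
# sigma = (N_obs - N_bkg) / (BR * eff * L); quantities marked "assumed" are
# illustrative placeholders, not values quoted in this section.
n_obs = 3.0      # observed W -> e nu candidates (from the text)
n_bkg = 1.1      # expected background events (from the text)
eff = 0.38       # W -> e nu selection efficiency (from the text)
br_enu = 0.108   # assumed branching ratio BR(W -> e nu)
lumi = 47.0      # assumed integrated luminosity [pb^-1]

sigma = (n_obs - n_bkg) / (br_enu * eff * lumi)
print(f"sigma(e+ p -> e+ W X) ~ {sigma:.2f} pb")  # ~0.98 pb, near the quoted 0.9 pb
```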
From the three observed events, an explicit upper limit on the $`W`$-production cross section has been derived, using the background expected from Monte Carlo and applying the method described in:
$$\sigma (e^+p\to e^+W^\pm X)<3.3\mathrm{\ pb}\text{ at }95\%\text{ C.L.}$$
The selection efficiency in the electron channel depends little on the recoiling hadronic $`P_T`$. This implies that the upper limit given above is insensitive to uncertainties in the underlying hadronic-$`P_T`$ distribution arising from higher-order effects or anomalous $`WW\gamma `$ couplings.
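The cited limit-setting prescription is not reproduced here; as a rough cross-check, a standard background-subtracted Poisson construction (an assumption, not necessarily the paper's exact method) gives a limit of similar size:

```python
import math

def pois_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, bkg, cl=0.95):
    """Smallest signal mean s satisfying
    P(N <= n_obs | s + b) = (1 - cl) * P(N <= n_obs | b), found by bisection."""
    target = (1.0 - cl) * pois_cdf(n_obs, bkg)
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pois_cdf(n_obs, bkg + mid) > target:
            lo = mid   # CDF still too large: need more signal
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 3 observed events over an expected background of 1.1:
print(f"95% C.L. limit: {upper_limit(3, 1.1):.1f} signal events")  # ~6.7
```

Dividing the resulting $`6.7`$ events by the same assumed $`\mathrm{BR}\times \epsilon \times L`$ as in the previous sketch gives a cross section of the same order as the quoted $`3.3`$ $`\mathrm{pb}`$.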
## 7 Search for $`W`$ Production and Decay $`W\to \mu \nu `$
The search for reaction (1) with the subsequent decay $`W\to \mu \nu `$ begins with the same sample of events with a large calorimeter missing $`P_T`$ used in the electron analysis (see Section 5). However, since a high-energy muon leaves only a small energy deposit in the calorimeter, this selection necessarily restricts the acceptance to $`W`$-production events with large hadronic transverse momenta.
### 7.1 Muon Identification
The energy deposited by minimum-ionising particles (MIPs) can be distributed across several calorimeter clusters. Therefore, neighbouring clusters are grouped together into larger-scale objects which, provided they pass topological and energy cuts, are called calorimeter MIPs. In this analysis, a muon candidate is simply a calorimeter MIP that matches an extrapolated CTD track within $`20`$ $`\mathrm{cm}`$, where only tracks in the polar angle range $`15^{\circ }<\theta <164^{\circ }`$ are considered. The muon transverse momentum, $`P_T^\mu `$, and direction, including the polar angle, $`\theta _\mu `$, are obtained from the matching CTD track. The same energy- and track-isolation requirements made in the electron analysis are also applied here to the muon candidate.
### 7.2 Comparison of Data and Monte Carlo
The data and Monte Carlo predictions are compared in Fig. 6, after requiring $`P_T^\mu >5`$ $`\mathrm{GeV}`$ and $`\theta _\mu <2.0`$ $`\mathrm{rad}`$. Events with more than one muon candidate having $`P_T^\mu >2`$ $`\mathrm{GeV}`$ have been removed. Fig. 6(d) shows the missing transverse momentum, calculated by combining the muon and calorimeter $`P_T`$’s in the transverse plane after subtracting the muon’s contribution to the latter. In each case the distribution of events is similar to that expected from the Monte Carlo simulation of the background, which is dominated by Bethe-Heitler di-muon production.
### 7.3 Final Cuts and Results
The final stage in the selection of $`W\to \mu \nu `$ events requires the missing transverse momentum to be greater than $`15`$ $`\mathrm{GeV}`$. No data event survives this final cut, shown in Fig. 6(d), to be compared with an expected $`0.76`$ events from $`W`$ production and $`0.65\pm 0.22`$ from background. The latter is dominated by charged-current DIS and Bethe-Heitler $`\mu ^+\mu ^-`$ production. The efficiency for selecting $`W\to \mu \nu `$ events is $`13\%`$, lower than the corresponding efficiency in the electron channel due to the soft hadronic-$`P_T`$ spectrum expected for reaction (1). The resulting $`95\%`$ C.L. upper limit on the cross section for this reaction is therefore weaker, at $`3.7`$ $`\mathrm{pb}`$. Note that the value for the efficiency in the muon channel, calculated here using EPVEC, is much more sensitive to assumptions about the hadronic-$`P_T`$ distribution than the corresponding efficiency in the electron channel.
## 8 $`W`$ Production at Large Hadronic $`P_T`$
The final electron and muon samples described in Sections 6.3 and 7.3, respectively, are combined and events that have large hadronic $`P_T`$ are selected. The number of events with a corrected hadronic $`P_T`$ above specified values is shown in Table 2, along with the number of $`W`$-production and background events expected from Monte Carlo. The efficiencies listed in the table have been calculated for the subset of $`W`$-production events that have a true hadronic $`P_T`$ above the given value. They are averaged over all decay channels, thereby including a small contribution from $`W\to \tau \nu `$ decays. The resulting $`95\%`$ C.L. upper limits on the cross section for the reaction $`e^+p\to e^+W^\pm X`$ at large hadronic $`P_T`$ are also given in the table. Note that the selection efficiency in the muon channel is small for low hadronic $`P_T`$, reaching a plateau comparable to the efficiency in the electron channel at around $`20`$ $`\mathrm{GeV}`$. Cross-section limits for cuts on the hadronic $`P_T`$ at or above this value are insensitive to the underlying hadronic-$`P_T`$ distribution.
The limits given in Table 2 can be used to constrain any new physics process that produces events with a $`W`$ boson and large hadronic $`P_T`$. Such model-dependent analyses are outside the scope of this paper. It is nevertheless useful to parameterise such effects in terms of anomalous $`WW\gamma `$ couplings, which give rise to a harder transverse-momentum distribution of the $`W`$ than expected in the Standard Model. The most general effective Lagrangian that is consistent with Lorentz invariance, CP conservation and electromagnetic gauge invariance has two free couplings at the $`WW\gamma `$ vertex, conventionally labelled $`\kappa `$ and $`\lambda `$ . In the Standard Model they take the values $`\kappa =1`$ and $`\lambda =0`$; deviations are parameterised in terms of the anomalous couplings $`\mathrm{\Delta }\kappa =\kappa -1`$ and $`\lambda `$. The dependence of the total $`W`$-production cross section on these anomalous couplings is calculated using EPVEC. For example, the upper limit of $`0.58`$ $`\mathrm{pb}`$ for hadronic $`P_T>20`$ $`\mathrm{GeV}`$ corresponds to the following $`95\%`$ C.L. limits on $`\mathrm{\Delta }\kappa `$ and $`\lambda `$:
$$-4.7<\mathrm{\Delta }\kappa <1.5\qquad (\lambda =0),$$
$$-3.2<\lambda <3.2\qquad (\mathrm{\Delta }\kappa =0).$$
These limits on anomalous $`WW\gamma `$ couplings are insensitive to the assumed couplings at the $`WWZ`$ vertex, due to the suppression by the propagator mass. They are, however, significantly larger than the limits derived from analyses at the Tevatron and LEP2 .
## 9 Isolated-Track Search
The H1 collaboration has recently reported the observation of six events containing an isolated high-$`P_T`$ lepton and large missing $`P_T`$ in $`36.5`$ $`\mathrm{pb}^1`$ of $`e^+p`$ data . The H1 search required isolated tracks with $`P_T>10`$ $`\mathrm{GeV}`$ and a calorimeter $`P_T`$ exceeding $`25`$ $`\mathrm{GeV}`$. Five of the events contain muons, which may be compared to the $`0.5`$ $`W`$ events and $`0.25`$ other events (mainly $`\gamma \gamma \to \mu ^+\mu ^-`$) expected.
Although the ZEUS data presented above are in good agreement with Standard Model expectations, a separate search has been performed for isolated high-$`P_T`$ tracks in events with a large missing $`P_T`$, applying cuts similar to those used by H1. Because of the typical $`10\%`$ difference between observed and corrected hadronic-$`P_T`$ values, all events that have an uncorrected calorimeter $`P_T`$ greater than $`20`$ $`\mathrm{GeV}`$ are selected. In addition, the events are required to contain at least one jet with $`E_T>5`$ $`\mathrm{GeV}`$, an electromagnetic fraction less than $`0.9`$ and an angular size greater than $`0.1`$ $`\mathrm{rad}`$. Events with a neutral-current topology that have an acoplanarity angle less than $`0.2`$ $`\mathrm{rad}`$ are excluded. The high-$`P_T`$ track must pass through at least 3 radial superlayers of the CTD (corresponding to $`\theta \gtrsim 0.3`$ $`\mathrm{rad}`$) and have $`\theta <2.0`$ $`\mathrm{rad}`$. The isolation variables $`\mathrm{D}_{\mathrm{jet}}`$ and $`\mathrm{D}_{\mathrm{trk}}`$ are defined for a given track as the $`\{\eta ,\varphi \}`$ separation of that track from the nearest jet and the nearest neighbouring track in the event, respectively. All tracks with $`P_T>10`$ $`\mathrm{GeV}`$ in the selected events are plotted in the $`\{\mathrm{D}_{\mathrm{trk}},\mathrm{D}_{\mathrm{jet}}\}`$ plane in Fig. 7. The $`3`$ tracks selected with $`\mathrm{D}_{\mathrm{trk}}>0.5`$ and $`\mathrm{D}_{\mathrm{jet}}>1.0`$ agree well with the expectation of $`5.7\pm 0.8`$ tracks from combined Monte Carlo sources.
All three isolated tracks are positively charged and are identified as electrons using the algorithm and criteria described above. This is consistent with the $`3.5\pm 0.7`$ ($`2.0\pm 0.4`$) electron-type (muon-type) events expected from Monte Carlo, of which $`0.9`$ ($`0.4`$) are from $`W`$ production. Two of the isolated tracks correspond to events 2 and 3 of Table 1. The third track is found in an event with neutral-current topology in which there is evidence of a large energy leakage into the backing calorimeter. There is therefore no evidence of an excess of isolated high-$`P_T`$ tracks, whether identified as leptons or not, in the 1994–1997 ZEUS data.
## 10 Conclusions
A search for the decay $`W\to e\nu `$ in $`e^+p`$ collisions at a centre-of-mass energy of $`300`$ $`\mathrm{GeV}`$ yields three candidate events, of which $`1.1\pm 0.3`$ are estimated to arise from sources other than $`W`$ production. This results in an estimate of the cross section for the process $`e^+p\to e^+W^\pm X`$ of $`0.9_{-0.7}^{+1.0}\pm 0.2`$ $`\mathrm{pb}`$, consistent with the Standard Model prediction. The corresponding $`95\%`$ C.L. upper limit on the cross section of $`3.3`$ $`\mathrm{pb}`$ is insensitive to uncertainties in the underlying hadronic-$`P_T`$ distribution. A search for the decay $`W\to \mu \nu `$ yields no candidate event, also consistent with Standard Model expectations. Events with large hadronic $`P_T`$ in the combined electron-plus-muon sample have been used to set $`95\%`$ C.L. upper limits on the cross section for $`W`$ production, for example $`0.58`$ $`\mathrm{pb}`$ for hadronic $`P_T`$ greater than $`20`$ $`\mathrm{GeV}`$.
A number of events with large missing $`P_T`$ and an isolated high-$`P_T`$ lepton, in excess of Standard Model expectations, have been reported by the H1 collaboration. The search presented in this paper, with similar cuts and sensitivity, has revealed no such excess.
## Acknowledgements
This work would not have been possible without the dedicated efforts of the HERA machine group and the DESY computing staff. We would also like to thank the DESY directorate for their strong support and encouragement throughout. The design, construction and installation of the ZEUS detector would not have been possible without the hard work of many people who are not listed as authors. In addition, it is a pleasure to thank U. Baur, D. Zeppenfeld and M. Spira for providing their calculations and for many fruitful discussions.
# The Submillimeter Search for Very High Redshift Galaxies
## 1 Introduction
The cumulative rest-frame far-infrared (FIR) emission from all objects lying beyond our Galaxy, known as the cosmic FIR and submillimeter (submm) background, was recently detected by the FIRAS and DIRBE experiments on the COBE satellite (e.g. Puget et al. 1996; Fixsen et al. 1998). The discovery that this background was comparable to the total unobscured emission at optical/ultraviolet wavelengths immediately made it clear that a full accounting of the star formation history of the Universe could only be obtained through the resolution and detailed study of the individual components that make up the FIR/submm background.
The resolution of the background at 850 $`\mu `$m became possible with the installation of the Submillimeter Common User Bolometer Array (SCUBA; Holland et al. 1999) on the 15m James Clerk Maxwell Telescope (JCMT) on Mauna Kea, and results are now available from both blank field and lensed cluster field surveys (Smail, Ivison, & Blain 1997; Hughes et al. 1998; Barger et al. 1998; Barger, Cowie, & Sanders 1999; Blain et al. 1999; Eales et al. 1999; Lilly et al. 1999). Barger, Cowie, & Sanders (1999) used optimal fitting techniques combined with Monte Carlo simulations of the completeness of the source count determinations to show that a differential source count parameterization
$$n(S)=N_0/(a+S^{3.2})$$
(1)
reasonably fits both the cumulative 850 $`\mu `$m source counts and the 850 $`\mu `$m extragalactic background light (EBL) measurements. Here $`S`$ is the flux in mJy, $`N_0=3.0\times 10^4`$ per square degree per mJy, and the range $`a=0.4`$–$`1.0`$ matches the Fixsen et al. (1998) and Puget et al. (1996) EBL values. The 95 percent confidence range for the index is $`2.6`$–$`3.9`$. The extrapolation suggests that the typical submm source is about 1 mJy. The direct counts show that roughly 30 per cent of the 850 $`\mu `$m background comes from sources above 2 mJy.
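A short numerical check of this parameterization (with $`a=0.7`$ assumed, inside the allowed range) reproduces the quoted resolved fraction of the background above 2 mJy:

```python
from scipy.integrate import quad

N0, a = 3.0e4, 0.7  # counts normalisation [deg^-2 mJy^-1]; a assumed mid-range

def n(S):  # differential counts of Eq. (1), S in mJy
    return N0 / (a + S**3.2)

N_gt2 = quad(n, 2.0, 1e3)[0]                      # cumulative counts > 2 mJy
ebl_gt2 = quad(lambda S: S * n(S), 2.0, 1e3)[0]   # EBL from sources > 2 mJy
ebl_all = quad(lambda S: S * n(S), 1e-3, 1e3)[0]  # total EBL (integrand -> 0 as S -> 0)
print(f"N(>2 mJy) ~ {N_gt2:.0f} per deg^2")
print(f"EBL fraction above 2 mJy ~ {ebl_gt2 / ebl_all:.2f}")  # ~0.3
```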
## 2 Towards a redshift distribution
The redshift distribution of the submm population is needed to trace the extent and evolution of obscured star formation in the distant Universe. However, identifying the optical/NIR counterparts to the submm sources is difficult due to the uncertainty in the SCUBA positions. Barger et al. (1999b) presented a spectroscopic survey of possible optical counterparts to a flux-limited sample of galaxies selected from the 850 $`\mu `$m survey of massive lensing clusters by Smail et al. (1998). Identifications were attempted for all objects in the SCUBA error-boxes that were bright enough for reliable spectroscopy; redshifts or limits were obtained for 24 possible counterparts to 14 of the 17 SCUBA sources in the sample. The remaining three submm sources consisted of two blank fields and a third source that had not yet been imaged in the optical when the spectroscopic data were obtained. The redshift survey produced reliable identifications for six of the submm sources: two sets of galaxy pairs (a $`z=2.8`$ AGN/starburst (Ivison et al. 1998) and a $`z=2.6`$ starburst), two galaxies showing AGN signatures ($`z=1.16`$ and $`z=1.06`$), and two cD galaxies (cluster contamination). The galaxy pairs were later confirmed as the true counterparts through the detection at their redshifts of CO emission in the millimeter (Frayer et al. 1998, 1999). The relative paucity of AGNs in the general field population ($`<1`$%) suggests that if an AGN is identified, then the submm emission is most likely associated with that source. At least 20 per cent of the submm sources in the sample show signs of AGN activity.
For the eight remaining submm sources for which spectroscopic data of the most probable counterparts were obtained ($`z=0.18`$–$`2.11`$), the identifications are uncertain. As mentioned above, two of the submm detections in the sample have no visible counterparts in very deep imaging ($`I\gtrsim 26`$), and it is possible that the eight submm sources are similarly optically faint. We show in the next section that this is in fact highly likely. Such sources could either be at very high redshift or be so highly obscured that they are emitting their energy almost entirely in the submm. Using very deep NIR imaging of the cluster fields, Smail et al. (1999) recently detected possible extremely red object (ERO) counterparts to two of the submm sources tentatively identified with bright $`z\sim 0.5`$ spiral galaxies in Barger et al. (1999b).
## 3 Pinpointing submm sources via radio continuum observations
A promising alternative method for locating submm sources is through the use of radio continuum observations which can be made with subarcsecond positional accuracy and resolution. A relatively tight empirical correlation between radio continuum emission and thermal dust emission is known to exist in nearby star forming galaxies (Condon 1992). This FIR-radio correlation is due to both radiation processes being connected to massive star formation activity in a galaxy. We therefore decided to test whether the submm source population could be efficiently located by targeting known radio sources with SCUBA. We chose to use the complete Hubble flanking fields (HFF) 1.4 GHz VLA sample (70 sources at $`5\sigma `$) of Richards (1999). This sample is ideally suited for our purpose due to its uniform (8 $`\mu `$Jy at 1$`\sigma `$) sensitivity over the whole flanking field region and to the availability of corresponding deep ground-based optical and NIR imaging (Barger et al. 1999a). Richards et al. (1999) found that at the $`\mu `$Jy level 60% of the radio sources could be identified with bright disk galaxies ($`I<22`$) and 20% with low luminosity AGN. However, the remaining 20% could not be identified to optical magnitude limits of $`I=25`$.
We first made use of the precise radio positions to observe a complete subsample of the radio-selected objects with LRIS on the Keck II 10m telescope. In particular we wished to determine whether we could identify any spectral features in the optically-faint radio sources. We were able to spectroscopically identify nearly all the objects in our subsample to $`K<20`$ (maximum redshift $`z\sim 1.2`$), but we could not obtain redshifts for the fainter objects. The latter objects might be highly obscured star forming galaxies and hence would be good SCUBA targets.
We targeted 14 of the 16 $`K>21`$ radio sources in the HFF sample using SCUBA in jiggle map mode, which enabled us to simultaneously observe a large fraction of the optical/NIR-bright radio sources (35/54). Even with relatively shallow SCUBA observations (a 3$`\sigma `$ detection limit of 6 mJy at $`850\mu `$m), we were able to make 5 submm detections ($`>3\sigma `$) of the 14 targeted $`K>21`$ sources; in contrast, none of the 35 optical/NIR-bright sources were detected. Additionally, we detected two $`>6`$ mJy sources in our observed fields (which cover slightly more than half of the HFF) that did not have radio detections.
Our high success rate with our targeted observations indicates that selection in the radio is an efficient means of detecting the majority of the bright submm source population. We illustrate this in Figure 1 where we compare our radio-selected submm source counts with the combined source counts from published blank field submm surveys. An important corollary to the optical/NIR-faint radio sources being detectable as bright submm sources is that a large fraction of the sources in submm surveys will have extremely faint optical/NIR counterparts and hence cannot be followed up with optical spectroscopy.
## 4 Millimetric Redshift Estimation
Although we are unable to obtain spectroscopic redshifts for the optical/NIR-faint radio-selected submm sources, we can use the shape of the spectral energy distribution (SED) in the radio and submm to obtain millimetric redshift estimates. The slope of the SED changes abruptly at frequencies below $`10^{11}`$ Hz when the dominant contribution to the SED changes from thermal dust emission to synchrotron radio emission. Carilli & Yun (1999) suggested that the presence of this spectral break would enable the use of the submm-to-radio ratio as a redshift estimator. Figure 2 shows how visual redshift estimates for the radio-selected submm sources can be made from a plot of radio flux, $`S_{1.4\mathrm{GHz}}`$, versus submm flux, $`S_{353\mathrm{GHz}}`$, for a range in luminosities of the prototypical ultra-luminous infrared galaxy (ULIG) Arp 220. All of the submm sources detected in this survey fall in the redshift range $`z=1`$–$`3`$, consistent with the redshifts of the lensed submm sources (Barger et al. 1999b).
For $`z<3`$, we can express the Arp 220 predicted flux ratio as
$$S_{353\mathrm{GHz}}/S_{1.4\mathrm{GHz}}=1.1\times (1+z)^{3.8}$$
(2)
The primary uncertainty in this relation is the dust-temperature dependence, which in a local ULIG sample produces a spread of a multiplicative factor of 2 in the ratio relative to Arp 220. We have found that all known submm sources with radio fluxes or limits and spectroscopic identifications are broadly consistent with the expected ratio, although those with AGN characteristics have slightly lower ratios.
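Inverting Eq. (2) gives a direct millimetric redshift estimator. The example fluxes below are hypothetical, and the dust-temperature spread quoted above should be propagated as a systematic uncertainty:

```python
def millimetric_z(s_submm_mjy, s_radio_mjy):
    """Redshift estimate from the submm-to-radio flux ratio, Eq. (2);
    valid for z < 3 and an Arp 220-like SED."""
    return (s_submm_mjy / s_radio_mjy / 1.1) ** (1.0 / 3.8) - 1.0

# Hypothetical source: 8 mJy at 353 GHz with a 40 microJy radio counterpart.
print(f"z ~ {millimetric_z(8.0, 0.040):.1f}")  # ~2.9, within the observed z = 1-3
```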
## 5 Summary
About 30 per cent of the 850 $`\mu `$m background is already resolved, and the slope of the counts is sufficiently steep (a power-law index of $`-2.2`$ for the cumulative counts) that only a small extrapolation to fainter fluxes will give convergence with the background. Thus, the typical submm source contributing to the background seems to be in the $`1`$–$`2`$ mJy range.
Identifying the optical/NIR counterparts to submm sources is difficult due to the poor submm spatial resolution and the intrinsic faintness of the sources. In a spectroscopic survey of a lensed submm sample, only about one-quarter of the sources had optical counterparts bright enough to be spectroscopically identified, a large fraction of which showed AGN characteristics. We have recently found an alternative method for locating submm sources by targeting with SCUBA optical/NIR-faint radio sources whose positions are known to subarcsecond accuracy. Our success with this method suggests that a large fraction of the sources in submm surveys have extremely faint optical/NIR ($`K=21`$–$`22`$) counterparts and hence cannot be followed up with optical spectroscopy. However, redshift estimations can be made for these sources using the submm-to-radio ratios. We find that the detected sources fall in the same $`z=1`$–$`3`$ range as the spectroscopically identified sources. While still preliminary, the results suggest that the submm population dominates the star formation in this redshift range by almost an order of magnitude over the mostly distinct populations selected in the optical/ultraviolet.
###### Acknowledgements.
We thank our collaborators Dave Sanders, Ian Smail, Rob Ivison, Andrew Blain, and Jean-Paul Kneib for contributions to the work presented here.