no-problem/9907/hep-ex9907042.html | ar5iv | text
## 1 Introduction
Experiments at particle colliders have established, with great precision, the validity of today's Standard Model (SM) of particle physics as an accurate description of high energy physics up to mass scales of a few hundred GeV. Among the most precise tests, one finds the $`W`$ and $`Z`$ boson masses and their coupling strengths to fermions. Combining all these measurements and staying within the SM, one starts being sensitive to the last missing SM particle, the Higgs boson. Furthermore, accurate measurements of the $`Z`$ boson width allowed the number of light neutrino families to be determined as 2.9835$`\pm `$0.0083 and lepton universality to be established with an accuracy well below the 1% level. These precise laboratory measurements thus lead to the requirement that a more complete model has to include today's SM as a low energy approximation. It is often said that physics beyond the SM is required because some fundamental questions are not addressed by the SM. These problems are related to the “unnatural” mass splitting between the known fundamental fermions, with neutrino masses close to 0 eV and about $`175\times 10^9`$ eV for the top quark, and to the so-called hierarchy or fine tuning problem of the Standard Model. The hierarchy problem originates from theoretical attempts to extrapolate today's knowledge at mass scales of a few hundred GeV to energy scales of about $`10^{15}`$ GeV and more. A purely theoretical approach to this extrapolation has led theorists to Supersymmetry and the so-called “Minimal Supersymmetric Standard Model” (MSSM), which could solve some of these conceptual problems by introducing supersymmetric partners to every known boson and fermion and at least one additional Higgs multiplet. As these SUSY particles should have been produced abundantly shortly after the Big Bang, Supersymmetry with R–parity conservation offers a lightest stable supersymmetric particle as the “cold dark matter” candidate.
Despite the variety of SUSY models with largely unconstrained SUSY particle masses and the absence of any indication for Supersymmetry, searches for Supersymmetry at existing collider experiments, and sensitivity estimates for future ones, have become an important aspect of high energy physics.
This report is structured as follows: it starts with an overview of experimentation at high energy colliders, including some recent experimental highlights; we next discuss basic concepts of Supersymmetry and the applied search strategies. We then describe a few examples of searches with negative results at LEP II and the TEVATRON and give an outlook on future perspectives at the LHC.
## 2 Experimentation and Experiments at High Energy Colliders
The high energy frontier of particle colliders is currently covered by the $`e^+e^{-}`$ collider LEP at CERN and the proton–antiproton collider TEVATRON at FERMILAB. In contrast to $`e^+e^{-}`$ colliders, which investigate nature directly at the available center–of–mass energy $`\sqrt{s}`$, proton colliders study quark and gluon collisions over a wide range of effective center–of–mass energies $`\sqrt{s_{eff}}`$. The maximal effective $`\sqrt{s_{eff}}`$ is however, depending on the available luminosity, about a factor of 4–6 smaller than the nominal $`\sqrt{s}`$ of the hadron–hadron collisions.
The LEP collider is currently running at center–of–mass energies, $`\sqrt{s}`$, of 196 GeV and will soon reach $`\sqrt{s}\approx 200`$ GeV. The TEVATRON experiments CDF and D0 have collected data corresponding to a luminosity of about 100 pb<sup>-1</sup> per experiment at a center–of–mass energy of 1.8 TeV. The upgraded TEVATRON with its improved experiments is expected to restart running in the year 2000 at a center–of–mass energy of 2 TeV and should provide a luminosity of 1–2 fb<sup>-1</sup> per year and experiment. Around the year 2005 the LHC, CERN’s large hadron collider, is expected to come into operation. The LHC is a proton–proton collider with 14 TeV center–of–mass energy and high luminosity. It will extend the sensitivity to new physics well into the TeV mass range.
Modern collider experiments have essentially a cylindrical structure with an outer radius of 5–7 m and a length of 10–25 m. The onion-like structure of these experiments consists of:
1. Precision detectors which measure the trajectories of charged particles. These detectors are embedded in a magnetic field whose axis (the z–axis) lies along the beam direction, which allows the momentum of charged particles to be measured.
2. Electromagnetic and hadron calorimeters, which measure accurately the impact position and the energy of electrons, photons and charged or neutral hadrons, and
3. Muon detection systems, which surround the calorimeters (corresponding to several hadronic absorption lengths of material) and measure the position and direction of muons.
While today's collider experiments are constructed and operated by international collaborations of up to 500 physicists, one expects that tomorrow's LHC experiments will unite nearly 2000 physicists.
Depending only slightly on the main aim of the experiment, the detectors allow one to measure the energy and momentum of long lived charged particles ($`\pi ^\pm ,K^\pm ,p,\overline{p},e^\pm `$ and $`\mu ^\pm `$) as well as of the neutral particles $`\gamma ,K^0`$ and $`n,\overline{n}`$. These individual particles can then be combined to search for mass peaks of short lived particles, $`\tau `$ decay products and bunches of hadrons identified as jets. These jets can be separated, using their characteristic lifetime and kinematics, into light quark (u, d, and s) or gluon jets and c(harm) and b(eauty) flavoured jets. Furthermore, modern experiments have achieved essentially a $`4\pi `$ angular coverage for the various individual energy and momentum measurements, which allows the determination of the missing energy and momentum due to “invisible” neutrinos or neutrino like objects. Examples of a few interesting events, produced at LEP II and the TEVATRON, are shown in Figures 1–4. These events indicate how the original physics process can be reconstructed from the detectable particles.
### 2.1 Highlights from Collider Experiments
Despite the non-observation of physics beyond the SM, recent experiments at particle colliders have produced a large variety of impressive results. Especially remarkable are the measurements of the $`Z`$ boson parameters, with the resulting number of light neutrino families being 2.9835$`\pm `$0.0083, the discovery of the top quark at the TEVATRON, the measured cross section of the reaction $`e^+e^{-}\to WW`$ at LEP II and the observed energy dependence of the strong coupling constant $`\alpha _s`$. Some experimental results and the corresponding theoretical expectations are shown in Figures 5–7.
Combining the results of the large variety of collider measurements, one is forced to accept that the SM is at least an excellent approximation of nature. Furthermore, one starts, as shown in Figure 8, to have some indirect constraints on the Higgs boson mass. These indirect constraints come from the combination of the different measurements of electroweak observables, like the various asymmetry measurements in $`Z`$ decays and the masses of the $`W^\pm `$ and the top quark. The accuracy of this procedure is however limited, as there is only a soft logarithmic Higgs mass dependence. Nevertheless, assuming that the Higgs mass is the only unknown SM parameter, a fit to all precision data constrains the Higgs mass to $`92^{+78}_{-45}`$ GeV and, at the 95% confidence level, to less than about 245 GeV. This result agrees with Higgs mass estimates of $`160\pm 20`$ GeV, which assume the validity of the SM up to very large mass scales like the Planck scale. It agrees also with expectations from Supersymmetry with the minimal Higgs sector, where the lightest Higgs must have a mass of less than about 130 GeV.
The precise measurements of the energy dependence of the strong, electromagnetic and weak coupling parameters indicate comparable couplings at energies close to $`10^{15}`$ GeV. However, the expectation from the simplest Grand-Unification theories of a perfect matching is now excluded. Such a matching might however be achieved if some new physics, like Supersymmetry, exists at nearby mass scales. Other indications of physics beyond today's SM come from the observed “unnatural” large mass splitting between the otherwise identical fermion families, which covers at least 11 orders of magnitude, from the “large” number of free parameters within the SM, and from the exclusion of gravity.
## 3 Beyond the Standard Model of Particle Physics: Supersymmetry, SUSY Models and SUSY Signatures
Among the possible extensions of the Standard Model, the Minimal Supersymmetric Standard Model (MSSM) is usually considered to provide the most serious theoretical framework. The attractive features of this approach are:
* It is quite close to the existing Standard Model.
* It explains the so-called hierarchy problem of the Standard Model.
* It allows one to calculate.
* It predicts many new particles and thus “Nobel Prizes” for the masses.
An example of the small difference between the SM and the MSSM in terms of electroweak observables is shown in Figure 9, which compares the measurements of the $`W`$–boson and top–quark masses with the predictions of the SM and the MSSM.
Unfortunately today's data, $`M_W=80.394\pm 0.042`$ GeV and $`M_{top}=174\pm 5`$ GeV, favor an area which is perfectly consistent with both models. Similar conclusions can be drawn from other comparisons of today's precision measurements with the SM and the MSSM.
A large number of new heavy particles should exist within the MSSM. In detail, one expects spin 0 partners, called sleptons and squarks, for every lepton and quark, and spin 1/2 partners, called gluinos, charginos and neutralinos, for the known spin 1 bosons and for the hypothetical scalar Higgs bosons. Due to identical quantum numbers, some mixing among the different neutralinos and among the charginos might exist. In addition, at least 5 Higgs bosons ($`h^0,H^0,A^0`$ and $`H^\pm `$) are required. The masses of these Higgs bosons are strongly related. Essentially one needs to know “only” the mass of $`h^0`$ and one other Higgs boson, or the mass of one Higgs boson and $`\mathrm{tan}\beta `$, the ratio of the Higgs vacuum expectation values.
Experimental searches for Supersymmetry can thus be divided into a) the MSSM Higgs sector and b) the direct SUSY particle search.
The advantages of searches for a Higgs boson are that at least one Higgs boson with a mass smaller than about 130 GeV should exist and that cross sections and decay modes can be calculated accurately as a function of the mass and $`\mathrm{tan}\beta `$. The disadvantage is however that, in order to distinguish between the SM and Supersymmetry, at least two MSSM Higgs bosons need to be discovered.
In contrast to Higgs searches, searches for SUSY particles can target a variety of SUSY particles, and the discovery of a single SUSY particle could be a proof of Supersymmetry. Unfortunately the masses of SUSY particles cannot be predicted, and values far beyond today's and perhaps even tomorrow's center–of–mass energies are possible. Furthermore, with over 100 free SUSY parameters, a large variety of SUSY signatures needs to be studied.
As will be discussed in section 4.1, today's negative search results for Higgs particles at LEP II indicate that the mass of the lightest SUSY Higgs must be greater than about 80–90 GeV. The absence of any indication for supersymmetric particles at LEP II and the TEVATRON, discussed below, implies mass limits of about 90 GeV for sleptons, 95 GeV for charginos and about 200 GeV for squarks and gluinos.
### 3.1 Signatures of SUSY Particles
Essentially all signatures related to the MSSM are based on the consequences of R–parity conservation. R–parity is a multiplicative quantum number like ordinary parity. The R–parity of the known SM particles is 1, while that of the SUSY partners is –1. As a consequence, SUSY particles have to be produced in pairs. Unstable SUSY particles decay, either directly or via some cascades, to SM particles and the lightest supersymmetric particle (LSP), which is required by cosmological arguments to be neutral. Such a massive LSP should have been abundantly produced after the Big Bang and is currently considered to be “the cold dark matter” candidate.
This LSP, usually assumed to be the lightest neutralino $`\stackrel{~}{\chi }_1^0`$, has neutrino like interaction cross sections and cannot be observed directly in collider experiments. Events with a large amount of missing energy and momentum are thus the SUSY signature in collider experiments. Due to neutrinos produced in weak decays of, for example, $`\tau `$ leptons, and due to measurement errors, the missing energy and momentum signature alone is usually not sufficient to identify SUSY particles.
However, SM backgrounds can be strongly reduced if the decay kinematics of heavy particles are exploited. The decay products of heavy particles obtain a relatively large $`p_{\perp }`$ with respect to the momentum vector of the decaying particle and can thus be emitted at large angles. Consequently, the observable decay products of pair produced heavy SUSY particles should be seen in non back–to–back events. Due to the detection hole close to the beam line, missing momentum along the beam direction is also expected for standard physics reactions. SUSY searches therefore concentrate on the missing momentum in the plane transverse to the beam direction and usually require some non back–to–back signature in this x–y plane. Essentially all the characteristics of SUSY searches, large missing momentum and energy and the non back–to–back signature, are also used to select $`e^+e^{-}\to WW\to \ell ^+\nu \ell ^{-}\overline{\nu }`$ events at LEP II, as visualized in events like the one shown in Figure 3. Due to the relatively large $`W`$ mass and cross section, SUSY searches at LEP II and other colliders need to consider especially the potential backgrounds from the SM reaction $`e^+e^{-}\to WW`$.
Possible SUSY search examples which exploit the above signatures are the pair production of sleptons with their subsequent decays, $`e^+e^{-}\to \stackrel{~}{\ell}^{+}\stackrel{~}{\ell}^{-}`$ with $`\stackrel{~}{\ell}\to \ell \stackrel{~}{\chi }_1^0`$. Such events would appear as events with a pair of isolated electrons or muons with high $`p_t`$ and large missing transverse energy.
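As a rough illustration of the kinematic quantities used in such selections, the short Python sketch below (our own toy example, not code from any of the analyses discussed here; all momentum values are invented) computes the missing transverse momentum and the acoplanarity of a lepton pair from their momentum components.

```python
import math

def missing_pt(visible_momenta):
    """Missing transverse momentum: magnitude of minus the vector sum of
    the visible (px, py) components, in GeV."""
    sum_px = sum(p[0] for p in visible_momenta)
    sum_py = sum(p[1] for p in visible_momenta)
    return math.hypot(sum_px, sum_py)

def acoplanarity(p1, p2):
    """Angle (degrees) by which two particles deviate from being exactly
    back-to-back in the plane transverse to the beam (the x-y plane)."""
    dphi = abs(math.atan2(p1[1], p1[0]) - math.atan2(p2[1], p2[0]))
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.degrees(math.pi - dphi)

# Toy event: two leptons given as (px, py, pz) in GeV.  A real selection
# would also apply isolation, lepton identification and detector effects.
lepton1 = (35.0, 10.0, 20.0)
lepton2 = (-20.0, 15.0, -5.0)
print("missing pT   [GeV]:", round(missing_pt([lepton1, lepton2]), 1))
print("acoplanarity [deg]:", round(acoplanarity(lepton1, lepton2), 1))
```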
Starting from the MSSM, the so-called minimal model, one counts more than a hundred free parameters. So many unconstrained parameters do not offer good guidance for experimentalists, who prefer to use additional assumptions. Perhaps the simplest approach is the MSUGRA (minimal supergravity) model with only five free parameters ($`m_0,m_{1/2},\mathrm{tan}\beta ,A^0`$ and $`\mu `$). Within the MSUGRA model, the masses of SUSY particles are strongly related to the universal fermion and scalar masses $`m_{1/2}`$ and $`m_0`$. The masses of the spin 1/2 SUSY particles are directly related to $`m_{1/2}`$. One expects approximately the following mass hierarchy:
* $`m(\stackrel{~}{\chi }_1^0)\approx \frac{1}{2}m_{1/2}`$
* $`m(\stackrel{~}{\chi }_2^0)\approx m(\stackrel{~}{\chi }_1^\pm )\approx m_{1/2}`$
* $`m(\stackrel{~}{g})`$ (the gluino) $`\approx 3m_{1/2}`$
The masses of the spin 0 SUSY particles are related to $`m_0`$ and $`m_{1/2}`$ and allow for some mass splitting between the “left” and “right” handed scalar partners of the degenerate left and right handed fermions. One finds the following simplified mass relations (a short numerical illustration is given after the list):
* $`m(\stackrel{~}{q})`$ (with q = u, d, s, c and b) $`\approx \sqrt{m_0^2+6m_{1/2}^2}`$
* $`m(\stackrel{~}{\nu })\approx m(\stackrel{~}{\ell}^\pm )`$ (left) $`\approx \sqrt{m_0^2+0.52m_{1/2}^2}`$
* $`m(\stackrel{~}{\ell}^\pm )`$ (right) $`\approx \sqrt{m_0^2+0.15m_{1/2}^2}`$
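These relations can be put into a short numerical sketch. The following Python snippet is only our own illustration of the approximate tree-level MSUGRA relations quoted above (the coefficients are taken directly from the lists; the example point m0 = 100 GeV, m1/2 = 200 GeV is arbitrary).

```python
import math

def approx_msugra_masses(m0, m12):
    """Rough sparticle mass estimates (GeV) from the simplified MSUGRA
    relations quoted in the text; m0 and m12 are the universal scalar
    and gaugino masses in GeV."""
    return {
        "neutralino_1":            0.5 * m12,
        "neutralino_2/chargino_1": m12,
        "gluino":                  3.0 * m12,
        "squark":                  math.sqrt(m0**2 + 6.00 * m12**2),
        "slepton_left":            math.sqrt(m0**2 + 0.52 * m12**2),
        "slepton_right":           math.sqrt(m0**2 + 0.15 * m12**2),
    }

# Arbitrary example point, for illustration only.
for name, mass in approx_msugra_masses(m0=100.0, m12=200.0).items():
    print(f"{name:>25s}: {mass:6.1f} GeV")
```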
The masses of the left and right handed stop quarks ($`\stackrel{~}{t}_{\ell ,r}`$) might, depending on other parameters, show a large mass splitting. As a result, the right handed stop quark might be the lightest of all squarks.
Following the above mass relations and using the known SUSY couplings, possible SUSY decays and the related signatures can be defined. Already within the simplest MSUGRA framework one finds a variety of decay chains.
For example, the $`\stackrel{~}{\chi }_2^0`$ could decay via $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\chi }_1^0+X`$ with $`X`$ being:
* $`X=\gamma ^{*}/Z^{*}\to \ell ^{+}\ell ^{-}`$
* $`X=h^0\to b\overline{b}`$
* $`X=Z\to f\overline{f}`$
Other possible $`\stackrel{~}{\chi }_2^0`$ decay chains are $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\chi }_1^{\pm (*)}+\ell ^{\mp }\nu `$ and $`\stackrel{~}{\chi }_1^{\pm (*)}\to \stackrel{~}{\chi }_1^0\ell ^\pm \nu `$, or $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\ell}^{\pm }\ell ^{\mp }`$.
Allowing for higher and higher masses, even more decay channels might open up. It is thus not possible to define all search strategies a priori. Furthermore, possible unconstrained mixing angles between neutralinos lead to model dependent search strategies for squarks and gluinos.
## 4 Where we did not discover Supersymmetry
### 4.1 The Higgs Search at LEP II and beyond
Experiments at LEP II with $`\sqrt{s}\approx 200`$ GeV will have an excellent sensitivity to a SM Higgs with a mass of about 100–105 GeV, using the process $`e^+e^{-}\to Z^{*}\to Zh^0`$. The experimental signatures for this process are given by the combination of the various decay products of a $`Z`$ boson with the two b–jets or the $`\tau \tau `$ final state coming from the decay of the Higgs boson. Furthermore, due to kinematic constraints, the mass of the system recoiling against the $`Z`$ can be measured with an accuracy of about $`\pm `$ 2–3 GeV, and a signal should show up in the recoil mass spectrum. The latest LEP results, obtained using the 1998 data at $`\sqrt{s}=189`$ GeV and a luminosity of about 180 pb<sup>-1</sup> per experiment, do not show any signal. Only OPAL sees a two sigma excess of events with a recoil mass close to the $`Z`$ mass, as shown in Figure 10.
The observed number of events in the peak region is 31, compared to a background expectation of about 22 events. Unfortunately the excess is confirmed neither by the new OPAL data collected up to July 1999 nor by the combination of the 1998 data of the four LEP experiments, as shown in Figure 11.
For higher values of $`\mathrm{tan}\beta `$ the couplings of the $`h^0`$ to the weak bosons are reduced, proportional to $`\mathrm{cos}\beta `$. However, the reaction $`e^+e^{-}\to Z^{*}\to h^0A^0`$, if kinematically allowed, appears to be detectable. This process results in a distinct signature of events with four b–jets. The search for such 4 b–jet events during the future LEP II running will thus give sensitivity to masses of $`M_h,M_A<`$ 90–100 GeV for all $`\mathrm{tan}\beta `$ values. The possible final SM Higgs sensitivity from the 1999/2000 data taking at LEP II has been estimated to be about 105 GeV, using a luminosity of about $`4\times 200`$ pb<sup>-1</sup> at $`\sqrt{s}=200`$ GeV. One finds that this sensitivity translates into a MSSM Higgs sensitivity for values of $`\mathrm{tan}\beta `$ of roughly 6–7 (3–4) with no (maximal) mixing, using the process $`e^+e^{-}\to Z^{*}\to Zh^0`$.
Searches for Higgs bosons with masses beyond the expected LEP II sensitivity will probably have to wait for the LHC or perhaps even for a future high luminosity, high energy linear $`e^+e^{-}`$ collider.
Today's sensitivity studies from both large LHC experiments, ATLAS and CMS, indicate an excellent sensitivity to a SM Higgs boson up to masses of about 1 TeV. The sensitivity to the Higgs sector of the MSSM scenario currently appears to be somewhat restricted. For the lightest Higgs, with a mass below 120–130 GeV, the only established signature is the decay $`h^0\to \gamma \gamma `$. For masses of $`M_A`$ greater than 400 GeV, the $`h^0`$ behaves essentially like the SM Higgs and should be discovered a few years after the LHC start. For smaller masses of $`M_A`$, the branching ratio of $`h\to \gamma \gamma `$ becomes too small to be observable, and 5 standard deviation $`h^0`$ signals can only be expected from the combination of the $`h^0\to \gamma \gamma `$ search with other $`h^0`$ decay modes, like $`pp\to t\overline{t}h^0`$ with $`h^0\to b\overline{b}`$, $`h^0\to ZZ^{*}`$ and $`h^0\to WW^{*}`$. The regions where one should see 5 standard deviation MSSM Higgs signals with an integrated luminosity of 30 fb<sup>-1</sup>, about three years after the LHC start, and with a “final” luminosity of 300 fb<sup>-1</sup> are indicated in Figures 12a and b, respectively. The expected sensitivity of the LHC experiments to the other Higgs bosons of the MSSM is also indicated in Figure 12.
### 4.2 Examples of today's direct SUSY Searches
Today's searches cover a wide range of MSSM SUSY models, going from the most “conservative” MSUGRA model to more “radical” assumptions like gauge mediated models and models with R–parity violation. But so far, and despite the large variety of studied signatures, no indication for SUSY like particles has been found at LEP II, at the TEVATRON or at HERA.
To demonstrate the good experimental sensitivity it appears to be useful to discuss the actual outcome of a few SUSY searches at LEP II and the TEVATRON.
The first example is the OPAL search for acoplanar lepton pair events at LEP II. The distributions of the lepton energy scaled by the beam energy and of the reconstructed scattering angle, multiplied by the charge of the most energetic lepton, with respect to the incoming electron direction are shown in Figures 13a and b, respectively. The data are in agreement with expectations from the dominant $`WW`$ backgrounds. The possibility to discriminate potential signal events from the backgrounds using the characteristic charge dependent angular distribution is nicely seen in Figure 13b.
The second example consists of results from L3, summarized in Table 1. The observed numbers of events in the data are compared with the expected backgrounds for searches optimized for sleptons and charginos with small, medium and large mass differences between the studied SUSY particle and the lightest stable SUSY particle. Not even a two sigma excess is seen. An interpretation of the L3 results within the MSSM is given in Figure 14.
The third example is related to a search for trilepton events originating from the reaction $`q_i\overline{q}_j\to \stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^\pm `$ with leptonic decays of the neutralino and chargino. Such events can be detected from an analysis of events with three isolated high $`p_t`$ leptons and large missing transverse energy. The potential of this trilepton signature at hadron colliders like the LHC has been described in several phenomenological studies. It was found that trilepton events with jets need to be rejected in order to distinguish signal events from SM and other SUSY backgrounds.
After the removal of jet events, the only remaining relevant background comes from leptonic decays of $`WZ`$ events. Potential backgrounds from dilepton events like $`W^+W^{-}\to \ell ^+\nu \ell ^{-}\overline{\nu }`$ and from hadrons misidentified as electrons or muons are usually assumed to be negligible. Depending on the analyzed SUSY mass range, the background from leptonic decays of $`WZ`$ events, in contrast to a potential signal, will show a $`Z^0`$ mass peak in the dilepton spectrum. The results of a TEVATRON (CDF) trilepton search are shown in Table 2.
The last example is related to the famous lonely CDF event, which has large missing transverse energy, 2 high $`p_t`$ isolated photons and 2 isolated high $`p_t`$ electron candidates. The presence of high $`p_t`$ photons does not match MSUGRA expectations but might fit into so-called gauge mediated symmetry breaking models (GMSB). This event has motivated many additional, so far negative, searches. Particularly sensitive searches at LEP II come from the analysis of events with one or more energetic photons and nothing else. Such events can originate essentially only from initial state bremsstrahlung in the reaction $`e^+e^{-}\to Z\gamma \gamma `$ with the $`Z`$ decaying to neutrinos, or from neutralino pair production with subsequent decays to photons and invisible gravitinos. No excess of such events has been seen by any of the LEP experiments. These negative results essentially exclude the SUSY interpretation of the CDF event. Typical results, here from L3, for the recoil mass distribution of single and double $`\gamma `$ events and their interpretation in comparison with the area allowed by the CDF event are shown in Figures 15a-c.
### 4.3 Where we might discover SUSY particles
The data taking at LEP II will continue during the year 2000, with an expected maximal $`\sqrt{s}`$ of about 200 GeV. In contrast to the MSSM Higgs search, where a sizeable fraction of the parameter space can still be covered, the increase in the accessible SUSY particle mass range from about 90 GeV to 95 GeV appears small compared to the possible TeV scale masses of SUSY particles.
The next phase of direct SUSY searches will thus be dominated by hadron colliders. The improved TEVATRON collider is expected to start data taking during the year 2000. The expected yearly luminosity for the so-called RUN II should reach a few fb<sup>-1</sup> per experiment. This should allow the associated production $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^\pm `$ to be discovered for SUSY masses up to 130 GeV. Further improvements could come from the third phase of the TEVATRON (RUN III), which could reach chargino masses up to about 210 GeV with integrated luminosities of 20–30 fb<sup>-1</sup>. The final test of the MSSM version of Supersymmetry should come from CERN’s LHC, currently expected to start operation during the year 2005. LHC experiments are especially sensitive to strongly interacting particles with their huge production cross sections. For example, the pair production cross section of squarks and gluinos with a mass of about 1 TeV has been estimated to be as large as 1 pb, resulting in 10<sup>4</sup> produced SUSY events for one “low” luminosity LHC year. Depending on the SUSY model, a large variety of massive squark and gluino decay channels and signatures might exist. A complete search analysis for squarks and gluinos at the LHC should consider the various signatures resulting from the following decay channels.
* $`\stackrel{~}{g}\to \stackrel{~}{q}q`$ and perhaps $`\stackrel{~}{g}\to \stackrel{~}{t}t`$
* $`\stackrel{~}{q}\to \stackrel{~}{\chi }_1^0q`$ or $`\stackrel{~}{q}\to \stackrel{~}{\chi }_2^0q`$ or $`\stackrel{~}{q}\to \stackrel{~}{\chi }_1^\pm q`$
* $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\chi }_1^0\ell ^+\ell ^{-}`$ or $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\chi }_1^0Z^0`$ or $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\chi }_1^0h^0`$
* $`\stackrel{~}{\chi }_1^\pm \to \stackrel{~}{\chi }_1^0\ell ^\pm \nu `$ or $`\stackrel{~}{\chi }_1^\pm \to \stackrel{~}{\chi }_1^0W^\pm `$.
The various decay channels can be separated into at least three distinct event signatures.
* Multi–jets plus missing transverse energy. These events should be “circular” in the plane transverse to the beam.
* Multi–jets plus missing transverse energy plus n(=1,2,3,4) isolated high $`p_t`$ leptons. These leptons originate from cascade decays of charginos and neutralinos.
* Multi–jets plus missing transverse energy plus same-charge lepton pairs. Such events can be produced in reactions of the type $`\stackrel{~}{g}\stackrel{~}{g}\to \stackrel{~}{u}\overline{u}\stackrel{~}{d}\overline{d}`$ with subsequent decays of the squarks to $`\stackrel{~}{u}\to \stackrel{~}{\chi }_1^+d`$ and $`\stackrel{~}{d}\to \stackrel{~}{\chi }_1^+u`$, followed by leptonic chargino decays $`\stackrel{~}{\chi }_1^+\to \stackrel{~}{\chi }_1^0\ell ^+\nu `$.
It is easy to imagine that the observation and detailed analysis of the different types of squark and gluino signatures might allow some of the many MSSM parameters to be measured.
The above signatures have already been investigated with the data from the TEVATRON RUN I. These negative searches gave mass limits for squarks and gluinos of up to about 200 GeV. The estimated 5–sigma sensitivity for RUN II and RUN III reaches values as high as 350–400 GeV. More details about the considered signals and backgrounds can be found in the TeV2000 studies and in the ongoing TEVATRON RUN II workshop.
A simplified search strategy for squarks and gluinos at the LHC would study jet events with large visible transverse mass and some missing transverse energy. Such events can then be classified according to the number of isolated high $`p_t`$ leptons. Once an excess above SM backgrounds is observed for any possible combination of the transverse energy spectra, one would try to explain the observed types of exotic events and their cross section(s) for different SUSY $`\stackrel{~}{g},\stackrel{~}{q}`$ masses, decay modes and models. An interesting approach to such a multi–parameter analysis uses some simplified selection variables. For example, one could use the number of observed jets and leptons, their transverse energies, their mass and the missing transverse energy to separate signal and backgrounds. Such an approach has been used to perform a “complete” systematic study of $`\stackrel{~}{g}`$ and $`\stackrel{~}{q}`$ decays. The proposed variable $`E_t^c`$ is the smallest of $`E_t`$(miss), $`E_t`$(jet1) and $`E_t`$(jet2). The events are further separated according to the number of isolated leptons. Events with lepton pairs are divided into same sign (charge) pairs (SS) and opposite sign pairs (OS). Signal and background distributions for various squark and gluino masses, obtained with such an approach, are shown in Figure 16. According to this classification, the number of expected signal events can be compared with the various SM background processes. The largest and most difficult backgrounds originate mainly from $`W+`$jet(s), $`Z+`$jet(s) and $`t\overline{t}`$ events. Using this approach, very encouraging signal to background ratios, combined with quite large signal cross sections, are obtainable for a large range of squark and gluino masses. The simulation results of such studies indicate, as shown in Figure 17, that the LHC experiments are sensitive to squark and gluino masses up to about 2 TeV ($`m_{\stackrel{~}{g}}=3m_{1/2}`$) for a luminosity of 100 fb<sup>-1</sup>.
Figure 17 further indicates that detailed studies of branching ratios are possible up to squark or gluino masses of about 1.5 TeV, where significant signals can be observed in many different channels. Another consequence of the expected large signal cross sections is that the “first day” LHC luminosity of about 100 pb<sup>-1</sup> should be sufficient to discover squarks and gluinos with masses up to about 600–700 GeV, well beyond even the most optimistic TEVATRON RUN III mass range.
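A minimal sketch of the classification variables described above is given below in Python (our schematic reading of the published strategy, not the actual analysis code; the example numbers are invented): each event is characterised by $`E_t^c`$, the smallest of the missing transverse energy and the two leading jet transverse energies, and by the number and relative charges of its isolated leptons.

```python
def et_c(et_miss, jet_ets):
    """E_t^c: the smallest of E_t(miss), E_t(jet1) and E_t(jet2)."""
    leading_two = sorted(jet_ets, reverse=True)[:2]
    return min([et_miss] + leading_two)

def lepton_class(charges):
    """Classify an event by its number of isolated leptons; lepton pairs
    are split into same-sign (SS) and opposite-sign (OS) pairs."""
    if len(charges) == 2:
        return "2l SS" if charges[0] == charges[1] else "2l OS"
    return f"{len(charges)}l"

# Invented example event: missing E_t and jet E_t's in GeV, two same-sign muons.
event = {"et_miss": 320.0, "jet_ets": [450.0, 280.0, 60.0], "charges": [+1, +1]}
print("E_t^c [GeV]:", et_c(event["et_miss"], event["jet_ets"]))
print("lepton class:", lepton_class(event["charges"]))
```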
### 4.4 SUSY discovered, what can be studied at the LHC?
Being convinced of the LHC SUSY discovery potential, one certainly wants to know whether “the discovery” is consistent with Supersymmetry and whether some of the many SUSY parameters can be measured. To answer such questions one should try to find many SUSY particles and measure their decay patterns as accurately as possible. For example, one finds that the production and decay of $`\stackrel{~}{\chi }_2^0\stackrel{~}{\chi }_1^\pm `$ provides good rates for a trilepton signature if the chargino and neutralino masses are below 200 GeV. The observation of such events should allow an accurate measurement of the dilepton mass distribution, which is sensitive to the mass difference between the two neutralinos. Depending on the MSUGRA parameters used, one finds that the $`\stackrel{~}{\chi }_2^0`$ can have two or three body decays. The relative $`p_t`$ spectra of the two leptons can be used to distinguish the two possibilities.
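For reference, the standard kinematic-edge formulae (quoted here as a general illustration, not as a result derived in this report) give the endpoint of the dilepton mass distribution as the neutralino mass difference for the three-body decay, and as a combination also involving the intermediate slepton mass for the two-body chain:

$$m_{\ell \ell }^{max}=m(\stackrel{~}{\chi }_2^0)-m(\stackrel{~}{\chi }_1^0)\text{ (three body)},\qquad m_{\ell \ell }^{max}=m(\stackrel{~}{\chi }_2^0)\sqrt{1-\frac{m(\stackrel{~}{\ell })^2}{m(\stackrel{~}{\chi }_2^0)^2}}\sqrt{1-\frac{m(\stackrel{~}{\chi }_1^0)^2}{m(\stackrel{~}{\ell })^2}}\text{ (two body, via }\stackrel{~}{\ell }\text{)}.$$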
In contrast to the rate limitations of weakly produced SUSY particles at the LHC, detailed studies of the clean squark and gluino events are expected to reveal much more information. One finds that the large rate for many distinct event channels allows masses and mass ratios to be measured for several SUSY particles, which are possibly produced in cascade decays of squarks and gluinos. Many of these ideas have been discussed at a 1996 CERN Workshop. Especially interesting appears to be the idea that the $`h^0`$ might be produced and detected in the decay chain $`\stackrel{~}{\chi }_2^0\to \stackrel{~}{\chi }_1^0h^0`$ with $`h^0\to b\overline{b}`$. The simulated mass distribution for $`b\overline{b}`$ jets in events with large missing transverse energy is shown in Figure 18. Clear Higgs mass peaks above background are found for various choices of $`\mathrm{tan}\beta `$ and $`m_0,m_{1/2}`$.
## 5 Summary
Searches for Supersymmetry at the highest energy particle colliders can be divided into the search for the MSSM Higgs sector and the direct search for SUSY particles. So far no sign of either a Higgs boson or any SUSY particle has been found.
The expected energy increase of the LEP II collider during the year 2000 might be just right to detect a Higgs boson up to a mass of about 105 GeV. In contrast, it appears that the LEP II experiments have reached almost the kinematical limit for the direct detection of supersymmetric particles as only marginal improvements, about 5%, can be expected from the future LEP II running.
The future high luminosity running of the TEVATRON might improve the existing sensitivity to SUSY particles by a factor of about 1.5–2 compared to today's mass limits. Consequently, charginos might be seen up to masses of 200 GeV and squarks and gluinos up to masses of about 400 GeV. This reach should be compared with the expectations for the future LHC experiments. ATLAS and CMS studies indicate a good sensitivity up to masses of about 2 TeV. In addition, the detectable LHC squark and gluino cross sections, even for moderate masses well above any possible TEVATRON limit, are huge, and LHC SUSY discoveries might be possible even with a luminosity of only a few hundred pb<sup>-1</sup>, obtainable almost immediately after the LHC switch-on.
To finish this report on “Searches for Supersymmetry” we would like to quote a few authorities:
“Experiments within the next 5–10 years will enable us to decide whether Supersymmetry, as a solution to the naturalness problem of the weak interaction is a myth or reality” H. P. Nilles 1984
“One shouldn’t give up yet” …. “perhaps a correct statement is: it will always take 5-10 years to discover SUSY” H. P. Nilles 1998
“Superstring, Supersymmetry, Superstition” Unknown
“New truth of science begins as heresy, advances to orthodoxy and ends as superstition” T. H. Huxley (1825–1895).
Acknowledgments I would like to thank the organizers of the Colloque Cosmologie for the possibility to review “Supersymmetry searches with existing and future collider experiments”. This invitation gave me the opportunity to listen to many interesting presentations about the status and open questions of cosmology.
no-problem/9907/astro-ph9907382.html | ar5iv | text
# The Molecular Gas in the Circumnuclear Region of Seyfert Galaxies
Based on observations carried out with the IRAM Plateau de Bure Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).
## 1 INTRODUCTION
Molecular gas in the circumnuclear regions of nearby Seyfert galaxies can now be studied via mm-interferometry at sub-arcsecond resolution and high sensitivity. In the standard unified scheme, a torus of dense molecular gas and dust surrounding the AGN and its accretion disk (see Peterson 1997 for an overview) determines whether the source is seen as a Seyfert 1 or a Seyfert 2, depending only on whether the view onto the central engine is blocked by the torus or not.
Two questions which can be addressed by mm-interferometric observations arise from this picture: (1) On which scales and how much is the molecular gas contributing to the obscuration? (2) What is the transport mechanism which brings the molecular gas down to the AGN?
We have chosen NGC 3227 and NGC 1068 as representative Seyfert 1 and Seyfert 2 templates to address these issues through a detailed analysis of the molecular gas. An extensive description of the results for both sources will be given in two forthcoming papers (Schinnerer et al. 1999 and Schinnerer, Eckart, Tacconi 1999; see also Schinnerer 1999). NGC 3227 (D = 17.3 Mpc, group distance; Garcia 1993) is of type 1.2 (Osterbrock 1977) and contains a large amount of molecular gas in its central region (Meixner et al. 1990). It has an ionization cone mapped with HST in the \[O III\] line emission (Schmitt & Kinney 1996).
NGC 1068 (D = 14 Mpc; Bland-Hawthorn et al. 1997) is the archetypical Seyfert 2 galaxy. Antonucci & Miller (1985) observed emission line widths typical of the BLR lines of Seyfert 1’s in the polarized optical emission of this galaxy. In addition to bright molecular spiral arms at r $`\approx `$ 1 kpc, there is also molecular gas observed in the nuclear region (Jackson et al. 1993, Tacconi et al. 1994, Helfer & Blitz 1995, Tacconi et al. 1997, Baker & Scoville 1998). A prominent ionization cone is traced in the \[O III\] line emission (Macchetto et al. 1994).
## 2 OBSERVATIONS AND RESULTS
In both galaxies the <sup>12</sup>CO (2-1) line emission was observed in the winter of 1996/1997 using the IRAM millimeter interferometer (PdBI) on the Plateau de Bure, France, in its AB configuration. The five antennas were positioned in three (NGC 3227) and two (NGC 1068) different configurations, providing 30 and 20 baselines, respectively. The angular resolution is about 0.6” for NGC 3227 and about 0.7” for NGC 1068. Further details of the observations will be given in Schinnerer, Eckart, Tacconi (1999) and Schinnerer et al. (1999).
As shown in Fig. 1a and 1b, the nuclear <sup>12</sup>CO (2-1) emission in both galaxies has a ring-like distribution. Fig. 2 and Fig. 3 show position velocity diagrams taken along or close to the kinematic axes of NGC 3227 and NGC 1068, respectively. The extreme velocities (indicated by arrows), seen for the first time in the molecular line emission of both galaxies, indicate rising rotation curves towards the smallest radial separations and large enclosed masses of about 2$`\times `$10<sup>7</sup> M<sub>⊙</sub> for NGC 3227 and about 10<sup>8</sup> M<sub>⊙</sub> for NGC 1068 (see captions of Fig. 2 and Fig. 3). The general gas motion in NGC 3227 can be described by a rotating disk for radii $`>`$ 1”. For this region we obtained a rotation curve using the GIPSY routine ROTCUR. For NGC 1068 a lower limit to the rotation velocity at each radius $`\le `$ 12” (well within the spiral arms at r $`\approx `$ 15”) was obtained by averaging the extreme measured velocities on opposite sides of the nucleus, independent of position angle. In the inner few parsecs the rotation curve was replaced by Keplerian rotation velocities corresponding to the estimated enclosed masses.
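As a quick consistency check on numbers of this kind, the enclosed mass implied by a circular velocity v at radius r follows from Keplerian dynamics, M(&lt;r) = v<sup>2</sup> r / G. The Python sketch below is our own illustration; the velocity and radius are placeholder values, not the measured ones.

```python
G = 4.302e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def enclosed_mass(v_kms, r_kpc):
    """Keplerian enclosed mass M(<r) = v^2 r / G, in solar masses."""
    return v_kms**2 * r_kpc / G

# Placeholder numbers: a rotation velocity of 150 km/s at r = 25 pc
# corresponds to an enclosed mass of about 1.3e8 solar masses.
print(f"M(<r) ~ {enclosed_mass(150.0, 0.025):.2e} M_sun")
```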
## 3 THE MODELING
The observed position-velocity data cubes were modeled using three dimensional gas orbits, translating the motion along the line-of-sight axis onto the spectral axis. The orbits representing gas motion are not self-intersecting and do not have strong cusps, since these lead to clumping and dissipation of kinetic energy, and therefore result in unstable orbits (e.g. Friedli & Martinet 1993). Under these assumptions, two ways to approximate the gas motions in the circumnuclear region are: (a) planar elliptical orbits and (b) tilted circular orbits. In case (a) the elliptical orbits resemble the two main $`x_1`$- and $`x_2`$-families which are present in bar potentials (see review by Sellwood & Wilkinson 1993). At the position of resonances the stars flip from one family to the other, whereas the gas smoothly follows this change. This behavior can be mimicked by ellipses with changing eccentricities and position angles (see for example Fig. 7 in Telesco & Decher 1988). In case (b) the tilted circular orbits form a precessing warp in the gas disk. Such warps are quite common in HI disks in the outer regions of galaxies (see review by Binney 1992), and are also observed in the accretion disks around AGN (e.g. NGC 4258, Miyoshi et al. 1995). We neglect radiative transfer processes, assuming that, due to the large nuclear velocity dispersions, the bulk of the molecular gas is not strongly affected by self-absorption.
The model subdivided the disk into many single (circular or elliptical) orbits of molecular gas. For the modeling the inclination, position angle and shape of the rotation curve for each host galaxy were held fixed. Each fitting process was started at large radii and successively extended towards the center. For each case we tried several start set-ups that all converged to similar (best) solutions with mean deviations from the data of less than about 10 km/s and 0.1” for each velocity and radius in the pv-diagrams and 10<sup>o</sup> in the position angle of the mapped structures.
To model the warp we followed the method of Tubbs (1980; see also Quillen et al. 1992). In this approach the warp is produced by a smooth variation of the tilt $`\omega (r)`$ of each orbit relative to the plane of the galaxy and of the precession angle $`\alpha (r)`$. A torque acting on an orbit with a circular velocity $`v_c(r)`$ introduces a precession rate $`d\alpha /dt=\xi v_c/r`$. After a time $`\mathrm{\Delta }t`$ one obtains $`\alpha (r)=\xi \mathrm{\Omega }\mathrm{\Delta }t+\alpha _0`$. Here $`\alpha _0`$ is a constant, $`\mathrm{\Omega }=v_c/r`$, and $`\xi `$ is given by the acting torque. We considered for our analysis models with constant $`\xi \mathrm{\Delta }t`$ and assume the molecular gas to be uniformly distributed.
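To make the geometry explicit, a minimal numerical sketch of such a tilted-ring (warp) model is given below. This is only our simplified illustration, not the fitting code used for the data: the tilt law, the value of $`\xi \mathrm{\Delta }t`$ and the rotation velocity are arbitrary choices. Each ring of radius r is tilted by $`\omega (r)`$, precessed by $`\alpha (r)=\xi \mathrm{\Omega }\mathrm{\Delta }t+\alpha _0`$ with $`\mathrm{\Omega }=v_c/r`$, and the line-of-sight velocity follows from projecting the circular motion.

```python
import numpy as np

def warp_ring(r, v_c, tilt, alpha, incl, n=360):
    """Sky positions (x, y) and line-of-sight velocities of one circular
    ring of radius r, tilted by 'tilt' and precessed by 'alpha' relative
    to the galaxy plane, which is itself inclined by 'incl' to the sky."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pos = np.array([r * np.cos(phi), r * np.sin(phi), np.zeros(n)])
    vel = np.array([-v_c * np.sin(phi), v_c * np.cos(phi), np.zeros(n)])
    # tilt about x, precess about z, then incline the whole system to the sky
    for axis, ang in (("x", tilt), ("z", alpha), ("x", incl)):
        c, s = np.cos(ang), np.sin(ang)
        rot = (np.array([[1, 0, 0], [0, c, -s], [0, s, c]]) if axis == "x"
               else np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]))
        pos, vel = rot @ pos, rot @ vel
    return pos[0], pos[1], vel[2]          # x, y on the sky, v along the line of sight

# Toy warp: tilt and precession grow inward, flat rotation curve of 150 km/s.
v_c, xi_dt, alpha0, incl = 150.0, 0.005, 0.3, np.radians(45.0)
for r in (1.0, 0.5, 0.25):                 # arbitrary radii (arcsec)
    alpha = xi_dt * (v_c / r) + alpha0     # alpha(r) = xi * Omega * dt + alpha_0
    x, y, v_los = warp_ring(r, v_c, np.radians(20.0 / r), alpha, incl)
    print(f"r = {r:4.2f}: v_los range = [{v_los.min():7.1f}, {v_los.max():7.1f}] km/s")
```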
For the bar approach we fitted the orbital eccentricity $`ϵ(r)=b(r)/a(r)`$ ($`b`$ is the minor axis, $`a`$ is the major axis) and the position angle $`PA_{ellipse}(r)`$ as curves varying smoothly with radius, under the constraint that the orbits do not overlap with each other. The orbits lie in a plane and produce velocity and density distributions similar to those of the bar models of Athanassoula (1992).
The fitting was done on host galaxy kinematic major and minor axes pv-diagrams and checked on the overall intensity map and the velocity field. Here we concentrate on the central few arcseconds and present the results for the two galaxies by first describing the essential properties of the pv-diagrams and then giving an outline of the best bar and warp models.
### 3.1 NGC 3227
For NGC 3227 two representative pv-diagrams are shown in Fig. 2. They are taken along the major kinematic axis (PA 158<sup>o</sup>) and along a position angle of PA 40<sup>o</sup>, consistent with the spatial extent of the nuclear emission. Along the major axis the drop of the rotational velocity can be followed down to r $`\approx `$ 0.5”. For smaller radii an apparent counter rotation is observed between 0.2” $`\le r\le `$ 0.5”. For even smaller radii a second flip back to the original sense of rotation is detected. These changes in rotation form an S-shape in the inner 1” of the pv-diagram. At PA 40<sup>o</sup> (close to the kinematic minor axis) a similar behavior is seen, with the exception that the change in the sense of rotation already occurs at $`r\approx `$ 0.6”. East of the dynamical nucleus this pv-diagram clearly shows an enhancement of emission which is poorly reproduced by the models, since it is not axisymmetric.
The bar approach: Fig. 2 shows that we are not able to account for the observed amount of counter rotation along the kinematic major axis, and especially along PA 40<sup>o</sup> close to the kinematic minor axis. Also, in both pv-diagrams the S-shape in the inner 1” is not at all reproduced. Combined with the fact that high resolution NIR observations (Schinnerer, Eckart, Tacconi 1999, Schinnerer 1999) show no evidence for a nuclear bar, this suggests that the observed structure and kinematics of the inner 1” are not well represented by the motion of gas in a nuclear bar potential. However, the circumnuclear ring ($`r\approx `$ 1.5”) may well be due to an ILR of an outer structure.
The warp approach: Fig. 2 shows the fits of the best warp model to both pv-diagrams. The model reproduces the observed S-shaped changes in the rotational sense in both pv-diagrams, as well as the observed intensity distribution in the inner 1” (Fig. 1a). This success in reproducing both the kinematics and the source structure suggests that a warped disk, rather than an inner bar, is a more realistic description of the molecular gas in the central 70 pc of this galaxy.
### 3.2 NGC 1068
The kinematic major axis of NGC 1068 lies along a position angle of PA $`\approx `$ 110<sup>o</sup>, and the position-velocity cut along this PA is remarkably symmetric (Fig. 3). The large scale disk emission is observed at radii $`\ge `$ 2”. A large velocity dispersion, which even rises above the rotational velocity, is seen at $`r=`$ 1”. Finally, we also observe emission at the systemic velocity, which is detected at radii $`\le `$ 0.9” and in a ridge which connects the two high dispersion points at $`r=`$ 1”.
The bar approach: The best bar model reproduces the overall shape of the pv-diagram. However, Fig. 3 shows that although the high velocity dispersion at 1” can be fitted, this model fails to produce the rise in velocity relative to the neighboring disk emission and the emission ridge. The bar model requires highly elliptical orbits inside the ring, thus making the interpretation as an ILR of the 2 kpc stellar NIR bar (Scoville et al. 1988) very unlikely, since it is expected that the orbits become more circular inside the ILR (e.g. Athanassoula 1992). We regard the bar model as a satisfactory but not good fit to the data, especially given the fact that the nuclear stellar cluster as mapped by Thatte et al. (1997) shows no indication for a separate bar-like structure that might induce the highly elliptical orbits.
The warp approach: The best warp model is displayed in Fig. 3. All observed kinematic features are reproduced by this model: the disk emission, the high dispersion plus rise in velocity, the emission at the systemic velocity, as well as the ridge between the high dispersion peaks. We thus favor a warped disk over a nuclear bar to describe the gas distribution and motion in NGC 1068 at r $`\le `$ 1.5”.
In this model the sudden increase in the observed projected velocity, as well as the high velocity dispersion at 1”, is due to molecular gas forming an edge-on disk at this radial distance (see Fig. 4). Such a disk is in agreement with the observed orientation of the NIR/MIR polarization vectors (Young et al. 1996, Lumsden et al. 1999) and the extinction band across the nuclear region (Catchpole & Boksenberg 1997). The three-dimensional geometry of the warp also naturally provides a cavity for the ionization cone, consistent with the observed orientation (see Fig. 4).
## 4 CONSEQUENCES AND IMPLICATIONS
The warp model provides the better fit to the kinematics of the molecular gas in the circumnuclear regions of both studied Seyfert galaxies (as outlined above). The bar model might work if one allows for high streaming motions providing further velocity modulation in the pv-diagrams. This would, however, favor stronger nuclear bars which are not indicated by high resolution NIR observations (Thatte et al. 1997, Schinnerer, Eckart, Tacconi 1999, Schinnerer 1999).
The fits to the data imply that even at radii as small as $`r\approx `$ 75 pc the gas stays in a thin disk with low velocity dispersion ($`<`$ 30 km/s), while the stars show an almost spherical distribution at these distances. The magnitude of the torque estimated from the warp model (Sparke 1996) implies that the most likely cause for such a warp of this thin gas layer is a torque induced by the gas pressure of the ionization cones in both galaxies. An important future test will be a comparison of the molecular gas kinematics in galaxies with and without ionization cones. Alternatively, as a transient phenomenon, complexes of molecular clouds which do not participate in the overall gas motion can also induce a torque on the gas disk.
Since the observed features are symmetric with respect to the dynamical center, a scenario in which the source structure is dominated by randomly distributed molecular cloud complexes also appears to be very unlikely. Our results suggest that future theoretical studies of molecular gas dynamics in the circumnuclear regions will require 3 dimensional modeling.
The molecular gas being distributed in a thin, warped disk can have important consequences for AGN obscuration (and the structure and evolution of the NLR). Such a disk when viewed edge-on - as in the case of NGC 1068 - can effectively hide the Seyfert nucleus from direct view. This conclusion is supported by other recent studies of samples of Seyfert galaxies which find that the molecular gas and dust at distances of about 100 pc can play a significant role in the classification of Seyfert galaxies (Malkan et al. 1998, Cameron et al. 1993).
Our results also indicate that nuclear bars are not necessarily the primary fueling mechanism for AGN activity, since we can describe the gas distribution very well by a uniform warped disk. This is in agreement with data presented by Regan & Mulchaey (1999), who did not find evidence for strong bars in their Seyfert sample. As the ionization cone is likely to cause the warp, this would then also provide the connection between the AGN and the inner part of its host galaxy. Supported by these statistical findings, our results imply that the AGN and its host galaxy are not discrete systems but are naturally linked to each other.
We would like to thank the IRAM Plateau de Bure staff for carrying out the observations and the staff at IRAM Grenoble, especially R. Neri and D. Downes, for their help during the data reduction. For fruitful and stimulating discussions we thank A. Baker, D. Downes, P. Englmaier, R. Genzel, O. Gerhard, A. Quillen, N. Scoville and L. Sparke.
no-problem/9907/astro-ph9907334.html | ar5iv | text
# Eclipsing Binaries in the OGLE Variable Star Catalog. IV. The Pre-Contact, Equal-Mass Systems
## 1 INTRODUCTION
This paper is a continuation of the analysis of the eclipsing binaries detected by the OGLE microlensing project in the nine central fields of Baade’s Window (BWC to BW8) toward the Galactic Bulge (Udalski et al. 1994, 1995a, 1995b). The three previous papers of this series addressed the properties of contact binaries which are the most common type in the sample of 933 eclipsing systems in Baade’s Window (BW). The first paper (Rucinski 1997a = R97a) showed that – due to their high frequency of incidence – the contact systems of the W UMa-type can be useful distance indicators along the line of sight all the way to the Galactic Bulge and that they belong to the old galactic disk population. The second paper (Rucinski 1997b = R97b) discussed the light curves of those systems. The light-curve amplitude distribution strongly suggested a mass-ratio distribution steeply climbing toward low mass-ratios (i.e. unequal masses). The systems with unequal temperatures of components, which are seen in the contact-binary sample as a small admixture at the level of 2 percent, in their majority are not poor-thermal-contact systems but rather semi-detached binaries with matter flowing from the hotter, more massive component and forming an accretion hot spot on the cooler companion. The third paper (Rucinski 1998a = R98a) dealt with contact systems with orbital periods longer than one day. The W UMa-type sequence was shown to continue up to orbital periods of 1.3 – 1.5 days, and then sharply terminate in this period range. The results of the three previous papers, R97a, R97b, R98a, were re-discussed in a more general comparison of the contact binaries of the Galactic Disk in the BW sample with those in old open clusters (Rucinski 1998b = R98b). It was found that the luminosity function for the contact binaries is very similar in shape to that for the solar neighborhood main-sequence (MS) stars, implying a flat apparent frequency-of-occurrence distribution. In the accessible interval $`2.5<M_V<7.5`$, the apparent frequency of contact binaries relative to MS stars was found to be about 1/130 – 1/100. The resulting spatial (inclination-corrected) frequency of some 1/80 (with a combined uncertainty of about $`\pm 50`$ percent) implies a well-defined and high peak in the orbital period distribution, well above the period distribution for MS binaries by Duquennoy & Mayor (1991), extrapolated to periods shorter than one day. This peak most probably results from piling-up of short-period binaries as they lose angular momentum and form relatively long-lived contact systems.
This paper is an attempt to extend the analysis of the eclipsing systems discovered by OGLE into the pre-contact domain. In utilizing the observed period distributions, it is logically related to the studies on the orbital period evolution of tidally-locked late-type main-sequence binaries by angular momentum loss (AML) that were published in a series of papers by Maceroni and Van’t Veer (1989, 1991 = MV91; see also Maceroni (1999) for an update and further references). In particular, a functional form of the AML rate was derived in MV91 and Maceroni (1992) by fitting the observed period distribution of field binaries. The results nicely confirmed the need for a braking mechanism which is weakly rotation-dependent (or, perhaps, totally independent of the rotation rate) for fast rotators, but the conclusions from these analyses suffered from the unavoidable inhomogeneity of the all-sky sample. A subsequent study by Stȩpień (1995) arrived at basically the same results through a very different route, via specific assumptions on the efficiency of the magnetic-field generation and an analysis of the homogeneous (but very small) sample of the Hyades binaries. Attempts to relate these theoretical predictions to the statistics of short-period binaries have so far encountered severe limitations related to the smallness of the samples. We note that early indications that the orbital period evolution at very short periods is slower than initially expected, based on bright field-star binary statistics, were presented by Rucinski (1983), but suffered as well from low-number statistical uncertainty. The currently on-going microlensing surveys offer, for the first time, rich and homogeneous samples of binaries to overcome those limitations.
This paper uses the sample of eclipsing binaries observed by the OGLE project in the direction of Baade’s Window. The sample is of moderate size (933 systems) by the rapidly-evolving standards of the microlensing projects, yet it remains the only widely-available sample of that type. As we explain in Section 3, the discussion is limited – by necessity – to eclipsing binaries with almost equally-massive components. This again limits the size of the available sample. Although the results are tentative, we decided to present them for completeness and as guidance for future, larger surveys.
Section 2 very briefly summarizes the expected trends for the pre-contact domain, while Section 3 contains the definition of the sample and its properties. Section 4 presents an analysis of the observed period distribution. The last section, Section 5, summarizes the main results of the paper.
## 2 ANGULAR-MOMENTUM-LOSS EVOLUTION OF CLOSE BINARY SYSTEMS
The evolution of a close binary system crucially depends on its orbital period. When the period is short enough for an effective tidal synchronization, the angular momentum lost by the individual components through the action of a magnetized wind is extracted from the orbit. The orbital separation shrinks and the components rotate progressively faster, possibly draining even more angular momentum from the orbit. Eventually, a contact system forms as a penultimate stage before merging of the components and formation of a single star. This general description was first suggested by Van’t Veer (1979), later developed by Vilhu (1982), and then explored in more detail by Maceroni and Van’t Veer (1991) and by Stȩpień (1995).
The implications of the angular momentum loss (AML) evolution are most obviously noticeable in the orbital period distribution. The crucial quantities here are the moment of inertia of the layers effectively braked during this process and the rate of the AML. The dependence of the AML rate on the rotation period, $`P_{rot}`$, or on the stellar angular velocity of rotation, $`\omega =2\pi /P_{rot}`$, is frequently called the braking “law”, and is written as variants of $`\dot{\omega }=\dot{\omega }(\omega )`$ or $`\dot{P}_{rot}=\dot{P}_{rot}(P_{rot})`$. Frequently, rigid body rotation of the whole star is assumed as this simplifies derivation of the moment of inertia of the star. The detailed models of MV91 show that the tidal synchronization mechanisms operate on such a short time scale compared to the AML one, that the hypothesis of perfect synchronization is fully justified after a few million years of evolution, so that one can write $`P_{rot}=P_{orb}\equiv P`$, dropping the suffixes.
The braking law is usually expressed in a parametric form of the type $`\dot{\omega }=\mathrm{const}\cdot \omega ^\alpha `$, with $`\alpha =3`$ for the well-known Skumanich relation (Skumanich 1972), which is known to be valid for slowly-rotating, single, solar-type stars. In a perfect synchronization regime the angular momentum loss by magnetic braking, $`\dot{H}\propto \omega ^\alpha \propto P^{-\alpha }`$, is equal to the decrease of the orbital angular momentum, $`\dot{H}_{orb}\propto P^{-2/3}\dot{P}`$, so that the rate of change of the orbital period is $`\dot{P}\propto P^{2/3-\alpha }`$. The period distribution is then expected to be dependent on time, with the population of the period-distribution bins directly related to the shape of the initial distribution and to the time scale of period change.
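For readers who wish to trace the scalings quoted above, a short derivation is given below (a sketch, assuming circular orbits and perfect synchronization, and using only Kepler’s third law):

```latex
% Sketch: Kepler's third law gives a \propto P^{2/3}, hence for a circular orbit
% H_{orb} = \mu\sqrt{G M_{tot}\,a} \propto P^{1/3}.  Equating \dot H_{orb} to the
% magnetic-braking rate \dot H \propto \omega^{\alpha} \propto P^{-\alpha}:
\[
  \dot H_{orb} \propto P^{-2/3}\,\dot P \propto P^{-\alpha}
  \quad\Longrightarrow\quad
  \dot P \propto P^{2/3-\alpha},
  \qquad
  \tau \equiv \bigl|P/\dot P\bigr| \propto P^{\alpha+1/3}.
\]
```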
The initial period distribution is poorly known in the short-period range of interest here. Current assumptions are extrapolations towards short periods either of the flat $`\mathrm{log}P`$ distribution of Abt and Levy (1976) (as for instance in MV91), or of the more recent one by Duquennoy and Mayor (1991 = DM91), which was derived for a relatively unbiased sample. Sometimes a short-period cut-off has been introduced, as in Stȩpień (1995). The DM91 distribution has the shape, in the logarithm of the period, of a wide Gaussian with a maximum at $`\mathrm{log}P=4.8`$ and $`\sigma _{\mathrm{log}P}=2.3`$ (with the period $`P`$ expressed in days); it can be approximated, within the range $`0<\mathrm{log}P<1`$, by $`N(\mathrm{log}P)\propto P^{0.35}`$.
The simplest case of evolution of the period distribution is that for a sample of systems all formed at the same time $`t_0`$. If the initial distribution is $`f_0(\mathrm{log}P)`$, the evolved one at a later time $`t_{*}`$ (e.g. the present time of observation), $`f_{*}(\mathrm{log}P)`$, results from the requirement of the constancy of the total number of the systems. The implied transformations of both $`f_0(\mathrm{log}P)`$ and the bin size $`\mathrm{d}\mathrm{log}P(t_0)`$ are related through:
$$f_{*}(\mathrm{log}P)=f_0(\mathrm{log}P)\left|\frac{\mathrm{d}\mathrm{log}P(t_0)}{\mathrm{d}\mathrm{log}P(t_{*})}\right|=f_0(\mathrm{log}P)\frac{\tau _{*}}{\tau _0}$$
(1)
which incorporates the period evolution from the time $`t_0`$ to $`t_{*}`$: $`P_{*}=P_{*}(P_0)`$. For brevity, we use $`P_n`$ to signify $`P(t_n)`$. The quantity $`\tau =|P/\dot{P}|`$ is the timescale of the period evolution. Thus, from Eq. 1 we see that the present period distribution function $`f_{*}(\mathrm{log}P)`$ is proportional to the ratio of the present and the initial time-scales (i.e. rapid evolution will locally deplete the distribution). Obviously, in the more realistic case of a time-independent formation process extending over some time interval, the present period distribution would result from an integration of the right side of Eq. 1 over the whole formation interval.
The systems of our sample cannot be considered strictly coeval: the present population of each bin presumably consists of binaries of somewhat different age reaching the relevant bin by means of the AML occurring since their formation. On the other hand, the stars in Baade’s Window in their majority probably belong to a relatively old population, possibly older than $`5`$ Gyr (Ng et al. 1996, Kiraga et al. 1997), so that products of recent formation events are quite unlikely to be seen there. In that hypothesis, and according to all the numerical models of period evolution we mentioned before, the systems presently observed as pre-contact binaries come from an initial period range where the time-scale of period change was very long and weakly time dependent for the first few Gyr of evolution. (Such a case corresponds to the period evolution functions being nearly straight and parallel to the time axis, as shown for instance in Figures 4 and 7 of MV91.) As a consequence the result of the integration over the formation time will be proportional to $`f_0\tau _{*}/\tau _0`$, where $`\tau _0`$ will be a mean over the formation time. As long as we study only the shape of the period distribution we can just use the simpler expression in Equation 1.
According to the power law expressing the rate of orbital period change, the period evolution function relating $`P_0`$ to $`P_{*}`$ can be written as:
$$P_0=P_{*}\left[1+\left(\alpha +1/3\right)\frac{T}{\tau _{*}}\right]^{1/(\alpha +1/3)}$$
with $`T=t_{*}-t_0`$ and with $`\tau _{*}`$ being the present time-scale of period evolution. The present period distribution obtained from the initial distribution $`f_0(\mathrm{log}P)\propto P_0^\beta `$ will be:
$$f_{*}(\mathrm{log}P)\propto P^\beta \left[1+\left(\alpha +1/3\right)\frac{T}{\tau _{*}}\right]^{\frac{\beta -\alpha -1/3}{\alpha +1/3}}$$
(2)
Figure 1 shows two examples of the evolution of an initial distribution assumed to be a power law $`f(\mathrm{log}P)\propto P^\beta `$ with $`\beta =0.35`$ (the local fit of DM91), for two values of $`\alpha `$, 1.49 and 3; the first value corresponds to a local power-law fit of the Stȩpień (1995) braking relation ($`0.5<\mathrm{log}P<1.0`$), the second is the Skumanich (1972) value. In both cases identical solar-mass components were assumed.
The initial period ranges were assumed to be different for each panel; they were adjusted to correspond to intervals of initial periods that – after 8 Gyr – could populate the distribution down to 0.3 days, the approximate value of the period for contact systems consisting of solar components. In the same way only the parts of the intermediate-age distributions that finally spread down to that lower boundary are shown. As can be seen from Equation 2 the evolving distribution, in a log–log plot, gradually changes its slope from the initial $`\beta `$ to the asymptotic value of $`\alpha +1/3`$. The speed of the process depends on the value of the time-scale $`\tau `$ (and not just on its slope with $`\mathrm{log}P`$), which in turn depends on the stellar parameters. Again, for illustrative purposes in Figure 1, we have used values derived from the Stȩpień and the Skumanich predictions. Writing $`\tau =kP^{\alpha +1/3}`$, the multiplicative constant $`k`$ turns out to be $`k\simeq 5.2`$ and $`k\simeq 0.5`$, respectively, for the Stȩpień and the Skumanich models ($`\tau `$ expressed in Gyr and $`P`$ in days). The much shorter value of $`\tau `$ for the extrapolation of the Skumanich law to short periods gives a very rapid transition of the slope to the asymptotic value, and relatively long initial periods. However, even with the reduced braking efficiency at short periods of the Stȩpień law, the short-period part of the distribution very soon loses memory of the initial period distribution. We expect therefore a relatively old population of binaries, such as that of Baade’s Window, to contain in the short-period range of Figure 1 information on the time-scale dependence of $`P`$ rather than any vestiges of the initial distribution.
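The behavior described above can be checked with a few lines of code; the following sketch (our own illustration, not the code used for Figure 1) evaluates Equation 2 with $`\beta =0.35`$ and the two $`(\alpha ,k)`$ pairs quoted in the text, and prints the local logarithmic slope of the evolved distribution at a short and a long period:

```python
# A minimal numerical sketch of the evolved period distribution of Eq. (2).
# Assumed here: beta = 0.35 (DM91 local fit), tau = k * P**(alpha + 1/3) with
# (alpha, k) = (1.49, 5.2) or (3.0, 0.5) (tau in Gyr, P in days), and a single
# burst of formation T = 8 Gyr ago.
import numpy as np

def evolved_distribution(P, alpha, k, T=8.0, beta=0.35):
    """Present-day f(log P), arbitrary normalization, from Eq. (2)."""
    tau = k * P ** (alpha + 1.0 / 3.0)            # present time-scale of period change
    expo = (beta - alpha - 1.0 / 3.0) / (alpha + 1.0 / 3.0)
    return P ** beta * (1.0 + (alpha + 1.0 / 3.0) * T / tau) ** expo

P = np.logspace(np.log10(0.3), 1.0, 400)          # 0.3 to 10 days
for alpha, k in [(1.49, 5.2), (3.0, 0.5)]:
    f = evolved_distribution(P, alpha, k)
    slope = np.gradient(np.log10(f), np.log10(P))  # local d log f / d log P
    # the slope runs from ~beta at long periods to ~alpha + 1/3 at the short end
    print(f"alpha = {alpha}:  slope(0.4 d) = {slope[np.argmin(abs(P - 0.4))]:+.2f},"
          f"  slope(8 d) = {slope[np.argmin(abs(P - 8.0))]:+.2f}")
```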
If the braking law were indeed as steep as implied by the Skumanich relation of Figure 1 ($`\alpha =3`$), the AML evolution would progressively accelerate with a rapid shortening of $`P_{rot}=P_{orb}`$, as the time scale would decrease as $`\tau \propto P^{10/3}`$, and there would be practically no very short-period, synchronized binaries. For the solar-type dwarfs of spectral types FGK, the region around orbital periods of about 0.3 to 2 days would become quickly “evacuated”. However, there are many indications that $`\alpha `$ is definitely smaller than the Skumanich value for fast rotators, and that the braking law may, in fact, “saturate”, converging to a constant. Then, the rate of the evolution would still depend on the period, but with a much shallower slope ($`N\propto \tau \propto P^{1/3}`$ for the limiting value of $`\alpha =0`$) and longer $`\tau `$.
The strongest dependence on the AML efficiency is in any case expected to take place at the very short end of the period distribution. Presumably some temporary increase in the number of very short-period systems that formed with longer periods may occur there, but – over time – the distribution will reflect the AML efficiency as it evolves into the steady state determined by the braking time-scale. At the very short-period end one expects a prominent effect of truncation as the close detached systems become converted into contact binaries. The final piling-up in the contact binary domain is very clearly seen in the currently most extensive statistical data for the Galactic Disk contact binaries visible in the Baade’s Window direction (R98b).
Figure 5 compares these data with the extrapolation of the DM91 period distribution to very short periods. The validity of such an extrapolation is put in question by a small but homogeneous sample of dwarfs in the young Hyades cluster (Griffin 1985), where an increase in star numbers toward short periods within the range $`1<P<10`$ days is actually observed (see Figure 5). Although the statistics for the Hyades binaries are poor, the trend is quite obvious. It is not clear if this is a remnant of the formation process or an indication that the AML evolution actually slows down at very short periods due to an inversion in the braking efficiency (negative $`\alpha `$). Such a possibility is not entirely excluded as the rapid-rotation regime is very poorly understood in terms of the AML efficiency. For example, the presence of thinly-populated “tails” of extremely rapidly-rotating late-type dwarfs in young clusters (Hartmann & Noyes 1987) may indicate such an inverted AML efficiency. Besides, very close binary stars do not have to behave exactly like single stars, due to the influence of the tidal effects on the magnetic-field generation modes.
In summary: the value of the exponent $`\alpha `$ in the braking efficiency law, $`\dot{H}\propto P^{-\alpha }`$, for the high rotation rates in close binary systems is currently a very poorly known quantity, so that any observational results which are free of systematic effects would be of great value. An estimate of $`\alpha `$ was the goal of the present paper.
## 3 THE SAMPLE OF PRE-CONTACT SYSTEMS
### 3.1 Definition of the sample
The extraction of the sample of pre-contact systems is not a trivial matter, given the limited photometric information for the periodic variable stars in the OGLE catalog. The availability of only single-color light curves is particularly inconvenient because – without color curves – we could not eliminate the semi-detached Algols, which are particularly easy to detect (due to deep primary minima) and are thus expected to dominate in number over the Main Sequence stars in surveys similar to OGLE. Of course, Algols can be recognized, even from a single-color light curve, thanks to their characteristic large difference of depth of the minima. One can eliminate them, as we did, by applying a criterion of equal eclipse depth, but this introduces a restrictive limitation to systems with components having similar effective temperatures. On the Main Sequence, this criterion is basically equivalent to the selection of binaries with similar components, i.e. with mass-ratios ($`q=M_2/M_1\leq 1`$) close to unity ($`q\simeq 1`$). Such systems may still be numerous; for example the shortest-period currently known Main Sequence binary, BW3.038 (the naming convention used here is the same as in the previous papers of this series: BW for Baade’s Window – these letters are sometimes omitted – followed by the OGLE field number, and then the variable number after the dot; the central field BWC is identified by zero), at the very short-period end of our distributions is such a system (Maceroni & Rucinski 1997 = MR97). Nevertheless, a limitation to mass-ratios close to unity may be considered a drawback in our approach. However, it was inevitable, in view of the very limited information of the single-color light curves that we had at our disposal.
The “pre-contact binary” light-curve shape filter that we used is based on the Fourier cosine series decomposition of the light curves. The basic philosophy of the approach follows the principles of the filter used to select a sample of contact binaries described in R97a and R97b (consult in particular Figure 5 in R97a). The first coefficient, $`a_1`$, reflects the difference between the two eclipses and for equally-deep minima goes to zero; the second, $`a_2`$ is the largest of the coefficients and represents the total amplitude of the light variations; $`a_4`$ measures the eclipse “peakedness” and goes to small values for the light curves of contact systems. As was shown in R97a, the pair ($`a_2`$, $`a_4`$) forms a powerful separator/discriminant of contact/detached binaries.
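As an illustration of this kind of decomposition, the sketch below fits a low-order cosine series to a phased light curve by linear least squares; the exact order and normalization adopted in R97a are not reproduced here, and the toy light curve (in normalized flux, with the primary minimum at phase zero) is our own:

```python
# A minimal sketch of a cosine-series light-curve decomposition of the kind
# described above: fit l(phi) ~ a0 + sum_i a_i cos(2*pi*i*phi) to a phased
# light curve (phase in [0,1), deeper/primary minimum at phase 0).
import numpy as np

def cosine_coefficients(phase, flux, nterms=4):
    """Return a_1 ... a_nterms from a linear least-squares fit (a_0 absorbed)."""
    design = np.column_stack(
        [np.ones_like(phase)] +
        [np.cos(2.0 * np.pi * i * phase) for i in range(1, nterms + 1)])
    coeffs, *_ = np.linalg.lstsq(design, flux, rcond=None)
    return coeffs[1:]

# toy W UMa-like curve with equal minima: expect a1 ~ 0 and a2 < 0
rng = np.random.default_rng(0)
phase = rng.uniform(0.0, 1.0, 300)
flux = 0.85 - 0.15 * np.cos(4.0 * np.pi * phase) + 0.005 * rng.normal(size=300)
a1, a2, a3, a4 = cosine_coefficients(phase, flux)
print(f"a1 = {a1:+.4f}  a2 = {a2:+.4f}  a4 = {a4:+.4f}")
```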
The procedure to select our sample of short period pre-contact systems required a few steps. By means of the contact/detached binaries filter of R97a, a first sample of all the non-contact systems (339 objects) was selected. These are the systems falling above the “contact line” in the $`a_4`$ vs. $`a_2`$ plane, in the left panel of Figure 5.
The application of the shape filter based on the $`a_1`$ coefficient, to keep only binaries with similar eclipse depths, is shown in the right panel of the same figure. This step involves the selection of systems with similar components. The equal eclipse-depth criterion implies the retention of values of $`a_1`$ close to zero. A reasonable lower limit on $`a_1`$, and hence a maximum allowed difference of the minima, can be decided by inspection of the $`a_1`$ distributions which are shown in the two left-side panels of Fig. 5. The lower panel shows the distribution of the current sample of 339 detached binaries while the upper one is for the contact binaries of the R-sample in R97a. The R-sample is composed of W UMa binaries with very similar effective temperatures and thus equally-deep minima, hence the distribution peaks at small $`|a_1|`$. In contrast to the R-sample of contact binaries, the distribution of $`a_1`$ for our sample of detached systems is strongly bimodal. The comparison of the two distributions suggests that similar depths of eclipses are selected for a limiting value of $`a_1\geq -0.017`$. This criterion establishes, however, only a necessary condition, as $`a_1`$ is not only temperature-difference dependent but also approximately scales with the light-curve amplitude. A filter based only on $`a_1`$ would therefore pass many low-amplitude light curves for systems with low orbital inclination, but appreciable temperature difference. Since the second Fourier coefficient $`a_2`$ is related to the light-curve amplitude, the simplest way to take the light-curve amplitude scaling into account is by means of the ratio $`r=a_1/a_2`$. The distribution of $`r`$ for the same samples of contact and detached binaries is shown in the right panels of Figure 5. A reasoning similar to that given above for the $`a_1`$ distributions suggests a threshold value of $`r_{max}=0.2`$. The selection on the depth difference has been done, therefore, using both conditions.
A special comment is needed for the few systems with positive values of $`a_1`$, and hence negative values of $`r`$ ($`a_2`$ is always negative). By definition (see R97a), $`a_1`$ is negative for a light curve where the primary minimum is the deepest one. The OGLE team assigned the primary eclipses to the deeper minima, as is customary for eclipsing variables, so that $`a_1`$ should in principle be always negative. Positive $`a_1`$’s can, therefore, be only due to errors introduced either in this assignment or in the calculation of the Fourier coefficients; the latter circumstance may result from large photometric errors or because of a poor or uneven light-curve phase coverage. The presence in Figure 5 of several points at $`a_1>0`$, requires a criterion for rejection of the positive $`a_1`$’s as well. Otherwise the poor light curves would be favored with respect to the good-quality ones. We decided, after a check of the light curves of the most extreme cases, that the pure rejection of positive $`a_1`$ systems would not be justified, as some curves were indeed of rather poor quality, but of the expected shape. The simplest choice was therefore to apply the previously defined limits on $`a_1`$ in its absolute value. The selection was then performed according to: $`|a_1|<0.017`$ and $`|r|<0.2`$. This additional constraint does not change the sample in a significant way as only four systems are rejected, but improves the consistency of the selection.
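In code, the depth-difference part of the selection amounts to a few simple cuts; the sketch below is only schematic, and the contact/detached separator in the $`(a_2,a_4)`$ plane of R97a is represented by a placeholder flag rather than reproduced:

```python
# A schematic version of the equal-depth selection described above.  The
# thresholds come from the text; `is_detached` stands in for the (a2, a4)
# contact/detached discriminant of R97a, which is not reproduced here.
def passes_equal_depth_filter(a1, a2, is_detached):
    r = a1 / a2                         # a2 is negative by convention
    return (is_detached
            and abs(a1) < 0.017         # nearly equal eclipse depths ...
            and abs(r) < 0.2)           # ... also relative to the light-curve amplitude

print(passes_equal_depth_filter(a1=-0.010, a2=-0.20, is_detached=True))   # True
print(passes_equal_depth_filter(a1=-0.060, a2=-0.20, is_detached=True))   # False
```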
One of the four systems rejected at this stage, BW5.173, is a potentially interesting object for further studies. It is a faint binary ($`I=17.82`$) with an orbital period $`P\simeq 0.66^d`$ and the light curve of a well-detached system with components of similar effective temperatures. After the de-reddening procedure (see Section 3.2), it turns out, with $`(V-I)_0=2.28`$, to be the reddest system of the whole sample of detached binaries, with an absolute magnitude $`M_I=7.55`$ and a distance $`d=791`$ pc. Thus, the system seems to be very similar to BW3.038, i.e. it consists of a pair of M-type dwarfs, but with a period three times longer. Since such eclipsing systems are rare, the system is significant as it increases the small number of potential calibrators of the red end of the main sequence.
In addition to the light-curve shape filter described above, a physical condition on the orbital period was used as the final step in the sample-definition to eliminate the rare systems with evolved components. These can be easily recognized by their relatively long periods, so we set an upper limit of $`P=8^d`$ as a reasonable value for tidally locked binaries with MS components. The systems selected through all the criteria described above are marked by filled symbols in Figure 5, which shows the details of our Fourier filter.
The sample of systems selected through the light-curve filter and the period criterion $`P<8^d`$ has been further checked for the presence of systems which could deteriorate the quality of the sample. In particular, the Fourier coefficients for systems with partially-covered light curves may be entirely erroneous. Visual examination of the light curves led to the removal of two systems, BW2.072 and BW4.099. After removal of six further systems without measured $`(V-I)`$ colors, the sample consisted of 77 systems. All these, except three rejected because of too blue intrinsic colors (see Section 3.2), formed the final sample of 74 systems used in this paper. The systems are listed in Table 5, where – in addition to the original OGLE data of the period, $`P`$, the maximum magnitude and color, $`I`$ and $`(V-I)`$, and the amplitude $`\mathrm{\Delta }I`$ – we give the derived values (see below): the absolute magnitude $`M_I`$, the de-reddened color $`(V-I)_0`$ and the distance in parsecs, $`d`$. The Fourier coefficients are available from the authors through their respective Web pages (the tables of Fourier coefficients for all 933 binaries discovered by the OGLE project are located at http://www.astro.utoronto.ca/~rucinski/ogle.html and http://www.mporzio.astro.it/~maceroni/ogle.html).
### 3.2 Properties of the systems in the sample
Additional information on the properties of the systems came from the consideration of their absolute magnitudes and de-reddened colors. These were derived by an iterative process of distance determination, in an approach somewhat similar to that described in R97a, and identical to that used in the study of the system BW3.038 (MR97).
The procedure was as follows: an adopted absolute-magnitude calibration for the main sequence, $`M_I=M_I((V-I)-E_{V-I}(d))`$, was used to find the distance $`d`$ and the reddening $`E_{V-I}`$. The procedure was iterative, with the reddening allowed to vary linearly with distance between zero and the maximum value derived from the background Bulge giants by Stanek (1996), assuming that $`E_{V-I}^{max}`$ is reached at the distance of 2 kpc and then does not increase. The adopted MS relation was that of Reid and Majewski (1993), but with a shift by 0.75 magnitude to allow for two identical stars, in consistency with the definition of the sample of $`q\simeq 1`$ systems. The results of the approach are the de-reddened colors, $`(V-I)_0`$, the distances $`d`$ and the absolute magnitudes, $`M_I`$ (see Table 5).
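A schematic version of this iteration is given below; the main-sequence calibration, the maximum reddening and the $`I`$-band extinction coefficient are placeholders of our own (the paper itself uses the Reid & Majewski 1993 relation and the Stanek 1996 reddening values), so the sketch only illustrates the structure of the loop:

```python
# Schematic distance/reddening iteration of the kind described above.
# Assumptions of this sketch: a crude linear main-sequence relation,
# A_I ~ 1.5 E(V-I), and a single E_max value; none of these are the
# calibrations actually used in the paper.
def distance_and_color(I, VI, ms_absmag, E_max, n_iter=30):
    d = 1000.0                                    # starting guess [pc]
    for _ in range(n_iter):
        E = E_max * min(d, 2000.0) / 2000.0       # reddening grows linearly out to 2 kpc
        VI0 = VI - E                              # de-reddened color
        MI = ms_absmag(VI0) - 0.75                # two identical stars: 0.75 mag brighter
        A_I = 1.5 * E                             # assumed extinction law
        d = 10.0 ** (0.2 * (I - A_I - MI) + 1.0)  # distance modulus -> parsecs
    return d, VI0, MI

# usage with a crude placeholder main-sequence relation (illustrative only)
d, VI0, MI = distance_and_color(I=16.5, VI=1.4, ms_absmag=lambda c: 2.5 + 3.4 * c, E_max=0.9)
print(f"d = {d:.0f} pc, (V-I)_0 = {VI0:.2f}, M_I = {MI:.2f}")
```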
The intrinsic colors of three systems, 7.004 with $`(V-I)_0=0.21`$, and 1.221 and 6.081, both with $`(V-I)_0=0.44`$, were found to be too early (too blue) for consideration in the sample of late-type stars. The elimination of those three systems led to the reduction of the sample to 74 systems. The intrinsic color distribution (Figure 5) shows most of the systems in the range $`0.6<(V-I)_0<1.3`$ with a tail extending to red colors. One may expect that the M-type dwarfs populating the tail may have different AML properties than the FGK-type stars. Thus, we separately considered 64 systems with $`(V-I)_0<1.3`$ and 10 systems with redder colors. The border line is located approximately at the spectral type K5V.
The division into the two spectral groups was dictated not only by the possibility of the different regime in the AML, but also because the sampled spatial volumes are expected to be vastly different as a function of absolute magnitudes, leading to a possibility of very different discovery selection effects. We can gain some insight into the matter of the spatial depth of the samples by consideration of the increase in the number of stars with the distance. Figure 5 shows the logarithmic plots of the cumulative number of stars versus the distance for both groups. For the Euclidean geometry, the slope is expected to be 3, which is approximately fulfilled by the nearby M-dwarfs. For the FGK-type stars we see a definite deficit at small distances, then the expected increase in number and then a strong cut-off at about 3 kpc. The deficit at small distances is due to the bright limit of the OGLE sample at $`I=14`$, while the cut-off at large distances is due to the combination of the faint limit of the OGLE sample and of the line-of-sight leaving the galactic disk (see Section 9.1 in R98b). The cut-off is relatively sharp because 72 and 92 percent of stars of the FGK group are located closer than 3 kpc and 4 kpc, respectively.
The main goal of this paper is the derivation of a statistically sound orbital-period distribution. This distribution may – and probably does – depend on the spectral type range, but with the small number of systems we cannot avoid the necessity of grouping the systems into relatively coarse samples. We therefore check first if the division into only two broad spectral groups is a legitimate one and whether the range of colors in the FGK group is not selected too wide.
Previous studies of all-sky samples in the same period range suggested a change in the period distribution which is dependent on the spectral type, although the division seems to be located between early A–F and late G–M type binaries. The distributions of about 1200 close field binaries, grouped by spectral type, of Farinella et al. (1979) show a change in shape from unimodal, for systems with O–F type components, to bimodal for G types, and to multimodal for K–M spectral types. A later study by Giuricin et al. (1984), of 600 eclipsing and spectroscopic field binaries, also shows – though less clearly defined – a trend towards broader distributions for later spectral types. These large field samples are, however, so heterogeneous that it is practically impossible to deal with the many selection effects and misclassifications affecting them. The current sample, small but homogeneous, provides a totally independent and external check on these results.
A first test of the homogeneity of our sample of 74 systems was done by splitting it into color-range defined subsamples. In this test, we tried two partitions: into two equal-size samples of 37 systems each (containing systems respectively bluer and redder than $`(V-I)_0=0.93`$), and into two sub-samples of FGK and M binaries, as defined above (the division at $`(V-I)_0=1.3`$). The null hypothesis of the same parent population was checked by means of a standard two-sided Kolmogorov-Smirnov test, by computing the maximum absolute difference between the two cumulative distributions, $`D_{KS}`$, and the significance level of the null hypothesis, $`P_{KS}`$. The cumulative period distributions are shown in Figure 5. We see some subtle differences between the sub-samples, but – taking into account the small number of objects in the sub-samples – they are not significant to the point of rejection of the null hypothesis of the same period distribution. We note that for the equal-number division, the blue sub-sample extends from 2 to 6 kpc while the red one extends only from 0.5 to 3 kpc, so that different discovery selection biases are not excluded. The results of the KS test, reported in Table 5, do not allow firm conclusions: the significance levels for both divisions are close to 0.6; the differences are probably mostly due to systematic trends in discovery selection effects, but – at least – the results indicate that the null hypothesis cannot be rejected.
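For reference, such a comparison takes only a few lines with standard tools; the period arrays below are placeholders rather than the actual sub-samples of Table 5:

```python
# A minimal two-sided Kolmogorov-Smirnov comparison of two period sub-samples,
# as described above (placeholder data; the real sub-samples come from Table 5).
import numpy as np
from scipy import stats

periods_blue = np.array([0.45, 0.62, 0.80, 1.10, 1.60, 2.30, 3.10, 4.40])  # days, placeholder
periods_red  = np.array([0.40, 0.55, 0.75, 1.00, 1.90, 2.80, 5.10])        # days, placeholder

D_KS, P_KS = stats.ks_2samp(np.log10(periods_blue), np.log10(periods_red))
print(f"D_KS = {D_KS:.3f},  significance of the null hypothesis P_KS = {P_KS:.2f}")
```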
Similarly to the cumulative period distributions, no obvious dependences are visible in the scatter diagrams for the $`(V-I)_0`$ vs. $`\mathrm{log}P`$ and $`M_I`$ vs. $`\mathrm{log}P`$ relations, as shown in Figure 5. This is confirmed by the values of the correlation coefficients, which are all close to zero. The results of the application of the Kendall ($`r_K`$) and Spearman rank ($`r_S`$) correlation coefficients are given in Table 5. We should note that although $`M_I`$ is partly derived from $`(V-I)_0`$, it also depends on $`I`$, so that the correlation coefficients given in the table do not have to be exactly the same, as they appear to be.
The lack of correlation between periods and colors or absolute magnitudes is the reason why no attempt at the definition of a volume-limited sample (say, to 3 kpc) was made, thus allowing the inclusion of more distant systems as well. We retained the partition into the FGK and M groups, however, entirely on the basis of the expectation that the M-dwarf sub-sample may have different AML properties. For that reason, we consider in this paper both the FGK group of 64 systems and the full sample of 74 systems. Obviously, the M-dwarf group is too small for any separate period-distribution considerations.
## 4 THE PERIOD DISTRIBUTION AND DISCOVERY SELECTION BIASES
### 4.1 The orbital-period distribution
We have used the sample of 64 FGK-type binaries and then the full sample augmented by the 10 M-type binaries to analyze the orbital period distribution and thus infer the AML-driven orbital-period evolution in the pre-contact stages. The implicit assumption was that the orbital periods and mass-ratios are not correlated. In the opposite case, the pre-selection of $`q\simeq 1`$ systems might lead to a bias in the period distribution.
The observed period distributions, for the whole sample of 74 systems and for the FGK group of 64 systems, are shown in Figure 5. Except for very short periods, $`P<0.35`$ day, where no detached FGK binaries could exist because of the onset of contact, the data show a trend of progressively decreasing number of binary systems with increasing period. Weighted fits to the histograms of $`\mathrm{log}N`$ versus $`\mathrm{log}P`$ (Figure 5), in the form $`\mathrm{log}N=A_0+A_1\mathrm{log}P`$, with weights calculated on the basis of Poissonian errors in bins $`\mathrm{\Delta }\mathrm{log}P=0.2`$ wide, gave $`A_0=1.05\pm 0.06`$, $`A_1=-0.79\pm 0.16`$ for the whole sample, and $`A_0=1.02\pm 0.07`$, $`A_1=-0.80\pm 0.17`$ for the FGK sub-group. The values of $`\chi ^2`$ for the fits were 3.1 and 3.7 for the 6 log-period bins. The linear fits are thus only marginally appropriate, but definitely much better than flat distributions.
To define the uncertainty limits on the coefficients $`A_0`$ and $`A_1`$, a Monte-Carlo experiment has been conducted in which weighted fits were made to several thousand artificial Poisson distributions with the same mean values for each log-period bin. Because the mean values do not form a linear dependence, such fits provide more realistic estimates of the uncertainties of the coefficients $`A_0`$ and $`A_1`$. The results, expressed in terms of the distributions of the individual determinations of the coefficients around their median values, are given in Table 5. The fits are shown in the second panel of Figure 5.
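The fit-plus-Monte-Carlo procedure is simple enough to sketch explicitly; the bin counts below are placeholders, not the actual histogram of Figure 5:

```python
# Sketch of the weighted log N -- log P fit and the Monte-Carlo estimate of the
# coefficient uncertainties (placeholder counts; bins Delta log P = 0.2 wide).
import numpy as np

logP_centers = np.arange(-0.4, 0.8, 0.2)              # 6 placeholder bin centers
counts = np.array([20, 15, 12, 10, 7, 5])              # placeholder counts per bin

def weighted_fit(counts):
    good = counts > 0
    x, y = logP_centers[good], np.log10(counts[good])
    w = np.sqrt(counts[good]) * np.log(10.0)            # 1/sigma of log10(N), Poisson errors
    A1, A0 = np.polyfit(x, y, 1, w=w)
    return A0, A1

A0, A1 = weighted_fit(counts)

rng = np.random.default_rng(42)                          # refit Poisson realizations
mc_slopes = np.array([weighted_fit(rng.poisson(counts))[1] for _ in range(3000)])
print(f"A0 = {A0:.2f},  A1 = {A1:.2f} +/- {mc_slopes.std():.2f} (Monte-Carlo scatter)")
```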
The linear fits in Figure 5 show an unexpected slope, as the number of systems decreases, rather than increases, with period. Even for a “saturated” AML rate (see Section 2) the slope should be the opposite of what is found. For any $`\alpha \geq 0`$ in $`\dot{H}\propto P^{-\alpha }`$ the logarithmic slope of the number distribution should be larger than 1/3, as in $`N\propto P^{\alpha +1/3}`$. In other words, the short-period end of the period distribution for pre-contact binaries should always be less populated.
We think that the observed trend, which is contrary to the expectations, is entirely due to discovery-selection bias effects scaling in proportion to the period length. For the first time we have a well-defined sample of eclipsing systems and for the first time we can see the selection effects so clearly, without other discovery biases. There are several reasons why systems with longer periods are more difficult to detect: (1) Chances of observing eclipses decrease with the increasing separation of the components, as the range of orbital inclinations rapidly shrinks; (2) Chances of detecting eclipses decrease as they become progressively shorter; (3) Fewer orbital cycles and thus fewer eclipses are observed for a given duration of the survey.
The only rigorous approach in handling the detection biases would be to simulate the whole discovery process, starting from the data-taking through all the following reduction stages. Such simulations could be done only by the OGLE team. Since they are not available, simplified approaches of handling the discovery selection have been attempted.
### 4.2 Orbital-period selection biases
The discovery selection biases can be estimated by considering the probability that a distant observer notices eclipses. One way to estimate this probability is by evaluating the solid angle subtended by the sum of the fractional radii, $`(r_1+r_2)/a`$, relative to the hemisphere visible to a distant observer, i.e. by dividing it by $`2\pi `$ steradians. This relative solid angle is given by the integral of the relative eclipse durations over the range of inclinations that can result in eclipses, which evaluates to $`1-\sqrt{1-\left(\frac{r_1+r_2}{a}\right)^2}\simeq \frac{1}{2}\left(\frac{r_1+r_2}{a}\right)^2`$. For fixed radii, the probability of discovery of an eclipsing system thus scales as the inverse square of its orbital separation, $`a^{-2}`$. The same proportionality is obtained by considering the fraction of the sky one star “sees” covered by the other star. Obviously, such very simple estimates only very approximately represent trends in the depth and duration of the eclipses, as seen by a distant observer, but they do show that the discovery selection effects rather strongly depend on the parameters of binary systems.
Below, we will take a pragmatic approach and consider other “strengths” (other power-law dependences) of the discovery biases, but we feel that the discovery-probability scaling according to $`a^{-2}`$ should be a particularly reasonable assumption for the equal-mass, hence presumably equal-radius, component systems that we selected. The correction factor that we should use to multiply the statistics of periods is then expected to behave as: $`corr\propto a^2\propto M_{tot}^{2/3}P^{4/3}`$. The mass dependence can be further removed by observing that, as we described in Section 3, the orbital periods in our sample do not correlate with the color or absolute magnitude. We can therefore assume that they do not correlate with the total mass of the system, $`M_{tot}`$. The correction factor to apply to the histograms in Figure 5 would then be: $`corr\propto P^{4/3}`$. Since we considered the logarithmic slopes in fitting the $`\mathrm{log}N`$ versus $`\mathrm{log}P`$ dependences, we will from now on – instead of $`corr`$ – use the slope correction $`C`$, as in $`corr\propto P^C`$, with the most likely value of $`C=4/3`$.
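The geometric origin of the $`P^{4/3}`$ scaling can be verified numerically; in the sketch below the total mass and component radii are illustrative choices of ours (roughly a pair of solar-type stars), not values taken from the sample:

```python
# Sketch of the geometric discovery-bias scaling discussed above: the eclipse
# weight ~ 1 - sqrt(1 - k^2), with k = (r1 + r2)/a, behaves as ~a^-2, and with
# Kepler's third law (a ~ P^(2/3)) the correction to the counts grows as ~P^(4/3).
import numpy as np

def eclipse_weight(r_sum, a):
    k = r_sum / a
    return 1.0 - np.sqrt(1.0 - k * k)

P = np.array([0.4, 0.8, 1.6, 3.2, 6.4])          # days
a = 5.3 * P ** (2.0 / 3.0)                        # solar radii, for M_tot ~ 2 Msun (assumed)
w = eclipse_weight(r_sum=2.0, a=a)                # two roughly solar-radius stars (assumed)
corr = w[0] / w                                   # correction normalized to the shortest period
slope = np.polyfit(np.log10(P), np.log10(corr), 1)[0]
print(np.round(corr, 2), " log-log slope ~", round(slope, 2))   # slope close to 4/3
```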
The relation between the braking-efficiency exponent, $`\alpha `$, the observed slope, $`A_1`$, and the observational selection correction, $`C`$, is: $`\alpha =A_1+C-1/3`$. We will consider the implications of assuming various values of $`C`$. For the value of $`A_1`$, we have a choice of selecting between the directly determined values of $`A_1=-0.79`$ or $`-0.80`$, or those from the Monte Carlo experiment, $`A_1=-0.74`$ or $`-0.73`$. For simplicity, and because any results will in fact be dominated by the systematic effects characterized by $`C`$, we set – from now on – $`A_1=-0.75\pm 0.20`$ (the error estimate comes from the Monte Carlo experiment, see Table 5). Some illustrative cases are discussed below:
* $`C=0`$: This is the case of no discovery selection effects. This case is hardly possible, not only because the detection biases almost certainly exist, but also because we would then obtain a strongly inverted braking efficiency law with $`\alpha =-1.08`$. This would imply that rapidly rotating stars lose relatively less angular momentum than the slowly rotating ones.
* $`\alpha =0`$: A perfectly “saturated” law, $`dH/dt\simeq \mathrm{const}`$, with no dependence on the period. The bias correction would then be $`C=+1.08`$, which is only slightly less than our most preferred value of 4/3. We note also that the evolution would in this case produce almost no change of the initial slope (going from a value of $`\beta =0.35`$ to $`\alpha +1/3=1/3`$).
* $`C=+4/3`$: This is our preferred value of the bias, resulting in the value of $`\alpha =+0.25`$, i.e. close to saturation, yet with a weak acceleration of the AML evolution with shortening of the period.
* $`C>1.75`$: This inequality is considered here following the theoretical arguments of Stȩpień (1995) that $`\alpha -2/3>0`$ (see his Figure 1); a particular value in this range is the local fit of the braking law, as used in Figure 1, that yields $`C=2.57`$. These solutions would imply that the current sample suffers from stronger discovery selection biases than expected for $`C=4/3`$.
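The arithmetic behind these cases can be collected in a few lines (purely a restatement of $`\alpha =A_1+C-1/3`$ with $`A_1=-0.75`$):

```python
# alpha = A1 + C - 1/3 for the illustrative bias corrections discussed above.
A1 = -0.75
cases = [(0.0, "no bias"), (1.08, "saturated braking, alpha ~ 0"),
         (4.0 / 3.0, "preferred geometric bias"), (2.57, "Stepien local fit")]
for C, label in cases:
    alpha = A1 + C - 1.0 / 3.0
    print(f"C = {C:+.2f} ({label}):  alpha = {alpha:+.2f}")
```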
In conclusion, we feel that the most likely value of the AML braking-law exponent is close to $`\alpha \simeq 0`$. The discovery selection effects are strong and important, yet a departure from the bias exponent $`C\simeq 4/3`$ would imply implausible combinations of parameters. For $`C=0`$, the period distribution would have a genuinely negative slope, with more binaries accumulating at the short-period end of the period distribution, just before the conversion into contact binaries. We note that such a law would agree with the statistics of short-period systems in the Hyades (Griffin 1985; see Figure 5 in Section 2), provided, of course, that that sample does not have its own detection biases. However, it is really hard to imagine that discovery biases do not exist in the OGLE database, so that almost certainly $`C\neq 0`$. Applying progressively larger corrections $`C`$ would make $`\alpha `$ closer to zero and then positive. If $`C>4/3`$, then the discovery biases would be exceptionally strong. We cannot exclude this possibility, but consider it unlikely.
The referee of the first version of the paper pointed out that a saturated braking law extending to periods as long as our upper limit may be in disagreement with the results for late-type single stars. The activity indicators that can be related to the AML rate show a change of slope – from a saturated to a steeper law – at rotation periods around 3–4 days (see for instance Wichmann et al. 1998). A mass-dependent slope transition (from $`P\simeq 2^d`$ for $`m=m_{\odot }`$ to $`P\simeq 9^d`$ for $`m=0.5m_{\odot }`$) was included in the theoretical models of Bouvier et al. (1997) and found to provide a good fit to PMS and MS rotational data. The small size of our sample and the consequent relatively coarse binning did not allow us to properly analyze this feature. The expected change of the slope would fall in the tail of our distribution, since we currently have only three systems in the bin $`P>5`$ days. On qualitative grounds, Figure 5 does suggest that an exclusion of the last bin from the fit would result in a slightly flatter braking law, but we feel that an estimate of the slope based on such reduced data would actually be an over-interpretation of the available material.
## 5 DISCUSSION AND CONCLUSIONS
We have used the by-product of the OGLE microlensing project, the database of eclipsing binaries toward Baade’s Window, to analyze the period distribution of short-period ($`0.19<P<8`$ days), late-type, main-sequence systems. This distribution was used to shed light on the angular-momentum-loss efficiency for rapidly-rotating, late-type stars. The final results are very tentative as they are totally dominated by the systematic effects of discovery biases. Yet, they may be of importance for similar future applications of microlensing databases, so that certain lessons can be learned from our experiences.
The main, obvious lesson is thus the very high importance of testing the variability-discovery algorithms for period-length biases. This can be done only by the observing teams, by simulations of the detectability of variable stars. Any a posteriori corrections to the observed statistics have debatable value, as is the case with our slope correction $`C`$, introduced in Section 4. Depending on its value, we can obtain any braking law $`dH/dt\propto P^{-\alpha }`$, with $`\alpha =-1.1+C`$. We tend to favor a relatively strong detection-bias law described by $`C=+4/3`$ (implying a braking law close to the “saturated” one, with $`\alpha \simeq 0`$), but any value of $`C`$ is basically permissible. It seems very unlikely that $`C=0`$, that is, that the OGLE database has no period biases. For this case, however, the negative slope of the braking law would agree with the statistics of short-period binaries in the Hyades (Griffin 1985), with more binaries at the short-period end.
The other lesson concerns the availability of color curves. Because the light curves were observed in one color only, we had to use a very strong light-curve-shape criterion to filter out the easily-detectable Algols and define a sample of short-period, main-sequence systems with almost identical components. The sample that we defined consists of 74 objects, but it would be larger if all systems with MS components having unequal temperatures had not been rejected. Because of this restriction, we could not meaningfully consider any quantities related to the spatial frequency of occurrence (such as the luminosity or period functions discussed extensively for the OGLE contact systems in R98b). This resulted in the limitation that the only available quantity was the slope of the period distribution, and even that was affected by strong discovery selection biases.
We note that our sample of 74 systems consisted of 64 binaries of spectral types F, G and K and 10 M-type systems with a large spread in colors. We expected differences in properties between these samples and thus considered them separately, but did not notice any major disparities which would affect the period-distribution statistics.
In addition to these somewhat general statements, we add a minor, but firmer, conclusion that there exists no evidence in our homogeneous sample for any bimodal period distribution, similar to that found by Farinella et al. (1979) with a dip at $`\mathrm{log}P\simeq 0.2`$. We feel that this dip and a secondary maximum in the distribution were most probably produced by the misclassification of evolved systems as consisting of main-sequence components. A smooth period distribution, without any structure, implies a different form of the braking law with respect to that derived by MV91 and Maceroni (1992). Though the functional form remains similar, in the sense of a “saturated” braking at short orbital periods ($`\alpha =0`$), the other main feature, namely the sudden increase of the braking efficiency for rotation rates of about $`10\omega _{\odot }`$, is no longer mandated by the data. That feature was needed to reproduce the bimodal distribution of field G-type systems, which had a pronounced dip around $`\mathrm{log}P=0.2`$ ($`P`$ in days) in the distribution of Farinella et al. (1979).
We would like to thank the OGLE team for access to their database. This work was partially supported by research grants to CM from the Italian MURST (Ministry of University, Scientific and Technological Research) and from the Italian Space Agency. This work was started by SMR during his employment by the Canada-France-Hawaii Telescope.
Figure 1: Wilson lines for moving partons with the attached AdS surface.
## Abstract
Using the AdS/CFT correspondence and the eikonal approximation, we evaluate the elastic parton-parton scattering amplitude at large $`N`$ and strong coupling $`g_{YM}^2N`$ in N=4 SYM. We obtain a scattering amplitude that reggeizes and unitarizes at large $`\sqrt{s}`$.
KIAS-P99045
Elastic Parton-Parton Scattering
From AdS/CFT
Mannque Rho<sup>a,b</sup> (e-mail: [email protected]), Sang-Jin Sin<sup>a,c</sup> (e-mail: [email protected]) and Ismail Zahed<sup>a,d</sup> (e-mail: [email protected])
<sup>a</sup> School of Physics, Korea Institute for Advanced Study, Seoul 130-012, Korea
<sup>b</sup> Service de Physique Théorique, CE Saclay, 91191 Gif-sur-Yvette, France
<sup>c</sup> Department of Physics, Hanyang University, Seoul 133-791, Korea
<sup>d</sup> Department of Physics and Astronomy, SUNY-Stony-Brook, NY 11794
1. Elastic quark-quark and gluon-gluon scattering at large $`s`$ and fixed $`t`$ (Mandelstam variables) pertains to the domain of non-perturbative QCD. Theoretical procedures based on resumming large classes of perturbative contributions have been proposed, partially accounting for the reggeized form of the scattering amplitude and the phenomenological success of pomeron/odderon exchange models (and references therein). In an inspiring approach, Nachtmann and others suggested using non-perturbative techniques for the elastic scattering amplitude. In the eikonal approximation and to leading order in $`t/s`$, the quark-quark amplitude at large $`s`$ was reduced to a correlation function of two light-like Wilson lines. The latter was assessed in the stochastic vacuum model.
Recently, Maldacena has made the remarkable conjecture that the large $`N`$ behavior of $`N=4`$ supersymmetric gauge theory is dual to the string theory in a non-trivial geometry. This AdS/CFT conjecture, made more precise by Gubser, Klebanov and Polyakov and by Witten and extended to the non-supersymmetric case by Witten , provides an interesting and nonperturbative avenue for studying gauge theories at large $`N`$ and strong coupling $`g_{YM}^2N`$. In particular the heavy quark potential was found to be Coulombic for supersymmetric theories and linear for non-supersymmetric theories .
In this letter we suggest using the AdS/CFT approach to analyze the elastic parton-parton scattering amplitude in the eikonal approximation for $`N=4`$ SYM. In section 2 we briefly discuss the salient features of the parton-parton reduced elastic amplitude at large $`\sqrt{s}`$. The scattering amplitude in the eikonal approximation is reduced to the Fourier transform of the connected part of a correlator of two time-like Wilson lines. In section 3 we use the AdS/CFT approach to calculate the correlator. Following , we propose that it is given by the minimum (regularized) area of the world-sheet with the time-like parton trajectories at its boundaries, and give a simple variational method that allows for a closed-form expression for the minimal area in the presence of a finite time cutoff. In section 4 we summarize and conclude.
2. First consider the quark propagator in an external non-Abelian gauge field. In the first quantized theory, the propagator from $`x`$ to $`y`$ in Minkowski space reads
$`𝐒(x,y;A)=<x|\frac{i}{i\not\!D-m+i0}|y>=\int _0^{\infty }dT\,e^{-i(m-i0)T}<x|e^{-\not D\,T}|y>`$ (1)
that is
$`𝐒(x,y;A)=\int _0^{\infty }dT\,e^{-i(m-i0)T}\int _x^yd[x]\,\delta (1-\dot{x}^2)\,𝐏_c\,e^{ig_{YM}\int ds\,A\cdot \dot{x}}\,\frac{1}{2}𝐏_s\,e^{\frac{i}{2}\int ds\,\sigma ^{\mu \nu }\dot{x}_\mu \ddot{x}_\nu }`$ (2)
where $`𝐏_{c,s}`$ are orderings over color and spin matrices, and $`\sigma _{\mu \nu }=[\gamma _\mu ,\gamma _\nu ]/2`$. The first exponent in the path integral is an arbitrary Wilson line in the fundamental representation of SU(N)<sub>c</sub>, and the second exponent is a string of infinitesimal Thomas precessions along the Wilson line. The integration is over all paths with $`x_\mu (0)=x_\mu `$ and $`x_\mu (T)=y_\mu `$, where $`T`$ is the proper time. The dominant contributions come from those paths with $`T\sim 1/m`$. Heavy quarks travel shorter in proper time than light quarks, and the mass gives an effective cutoff of the proper-time range. This observation will be important below.
A quark with large momentum $`p`$ travels on a straight line with 4-velocity $`\dot{x}_\mu =u_\mu =p_\mu /m`$ and $`u^2=1`$. Throughout, we will distinguish between the 4-velocity $`u_\mu `$ and the instantaneous 3-velocity $`v=dx/dt=p/E\leq 1`$. For a straight trajectory, the 4-acceleration $`\ddot{x}_\mu =a_\mu =0`$ and the spin factor drops. This is the eikonal approximation for $`𝐒`$, in which an ordinary quark transmutes to a scalar quark. The present argument applies to any charged particle in a background gluon field, irrespective of its spin or helicity. The only amendments are: for antiquarks the 4-velocity $`u_\mu `$ is reversed in the Wilson line and the color matrices are in the complex representation, while for gluons the Wilson lines are in the adjoint representation. With this in mind, quark-quark scattering can also be extended to quark-antiquark, gluon-gluon or scalar-scalar scattering. We note that for quark-antiquark scattering the elastic amplitude dominates at large $`\sqrt{s}`$ since the annihilation part is down by $`\sqrt{t/s}`$.
Generically, we will refer to elastic parton-parton scattering as
$`Q_A(p_1)+Q_B(p_2)\rightarrow Q_C(k_1)+Q_D(k_2)`$ (3)
with $`s=(p_1+p_2)^2`$, $`t=(p_1-k_1)^2`$, $`s+t+u=4m^2`$. We denote by $`AB`$ and $`CD`$, respectively, the incoming and outgoing color and spin of the quarks (polarization for gluons). Using the eikonal form for (1) and LSZ reduction, the scattering amplitude $`𝒯`$ may be reduced to
$`𝒯_{AB,CD}(s,t)\simeq 2is\int d^2b\,e^{iq_{\perp }\cdot b}<\left(𝐖_1(b)\right)_{AC}\left(𝐖_2(0)\right)_{BD}>_c`$ (4)
where
$`𝐖_{1,2}(z)=𝐏_c\mathrm{exp}\left(ig_{YM}\int _{-\infty }^{+\infty }d\tau \,A(b+u_{1,2}\tau )\cdot u_{1,2}\right).`$ (5)
We are using the normalization $`<𝐖>=1`$, and retaining only the connected part in (4). The normalization can be relaxed if needed. The 2-dimensional integral in (4) is over the impact parameter $`b`$ with $`t=-q_{\perp }^2`$. In the CM frame $`p_1\approx (E,E,0_{\perp })`$, $`p_2\approx (E,-E,0_{\perp })`$, $`q=p_1-k_1\approx (0,0,q_{\perp })`$ and $`s\approx 4E^2`$. The averaging in (4) is over the gauge configurations using the QCD action. The total cross section for $`\sqrt{s}\gg \sqrt{t}>0`$ follows from (4) in the form $`\sigma =\mathrm{Im}\,𝒯/s<\mathrm{ln}^2s`$, where the last inequality is just the Froissart bound.
The amplitude (4) allows for two gauge-invariant decompositions (repeated indices are summed over)
$`𝒯_1=𝒯_{AB,AB}\qquad 𝒯_2=𝒯_{AB,BA}`$ (6)
assuming that the gluon-gauge fields are periodic at the end-points. We note that $`𝒯_2`$ is down by $`1/N`$ in comparison to $`𝒯_1`$. For gluon-gluon scattering the lines are doubled in color space (i.e., adjoint representation) and several gauge-invariant contractions are possible. For quark-quark scattering the singlet exchange in the t-channel is $`0^+`$ (pomeron) while for quark-antiquark it is $`0^{-}`$ (odderon), as the two differ by charge conjugation.
3. In the eikonal approximation the parton-parton scattering amplitude is related to an appropriate correlator of two Wilson lines. The typical duration of these light-like lines is $`T\sim 1/m`$, as we noted above. We now suggest analyzing the gauge-invariant correlators using the AdS/CFT approach for N=4 SYM.
The correlation function in large $`N`$ and strong coupling $`g_{YM}^2N`$ can be obtained from the minimal surface in the five-dimensional AdS space, with the light-like Wilson lines at its boundaries as shown in Fig. 1.
The classical action for a string world-sheet is
$`S=\frac{1}{2\pi \alpha ^{\prime }}\int d\xi \,d\sigma \sqrt{det(G_{MN}\,\partial _\alpha X^M\partial _\beta X^N)}.`$ (7)
The AdS metric $`G_{MN}`$ in Poincaré coordinates is given by
$`ds^2=R^2\frac{-dt^2+dx^2+dy^2+dw^2+dz^2}{z^2},`$ (8)
where $`R=(2g_{YM}^2N\alpha ^{\prime 2})^{1/4}`$ is the radius of the AdS space, with $`2\pi g_{st}=g_{YM}^2`$. The AdS space has a boundary in Minkowski space $`M_4`$ at $`z=0`$. The boundary condition on the string world-sheet is given by the two time-like trajectories
$`x=vt=u\tau ,\;y=0,\;z=0;\text{ and }x=-vt,\;y=b,\;z=0`$ (9)
with 3-velocities $`v=u/\gamma `$ and $`t`$ the real time. (We are using $`t`$ alternatively for the Mandelstam variable and the real time, and $`u`$ alternatively for the Mandelstam variable and the 4-velocity; in each case the meaning should be clear from the text.) The minimal surface associated to (7-9) leads to a set of coupled partial differential equations, which we have not yet managed to solve exactly. Instead, we provide a variational estimate as we now explain.
First, we divide the string world-sheet by constant-time slices, each containing a string connected to two boundary points. Then we assume that for the minimal surface, this string length is minimal. So by finding the minimal length and carrying out the integration over time, we will obtain an approximate minimal area. Specifically, we choose an orthogonal coordinate system $`(\xi ,\sigma )`$ with the property $`\partial _\xi X^\mu \,\partial _\sigma X^\mu =0`$ on the world-sheet. Then,
$`S=\frac{R^2}{2\pi \alpha ^{\prime }}\int d\xi \,d\sigma \sqrt{(\partial _\xi X^\mu )^2}\sqrt{\frac{(\partial _\sigma X^\mu )^2}{z^4}}.`$ (10)
Let
$`l(t):=\int _{L(t)}d\sigma \sqrt{\frac{(\partial _\sigma x)^2+(\partial _\sigma y)^2+(\partial _\sigma w)^2+(\partial _\sigma z)^2}{z^4}}`$ (11)
be the ‘length’ of the string ending on the two receding quarks at the boundary with separation $`L(t)`$. $`l(t)`$ depends on $`t`$ only through $`L(t)(=L(\tau ))`$, so that $`l(t)`$ and $`l(\tau )`$ represent the same quantity. First, we minimize this ‘length’ and then form the area by adding up the areas of the strips between the hyper-planes of time $`t`$ and $`t+dt`$. The height of the strip at the boundary is $`dt\sqrt{1-v^2}=d\tau `$ where $`\tau `$ is the proper time at the boundary. The height of the strip at the central point of the string is given by $`dt\sqrt{1-\dot{z_0}(t)^2}`$ where $`z_0(t)`$ is the maximum value of $`z`$ at fixed $`t`$. Therefore, by the trapezoidal rule, the minimal area is given by
$`A_{min}\simeq \frac{1}{2}\int dt\left(\sqrt{1-v^2}+\sqrt{1-\dot{z_0}^2}\right)l_{min}(t),`$ (12)
where $`l_{min}(t)`$ is the minimal length for a fixed time slice. To summarize: we have replaced $`d\xi \sqrt{(\partial _\xi X^\mu )^2}`$ by $`dt(\sqrt{1-v^2}+\sqrt{1-\dot{z_0}^2})/2`$, which is $`\sigma `$ independent. The latter substitution is made in the CM frame. Lorentz invariance follows by rewriting the results in terms of Lorentz scalars.
The minimal length $`l_{\mathrm{min}}(t)`$ can be found by choosing a coordinate system such that the two quarks are located at $`x=-L(\tau )/2`$ and $`x=+L(\tau )/2`$. Then
$`l(t)=\int _{-L(t)/2}^{+L(t)/2}dx\sqrt{\frac{1+z^{\prime 2}}{z^4}},`$ (13)
where $`L(t)=\sqrt{b^2+4v^2t^2}`$ ($`u\tau =vt`$) is the separation between the two time-like receding partons at the boundary $`z=0`$, and the prime denotes $`d/dx`$. In the instantaneous approximation, the string adjusts instantaneously to the change in the minimal length $`L(t)`$ at the boundary. (This implicit approximation follows from the fact that we have neglected the orthogonality constraint imposed on the coordinate system in the variational estimate; its physical consequence will be addressed below.) It follows that the problem of finding a minimal $`l(t)`$ is almost identical to the problem of finding a static $`q\overline{q}`$ potential. The result for a properly regularized length is
$`l(t)={\displaystyle \frac{c_0}{L(t)}}`$ (14)
with $`c_0=(2\pi )^3/\mathrm{\Gamma }(\frac{1}{4})^4`$ and $`z_0(t)`$ given by
$`z_0(t)=c_1L(t),`$ (15)
where $`c_1=1/\sqrt{c_0}\simeq 0.834`$. Hence, under the instantaneous approximation, the ‘area’ is obtained by integrating the static potential.
$`A_{min}`$ $`\simeq `$ $`\frac{1}{2}\int _{-T}^Tdt\left(\sqrt{1-v^2}+\sqrt{1-\dot{z_0}^2}\right)\frac{c_0}{\sqrt{b^2+4v^2t^2}}`$ (16)
$`=`$ $`\left(\sqrt{1-v^2}+\sqrt{1-(2c_1v)^2}\right)\frac{c_0}{2v}\mathrm{sinh}^{-1}\left(\frac{2vT}{b}\right).`$
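The constants and the time integral entering Eq. (16) are easy to check numerically; the following short sketch does both (the values of $`b`$, $`v`$ and $`T`$ are arbitrary test inputs):

```python
# Numerical check of c0 = (2*pi)^3 / Gamma(1/4)^4, c1 = 1/sqrt(c0), and of
# int_{-T}^{T} dt / sqrt(b^2 + 4 v^2 t^2) = (1/v) * arcsinh(2 v T / b).
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

c0 = (2.0 * np.pi) ** 3 / gamma(0.25) ** 4
c1 = 1.0 / np.sqrt(c0)
print(f"c0 = {c0:.4f}, c1 = {c1:.4f}")            # c1 ~ 0.834, as quoted in the text

b, v, T = 1.0, 0.9, 50.0                            # arbitrary test values
numeric, _ = quad(lambda t: 1.0 / np.sqrt(b ** 2 + 4.0 * v ** 2 * t ** 2), -T, T)
closed = np.arcsinh(2.0 * v * T / b) / v
print(f"time integral: numeric = {numeric:.6f}, closed form = {closed:.6f}")
```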
According to the AdS/CFT correspondence , the connected part of the Wilson-line correlator in N=4 SYM at large $`N`$ and fixed $`g_{YM}^2N`$ is
$`<\mathrm{𝐖𝐖}>_c\simeq \mathrm{exp}(iS)=\mathrm{exp}\left[ic_0\sqrt{2g_{YM}^2N}\left(\sqrt{1-v^2}+\sqrt{1-(2c_1v)^2}\right)\frac{1}{2v}\mathrm{ln}\left(\frac{4vT}{b}\right)\right].`$ (17)
This result may be contrasted with the one-gluon exchange contribution to the connected and untraced correlator (dropping color factors)
$`<\mathrm{𝐖𝐖}>_{1c}\simeq \frac{g_{YM}^2}{4\pi ^2}\int _{-\infty }^{+\infty }d\tau _1d\tau _2\frac{u_1\cdot u_2}{\left((u_1\tau _1-u_2\tau _2)^2+b^2\right)}=i\frac{g_{YM}^2}{4\pi }\left(v+\frac{1}{v}\right)\mathrm{ln}\left(\frac{T}{b}\right).`$ (18)
The natural infrared cutoff in the problem is the mass: $`T=1/\mu \sim 1/m`$. For quarks and gluons, it is simply their constituent mass. (For parallel moving quarks, the result is $`1/b`$ instead of $`\mathrm{ln}b`$. Coulomb’s law is 2-dimensional for non-parallel moving light-like quarks and 3-dimensional for heavy or parallel moving light-like quarks.) In QED, (18) exponentiates with a noticeable difference from (17): the time dilatation factors generated by the string are absent in QED. This very difference will cause the scattering amplitude to reggeize in our case instead of eikonalize, as we will show below.
To restore Lorentz invariance, we can rewrite eq.(17) in terms of the Mandelstam variable $`s`$,
$`<\mathrm{𝐖𝐖}>_c`$ $`\simeq `$ $`\left(\frac{4\sqrt{1-4m^2/s}}{\mu b}\right)^{ic_0\sqrt{2g_{YM}^2N}\left(\frac{m}{\sqrt{s-4m^2}}+\frac{1}{2}\sqrt{\frac{s}{s-4m^2}-(2c_1)^2}\right)}`$ (19)
where we have made the substitutions $`u=\sqrt{\frac{s}{4m^2}-1}`$, $`v=\sqrt{1-\frac{4m^2}{s}}`$ and $`T=1/\mu `$ for the time cutoff. At this stage, several remarks are in order.
* Notice that $`\sqrt{1-(2c_1v)^2}`$ is purely imaginary for $`v>1/2c_1\simeq 0.6`$, resulting in a suppression of the scattering amplitude at large $`\sqrt{s}`$. This happens because the central part of the string moves into the AdS space with a velocity $`2c_1v>1`$, which is classically forbidden by the 5-dimensional kinematics. This is a flaw of the instantaneous approximation we discussed above, which can ultimately be resolved by an analytical or numerical investigation of the exact solution to the minimal surface problem. Here, we observe that this pathology can be removed by physical arguments on the CFT side. Indeed, from (19) we note that the pathological behavior of the string yields a new branch-point singularity in the s-channel, besides the expected free threshold at $`s=4m^2`$. In a non-confining and conformally invariant SYM theory, this is unphysical. This singularity disappears if and only if $`2c_1`$ is renormalized to 1. This means that the exact treatment of the minimal surface should yield a result where the maximum speed of the central point of the string is 1, in accordance with relativity. When this happens,
$`<\mathrm{𝐖𝐖}>_c`$ $`\simeq `$ $`\left(\frac{4\sqrt{1-4m^2/s}}{m\mu b}\right)^{ic_0\sqrt{2g_{YM}^2N}\left(\frac{2m}{\sqrt{s-4m^2}}\right)}`$ (20)
* Eq. (16) might be interpreted to suggest using the proper time $`d\tau =\sqrt{1-v^2}dt`$ instead of the global variable $`t`$ in the final stage of the calculation. It is important to realize that if the $`fixed`$ time cutoff $`T=1/\mu `$ were substituted by a $`fixed`$ proper-time cutoff, then the result would be (20) with the substitution $`1/\mu \rightarrow \sqrt{s}/2m\mu `$. The former ($`t`$) is favored by the string-theory calculation of the minimal surface and by comparison with the well-known eikonal form in the Abelian case, such as QED, as well as with the result of the perturbative calculation, Eq. (18). The latter ($`\tau `$) is favored by manifest Lorentz invariance and the representation (1-2). The situation is such that the string-theory calculation with a time cutoff gives the gauge-theory results of the formalism using the proper time. Fortunately, whether we use the proper-time or the time cutoff, both lead to the same scattering amplitude asymptotically (see below). Note that the elastic amplitude involves typically momenta of order $`\sqrt{t}`$, so that $`b\sim 1/\sqrt{t}`$. As a result, the ‘cross singularity’ of the two Wilson loops at $`b=0`$ is dynamically regulated for fixed Mandelstam variable $`t`$.
* The Minkowski AdS/CFT approach followed here is subtle. For example, the concept of a minimum surface is not well defined in a metric with indefinite signature. A more rigorous treatment is to setup the problem in a metric with an Euclidean signature and then perform a Wick-rotation of the outcome. We have checked that our present answer is unchanged by this procedure. The factor $`i`$ in the exponent of (20) follows from $`1/v_E`$ ($`v_E=dx/dt_E,it=t_E`$), which is real with an Euclidean signature but imaginary with a Minkowski signature.
Using (20), the gauge-invariant combination of the parton-parton scattering amplitude (6) now reads
$`𝒯(s,t)\approx 4\pi \alpha (s){\displaystyle \frac{\mathrm{\Gamma }(1-i\alpha (s))}{\mathrm{\Gamma }(1+i\alpha (s))}}\left({\displaystyle \frac{2s}{t}}\right)\left({\displaystyle \frac{2\sqrt{t}}{\mu }}\right)^{2i\alpha (s)}`$ (21)
for large $`N`$ and fixed $`g_{YM}^2N`$. Here the gamma functions come from $`2\pi {\displaystyle \int _0^{\mathrm{\infty }}}𝑑b^{}b^{1-2i\alpha (s)}J_0(b^{})`$ with $`\alpha (s)`$ given by
$`\alpha (s)=c_0\sqrt{2g_{YM}^2N}\left[{\displaystyle \frac{m}{\sqrt{s-4m^2}}}\right].`$ (22)
A similar behavior follows from a proper time cutoff through the substitution $`1/\mu \to \sqrt{s}/2m\mu `$. The result (21) is reminiscent of the QED result , but with important differences. The amplitude has a nonperturbative dependence on $`g_{YM}^2N`$, much like the static potential . The amplitude reggeizes and unitarizes at large $`\sqrt{s}`$. Indeed, asymptotically ($`s\gg t,m^2`$)
$`𝒯(s,t)\approx 4\pi c_0\sqrt{2g_{YM}^2N}\left({\displaystyle \frac{2s}{t}}\right)\left({\displaystyle \frac{s}{m^2}}\right)^{-0.5}`$ (23)
which is real (no inelasticity), independent of the cutoff $`\mu `$, and with a negative intercept of $`-0.5`$. The zero imaginary part and the nonzero intercept are both tied to the occurrence of a string in the AdS space as is explicit from (17). On the boundary, the receding partons with momenta $`\sqrt{s}`$ define a range in rapidity space of the order of $`\mathrm{ln}s`$. Powers of $`\mathrm{ln}s`$ count the number of ‘gluons’ exchanged in the t-channel. Since (23) can be written as a power series in $`\mathrm{ln}s`$, it contains terms with an infinite number of gluon exchanges. A similar observation was also made by Verlinde and Verlinde in the process of mapping high-energy scattering onto a two-dimensional sigma model.
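To make the asymptotic statement concrete, the following minimal sketch compares the modulus of the amplitude (21) with the Regge-like form (23). All parameter values below ($`c_0`$, $`g_{YM}^2N`$, $`m`$, $`|t|`$) are illustrative assumptions, not values from the text; for real $`\alpha (s)`$ the Gamma-function ratio and the $`\mu `$-dependent factor in (21) are pure phases, so they drop out of the modulus.

```python
import numpy as np

# Illustrative check that |T(s,t)| from eq. (21) approaches the
# Regge-like form (23) at large s.  Parameter values are assumptions.
c0, lam, m, abs_t = 1.0, 20.0, 1.0, 1.0   # lam = g_YM^2 N, |t| (assumed)

def alpha(s):                              # eq. (22)
    return c0 * np.sqrt(2.0 * lam) * m / np.sqrt(s - 4.0 * m**2)

def T_mod(s):                              # modulus of eq. (21): phases drop out
    return 4.0 * np.pi * alpha(s) * (2.0 * s / abs_t)

def T_regge(s):                            # eq. (23), intercept -1/2
    return 4.0 * np.pi * c0 * np.sqrt(2.0 * lam) * (2.0 * s / abs_t) * (s / m**2) ** (-0.5)

for s in (10.0, 1e2, 1e4, 1e6):
    print(f"s = {s:8.1e}   |T|/T_regge = {T_mod(s) / T_regge(s):.6f}")
# the ratio tends to 1 as s grows, illustrating the s^{-1/2} (intercept -0.5) behaviour
```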
Finally, the present arguments also show that at large $`N`$ and strong $`g_{YM}^2N`$, the cross sections for quark-quark and quark-antiquark scattering are the same. The gluon-gluon scattering amplitude could be calculated similarly with the substitution $`N\to N^2`$, due to the adjoint representation of the gluon.
4. We have presented arguments for evaluating the elastic parton-parton scattering amplitude at large $`N`$ and strong $`g_{YM}^2N`$ in N=4 SYM. Although the latter is conformally invariant in $`M_4`$ the appearance of the string picture and the necessity to regulate the elastic contribution in time has led to a reggeized behavior that unitarizes at large $`\sqrt{s}`$. The result cannot be reached by perturbation theory. The nature of the result depends sensitively on the string character of the underlying description and hence is not applicable to Abelian-like theories such as QED.
Our main result follows from a physically motivated variational estimate of the minimal surface in the AdS space. The exact form of the extremal surface is too involved to be written down analytically. Although an exact result would of course be ideal, we do not expect our estimate of the parton-parton cross section to change appreciably. Indeed, the dominant contributions arise from scattering with large impact parameter $`b1/\sqrt{t}`$, for which our approximation for the extremal surface should be legitimate. For a large impact parameter $`b`$, the extremal surface is smoothly twisted and the eikonal approximation is also good.
Finally and from a different standpoint, Verlinde and Verlinde have shown that at large $`\sqrt{s}`$ the elastic amplitudes in QCD follow from a two-dimensional sigma model with conformal symmetry, where the latter is broken when the light-like quark lines are regulated in the time-like direction. Does the large $`N`$ effective action derived by Verlinde and Verlinde map onto an AdS type action? Is there a way to do better than the variational estimate made in the present analysis? How could we recover the true Regge behavior of non-supersymmetric QCD? Some of these questions will be addressed in a forthcoming publication.
Acknowledgments We wish to thank KIAS for their generous hospitality and support during this work. We are grateful to Kyungsik Kang, Taekoon Lee, Maciek Nowak and Martin Rocek for discussions, and Romuald Janik for comments on the manuscript. We are especially thankful to Igor Klebanov for comments and helpful suggestions. The work of IZ was supported in part by US-DOE grant DE-FG-88ER40388 and that of SJS by the program BSRI-98-2441.
Note Added
After posting our paper, a preprint appeared in which a similar approach is suggested for scattering between colorless states at large impact parameters.
# Neutrino mass patterns, 𝑅-parity violating supersymmetry and associated phenomenology
## 1 Introduction
Although various options beyond the standard model (SM) of electroweak interactions have been investigated with great interest for quite some time now, the standard model has faced practically no experimental contradictions in terrestrial experiments so far. In this respect, the observed results on solar and atmospheric neutrinos have a unique role to play, in the sense that their confirmation will require the existence of neutrino masses and mixing, and therefore will take one beyond the jurisdiction of the standard model. It is thus quite natural that the apparent oscillation of the muon neutrinos to another species, inferred with far greater confidence than before from the recent data from the SuperKamiokande (SK) experiment , is being enthusiastically examined for traces of some kind of non-standard physics answering to a neutrino mass pattern of the suggested type. There are, however, a very large number of possibilities to explore, and the credibility of any one of them will depend not only on how well they explain the neutrino data but also on their other testable consequences. In this regard, one must say that the recent developments in neutrino physics have triggered a lot of incisive thinking on other areas of particle phenomenology as well. Here we propose to discuss some such phenomenological issues in the particular context of supersymmetric theories.
Supersymmetry (SUSY) is perhaps the object of the hottest pursuit in terms of physics beyond the SM . Its usefulness in solving the naturalness problem, its tantalisingly spectacular role in achieving the unification of coupling constants, and its almost invariable presence in theories attempting to unify gravity with the other interactions make it an extremely appealing theoretical option. However, there is no concerete experimental evidence in its favour yet. It is therefore quite natural that the possibilities of generating neutrino masses and mixing in a SUSY scenario should be investigated, especially when evidences for the latter are already knocking at our doors.
Neutrino masses will either necessitate the existence of right-handed neutrinos or require violation of lepton number (L) so that Majorana masses are possible. The former possibility entails an augmentation of the particle content of the minimal SUSY standard model (MSSM). The latter one does not require it, but forces one to go beyond the minimal model again, whereby lepton number violation can be allowed in the theory. However, such a violation is inbuilt in those SUSY theories where R-parity, defined as $`R=(1)^{3B+L+2S}`$, is not a conserved quantity anymore . This is quite consistent with the absence of proton decay so long as baryon number (B) is not violated simultaneously, a situation that again may arise in SUSY where there are scalar leptons and baryons and therefore L-violation and B-conservation does not interfere with the gauge current structure of the theory.
In the next section we present a summary on R-parity violating models, with an emphasis on the type which has a key role in our claims, namely, one with R-parity violation through bilinear terms. In the same section we also discuss the generation of neutrino masses in such theories both at the tree-and one-loop levels. Some distinct accelerator signals, of one viable scenario at least, are mentioned in section 3. We conclude in section 4.
## 2 R-parity violation and neutrino mass
The MSSM superpotential is given by
$$W_{MSSM}=\mu \widehat{H}_1\widehat{H}_2+h_{ij}^l\widehat{L}_i\widehat{H}_1\widehat{E}_j^c+h_{ij}^d\widehat{Q}_i\widehat{H}_1\widehat{D}_j^c+h_{ij}^u\widehat{Q}_i\widehat{H}_2\widehat{U}_j^c$$
(1)
where the last three terms give the Yukawa interactions corresponding to the masses of the charged leptons and the down- and up-type quarks, and $`\mu `$ is the Higgsino mass parameter.
When R-parity is violated, the following additional terms can be added to the superpotential:
$$W_{\mathit{}}=\lambda _{ijk}\widehat{L}_i\widehat{L}_j\widehat{E}_k^c+\lambda _{ijk}^{}\widehat{L}_i\widehat{Q}_j\widehat{D}_k^c+\lambda _{ijk}^{\prime \prime }\widehat{U}_i^c\widehat{D}_j^c\widehat{D}_k^c+ϵ_i\widehat{L}_i\widehat{H}_2$$
(2)
with the $`\lambda ^{\prime \prime }`$-terms causing B-violation, and the remaining ones, L-violation. In order to suppress proton decay, it is customary (though not essential) to have one of the two types of nonconservation at a time. In the rest of this article, we will consider only lepton number violating effects.
The $`\lambda `$- and $`\lambda ^{}`$-terms have been widely studied in connection with phenomenological consequences, enabling one to impose various kinds of limits on them . Their contributions to neutrino masses can only be through loops , and their multitude (there are 36 such couplings altogether) makes the necessary adjustments possible for reproducing the requisite values of neutrino masses and mixing angles. We shall come back to these ‘trilinear’ effects later.
More interesting, however, are the three bilinear terms $`ϵ_iL_iH_2`$ . There being only three terms of this type, the model looks simpler and more predictive with them alone as sources of R-parity violation. This is particularly so because the physical effects of the trilinear terms can be generated from the bilinears by going to the appropriate bases. In addition, they have interesting consequences of their own , since terms of the type $`ϵ_iL_iH_2`$ imply mixing between the Higgsinos and the charged leptons and neutrinos. In this discussion, we shall assume, without any loss of generality, the existence of such terms involving only the second and third families of leptons.
In the above scenario, the scalar potential contains the following terms which are bilinear in the scalar fields:
$`V_{\mathrm{scal}}`$ $`=`$ $`m_{L_3}^2\stackrel{~}{L}_3^2+m_{L_2}^2\stackrel{~}{L}_2^2+m_1^2H_1^2+m_2^2H_2^2+B\mu H_1H_2`$ (3)
$`+B_2ϵ_2\stackrel{~}{L}_2H_2+B_3ϵ_3\stackrel{~}{L}_3H_2+\mu ϵ_3\stackrel{~}{L}_3H_1+\mu ϵ_2\stackrel{~}{L}_2H_1+\mathrm{}..`$
where $`m_{L_i}`$ denotes the mass of the ith scalar doublet at the electroweak scale, and $`m_1`$ and $`m_2`$ are the mass parameters corresponding to the two Higgs doublets. $`B`$, $`B_2`$ and $`B_3`$ are soft SUSY-breaking parameters.
An immediate consequence of the additional (L-violating) soft terms in the potential is a set of non-vanishing vacuum expectation values (vev) for the sneutrinos . This gives rise to the mixing of the gauginos with neutrinos (and charged leptons) through the sneutrino-neutrino-neutralino (and sneutrino-charged lepton-chargino) interaction terms.
By virtue of both the types of mixing described above, the hitherto massless neutrino states enter into the neutralino mass matrix. This leads to see-saw masses acquired by them via mixing with massive states. The parameters controlling the neutrino sector in particular and R-parity violating effects in general are the bilinear coefficients $`ϵ_2`$ , $`ϵ_3`$ and the soft parameters $`B_2`$, $`B_3`$. For our purpose, however, it is more convenient to eliminate the latter in favour of the sneutrino vev’s using the conditions of electroweak symmetry breaking .
For a better understanding, let us perform a basis rotation , removing the R-parity violating bilinear terms via a redefinition of the lepton and Higgs superfields. This, however, does not eliminate the effects of the bilinear terms, since they now take refuge in the scalar potential. The sneutrino vev’s in this rotated basis (which are functions of both the $`ϵ`$’s and the soft terms in the original basis) are instrumental in triggering neutrino-neutralino mixing. Consequently, the $`6\times 6`$ neutralino mass matrix in this basis has the following form:
$$=\left(\begin{array}{cccccc}0& \mu & \frac{gv}{\sqrt{2}}& \frac{g^{}v}{\sqrt{2}}& 0& 0\\ \mu & 0& \frac{gv^{}}{\sqrt{2}}& \frac{g^{}v^{}}{\sqrt{2}}& 0& 0\\ \frac{gv}{\sqrt{2}}& \frac{gv^{}}{\sqrt{2}}& M& 0& \frac{gv_3}{\sqrt{2}}& \frac{gv_2}{\sqrt{2}}\\ \frac{g^{}v}{\sqrt{2}}& \frac{g^{}v^{}}{\sqrt{2}}& 0& M^{}& \frac{g^{}v_3}{\sqrt{2}}& \frac{g^{}v_2}{\sqrt{2}}\\ 0& 0& \frac{gv_3}{\sqrt{2}}& \frac{g^{}v_3}{\sqrt{2}}& 0& 0\\ 0& 0& \frac{gv_2}{\sqrt{2}}& \frac{g^{}v_2}{\sqrt{2}}& 0& 0\end{array}\right)$$
(4)
where the successive rows and columns correspond to ($`\stackrel{~}{H}_2,\stackrel{~}{H}_1,i\stackrel{~}{W_3},i\stackrel{~}{B},\nu _\tau ,\nu _\mu `$), $`\nu _\tau `$ and $`\nu _\mu `$ being the neutrino flavour eigenstates in this basis. Also, with the sneutrino vev’s denoted by $`v_2`$ and $`v_3`$,
$$v(v^{})=\sqrt{2}\left(\frac{m_Z^2}{\overline{g}^2}\frac{v_2^2+v_3^2}{2}\right)^{\frac{1}{2}}\mathrm{sin}\beta (\mathrm{cos}\beta )$$
$`M`$ and $`M^{}`$ being the $`\mathrm{SU}(2)`$ and $`\mathrm{U}(1)`$ gaugino mass parameters respectively, and $`\overline{g}=\sqrt{g^2+g_{}^{}{}_{}{}^{2}}.`$
Next, one can define two states $`\nu _3`$ and $`\nu _2`$, where
$$\nu _3=\mathrm{cos}\theta \nu _\tau +\mathrm{sin}\theta \nu _\mu $$
(5)
and $`\nu _2`$ is the orthogonal combination, the neutrino mixing angle being given by
$$\mathrm{cos}\theta =\frac{v_3}{\sqrt{v_2^2+v_3^2}}$$
(6)
Clearly, the state $`\nu _3`$ — which alone develops cross-terms with the massive gaugino states — develops a see-saw type mass at the tree-level. The orthogonal combination $`\nu _2`$ still remains massless.
An approximate expression (neglecting higher order terms in $`m_z/\mu `$) for the tree-level neutrino mass is
$$m_{\nu _3}\simeq \frac{\overline{g}^2(v_2^2+v_3^2)}{2\overline{M}}\times \frac{\overline{M}^2}{MM^{}-m_Z^2(\overline{M}/\mu )\mathrm{sin}2\beta }$$
(7)
where $`\overline{g}^2\overline{M}=g^2M^{}+g^{\prime 2}M`$. The first term is very similar to the usual see-saw formula, the only difference being that the coupling between the light and the heavy states is, in the present case, due to gauge interactions.
The massive state $`\nu _3`$ can be naturally used to account for atmospheric neutrino oscillations, with $`\mathrm{\Delta }m^2=m_{\nu _3}^2.`$ Large angle mixing between the $`\nu _\mu `$ and the $`\nu _\tau `$ corresponds to the situation where $`v_2v_3`$.
The tree-level mass here is clearly controlled by the quantity $`v^{}=\sqrt{v_2^2+v_3^2}`$. This quantity, defined as the ‘effective’ sneutrino vev in the basis where the $`ϵ`$’s are rotated away, can be treated as a basis-independent measure of R-parity violation in such theories. The SK data on atmospheric neutrinos restrict $`v^{}`$ to be on the order of a few hundred keV’s . However, it should be remembered that $`v^{}`$ is a function of $`ϵ_2`$ and $`ϵ_3`$ both of which can still be as large as on the order of the electroweak scale. It has, for example, been shown that in models based on N=1 supergravity, it is possible to have a very small value of $`v^{}`$ starting from large $`ϵ`$’s, provided that one assumes the R-conserving and R-violating soft terms to be of the same order at the scale of dynamical SUSY breaking at a high energy.
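As a rough numerical illustration of eq. (7), the sketch below evaluates the tree-level mass for assumed MSSM parameters (the gaugino masses, $`\mu `$ and $`\mathrm{tan}\beta `$ chosen here are not values from the text) and for $`v^{}`$ of a few hundred keV, showing that the induced $`\mathrm{\Delta }m^2`$ lands in the ballpark probed by SK.

```python
import numpy as np

# Evaluation of the tree-level see-saw formula (7).  The SUSY parameters
# below are illustrative assumptions; g and g' are the usual SU(2) and U(1)
# gauge couplings.
g, gp  = 0.65, 0.35                    # approximate electroweak couplings
mZ     = 91.19                         # GeV
M, Mp  = 200.0, 100.0                  # assumed SU(2) and U(1) gaugino masses [GeV]
mu     = 300.0                         # assumed Higgsino mass parameter [GeV]
tanb   = 2.0
sin2b  = 2.0 * tanb / (1.0 + tanb**2)

gbar2 = g**2 + gp**2
Mbar  = (g**2 * Mp + gp**2 * M) / gbar2        # definition below eq. (7)

def m_nu3(vprime_GeV):                          # eq. (7), in GeV
    seesaw  = gbar2 * vprime_GeV**2 / (2.0 * Mbar)
    factor  = Mbar**2 / (M * Mp - mZ**2 * Mbar / mu * sin2b)
    return seesaw * factor

for vp_keV in (100.0, 200.0, 300.0):
    m = m_nu3(vp_keV * 1e-6) * 1e9              # keV -> GeV, then GeV -> eV
    print(f"v' = {vp_keV:5.0f} keV :  m_nu3 = {m:6.3f} eV,  Dm^2 = {m**2:.1e} eV^2")
# for v' around a couple of hundred keV, Dm^2 falls in the 10^-3 eV^2
# range suggested by the atmospheric data (exact numbers depend on the
# assumed SUSY parameters)
```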
Also, one has to address the question as to whether the treatment of $`\nu _3`$ and $`\nu _2`$ as mass eigenstates is proper, from the viewpoint of the charged lepton mass matrix being diagonal in the basis used above. In fact, it can be shown that this is strictly possible when $`ϵ_2`$ is much smaller than $`ϵ_3`$, failing which one has to give a further basis rotation to define the neutrino mass eigenstates. However, the observable consequences that we describe in the following section are found to be equally valid, with the requirement of being in the neighbourhood of maximality shifted from the angle $`\theta `$ to the effective mixing angle.
Furthermore, a close examination of the scalar potential in such a scenario reveals the possibility of additional mixing among the charged sleptons, whereby flavour-changing neutral currents (FCNC) can be enhanced. It has been concluded after a detailed study that the suppression of FCNC requires one to have the $`ϵ`$-parameters small compared to the MSSM parameter $`\mu `$ (or, in other words, to the electroweak scale) unless there is a hierarchy between $`ϵ_2`$ and $`ϵ_3`$.
However, one still needs to find a mechanism for mass-splitting between the massless state $`\nu _2`$ and the electron neutrino, and to explain the solar neutrino puzzle . This is found to follow naturally if one allows for R-parity (L) violating terms of all types in the superpotential. The existence of the various $`\lambda `$ and $`\lambda ^{}`$-terms will give rise to loop conributions to the neutrino mass matrix. The generic expression for such loop-induced masses is
$$(m_\nu ^{loop})_{ij}\frac{3}{8\pi ^2}m_k^dm_p^dM_{SUSY}\frac{1}{m_{\stackrel{~}{q}}^2}\lambda _{ikp}^{}\lambda _{jpk}^{}+\frac{1}{8\pi ^2}m_k^lm_p^lM_{SUSY}\frac{1}{m_{\stackrel{~}{l}}^2}\lambda _{ikp}\lambda _{jpk}$$
(8)
where $`m^{d,(l)}`$ denote the down-type quark (charged lepton) masses. $`m_{\stackrel{~}{l}}^2`$, $`m_{\stackrel{~}{q}}^2`$ are the slepton and squark mass squared. $`M_{SUSY}(\mu )`$ is the effective scale of supersymmetry breaking. The mass eigenvalues can be obtained by including the above loop contributions in the mass matrix.
If we want the mass thus induced for the second generation neutrino to be the right one to solve the solar neutrino problem, then one obtains some constraint on the values of the $`\lambda ^{}`$s as well as the $`\lambda `$s. In order to generate a splitting between the two residual massless neutrinos, $`\delta m^2\simeq 5\times 10^{-6}\mathrm{eV}^2`$ (which is suggested for an MSW solution ), a SUSY breaking mass of about 500 GeV implies $`\lambda ^{}(\lambda )\sim 10^{-4}`$–$`10^{-5}`$. The mass-squared difference required for a vacuum oscillation solution to the solar puzzle requires even smaller values of $`\lambda ^{}(\lambda )`$.
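For orientation, the following sketch inverts the dominant (b-quark) term of eq. (8) to estimate the size of $`\lambda ^{}`$ needed for the MSW mass splitting; the sparticle masses used are assumptions chosen only for illustration.

```python
import numpy as np

# Order-of-magnitude inversion of the loop formula (8): what lambda' gives
# Dm^2 ~ 5e-6 eV^2?  The b-quark loop (k = p = 3) is assumed to dominate.
m_b    = 4.5       # GeV, down-type quark mass in the loop
M_SUSY = 500.0     # GeV, effective SUSY-breaking scale (assumed)
m_sq   = 500.0     # GeV, squark mass (assumed)

target_m  = np.sqrt(5e-6) * 1e-9                     # sqrt(Dm^2) converted to GeV
coeff     = 3.0 / (8.0 * np.pi**2) * m_b**2 * M_SUSY / m_sq**2
lam_prime = np.sqrt(target_m / coeff)

print(f"lambda' ~ {lam_prime:.1e}")
# comes out at a few times 10^-5, consistent with the 10^-4 - 10^-5 range quoted above
```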
## 3 Phenomenological consequences
As we have observed before, the SK data imply a constraint on the basis-independent parameter $`v^{}`$. The allowed range of neutrino mass-squared difference from the SK data, combining the fully contained events, partially contained events and upward-going muons, is about $`1.5`$–$`6.0\times 10^{-3}\mathrm{eV}^2`$ at 90% C.L. For the lightest neutralino mass varying between 50 and 200 GeV, this constrains $`v^{}`$ to lie in the approximate range $`0.0001`$–$`0.0003`$ GeV.
The experimentally observed signals characteristic of the scenario described above should naturally be associated with decays of the lightest neutralino, since that is a process where contributions from R-parity violating effects will not face any competition from MSSM processes.
In the presence of only the trilinear R-violating terms in the superpotential, the lightest neutralino can have various three-body decay modes which can be generically described by $`\chi ^0\to \nu f\overline{f}`$ and $`\chi ^0\to lf_1\overline{f_2}`$, $`f`$, $`f_1`$ and $`f_2`$ being different quark and lepton flavours that are kinematically allowed in the final state.
We have already seen that an important consequence of the bilinears is a mixing between neutrinos and neutralinos as also between charged leptons and charginos. This opens up additional decay channels for the lightest neutralino, namely, $`\chi ^0\to lW`$ and $`\chi ^0\to \nu Z`$. When the neutralino is heavier than at least the W, these two-body channels dominate over the three-body ones over a large region of the parameter space, the effect of which can be observed in colliders such as the upgraded Tevatron, the LHC and the projected high-energy electron-positron collider. Different observables related to these decays have been studied in recent times .
Here we would like to stress upon one distinctive feature of the scenario that purportedly explains the SK results with the help of bilinear R-parity violating terms. It has been found that over almost the entire allowed range of the parameter space in this connection, the lightest neutralino is dominated by the Bino. A glance at the neutralino mass matrix reveals that decays of the neutralino ($`\approx `$ Bino) in such a case should be determined by the coupling of different candidate fermionic fields in the final state with the massive neutrino field $`\nu _3`$ which has a cross-term with the Bino. Large angle neutrino mixing, on the other hand, implies that $`\nu _3`$ should have comparable strengths of coupling with the muon and the tau. Thus, a necessary consequence of the above type of explanation of the SK results should be comparable numbers of muons and tau’s emerging from decays of the lightest neutralino, together with a $`W`$-boson in each case . Such signals, particularly those in the form of muons from two-body decays of the lightest neutralino, should distinguish such a scenario. For further details including plots of the branching ratios, the reader is referred to .
Of course, the event rates in the channel mentioned aboe will depend on whether the two-body decays mentioned above indeed dominate over the three-body decays. The latter are controlled by the size of the $`\lambda `$-and $`\lambda ^{}`$-parameters. It has been found that if in this case these parameters have to be of the right orders of magnitude to explain the mass-splitting required by the solar neutrino deficit, then, even for the MSW case, the decay widths driven by the trilinear term are smaller than thsoe for the two-body decays by at least an order of magnitude. For vacuum oscillation, the three body decays turn out to be even smaller. Thus the prediction of comparable numbers of muons and tau’s seem to be quite robust so long as the two-body neutralino decays are kinematically allowed.
The other important consequence of this picture is a large decay length for the lightest neutralino. We have already mentioned that the atmospheric neutrino results restrict the basis-independent R-violating parameter $`v^{}`$ to the rather small value of a few hundred keV’s. This value affects the mixing angle involved in calculating the decay width of the neutralino, which in turn is given by the formula
$$L=\frac{\mathrm{}}{\mathrm{\Gamma }}\times \frac{p}{M(\stackrel{~}{\chi }_1^0)}$$
(9)
where $`\mathrm{\Gamma }`$ is the decay width of the lightest neutralino and $`p`$, its momentum. As can be seen from figure 2 in reference , the decay length decreases for higher neutrino masses, as a result of the enhanced probability of the flip between the Bino and a neutrino, when the LSP is dominated by the Bino. Also, a relatively massive neutralino decays faster and hence has a smaller decay length. The interesting fact here is that even for a neutralino as massive as 250 GeV, the decay length is as large as about 0.1 to 10 millimeters. This clearly will leave a measurable decay gap, which unmistakably characterises the theoretical construction under investigation here .
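A minimal sketch of the unit conversion behind eq. (9): for assumed (illustrative) decay widths and boost factor, it translates $`\mathrm{\Gamma }`$ into a laboratory decay length.

```python
# Convert a neutralino decay width into a laboratory decay length via eq. (9),
# L = (hbar / Gamma) * (p / M).  The widths and the boost factor below are
# purely illustrative assumptions.
HBARC = 1.9733e-16                  # GeV * m

p_over_M = 1.0                      # assumed boost factor of the produced neutralino
for Gamma in (1e-12, 1e-13, 1e-14):             # GeV, assumed widths
    L = HBARC / Gamma * p_over_M                # metres
    print(f"Gamma = {Gamma:.0e} GeV  ->  decay length L = {L*1e3:.2f} mm")
# widths between roughly 1e-12 and 1e-14 GeV give decay gaps from a fraction of
# a millimetre up to the centimetre scale, the regime of the 0.1-10 mm gaps
# mentioned in the text
```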
If the lightest neutralino can have two-body charged current decays, then the Majorana character of the latter also leads to the possibility of like-sign dimuons and ditaus from pair-produced neutralinos. Modulo the efficiency of simultaneous identification of W-pairs, these like-sign dileptons can also be quite useful in verifying the type of theory discussed here.
## 4 Summary and conclusions
We have demonstrated that it is possible to explain both the atmospheric and solar neutrino deficits in a SUSY model with R-parity violation inbuilt in it. An important role is played by the bilinear R-violating terms in the superpotential, whereby a tree-level mass for one neutrino can be generated via mixing with neutralinos. The mass-squared difference expected from the atmospheric muon neutrino deficiency (for $`\nu _\mu `$–$`\nu _\tau `$ oscillation) constrains the basis-independent parameter characterising R-parity violation in the neutrino-neutralino and lepton-chargino sectors. Side by side, the existence of trilinear lepton number violating terms in the superpotential can give rise to a mass-splitting between the two remaining neutrinos and thus account for the solar neutrino deficit. The values of the trilinear parameters required for this imply that the lightest neutralino should dominantly decay in two-body channels if it is heavier than the W-boson. Maximal mixing, as required by the SuperKamiokande data, implies that comparable numbers of muons and tau’s should be seen in charged current decays of the neutralino when the two-body decays are kinematically allowed. In addition, the magnitudes of the R-parity violating parameters required by the atmospheric neutrino data cause the neutralinos to have large decay lengths, and therefore lead to displaced vertices in SUSY search experiments. Thus R-parity violating SUSY lends itself as a viable mechanism for generating the expected neutrino masses and mixing patterns, with verifiable (or falsifiable) consequences in collider experiments.
### Acknowledgements:
I am grateful to my collaborators, Asesh K. Datta, Sourov Roy and Francesco Vissani, from whom I have learnt most of what has been discussed above. I also acknowledge useful discussions with Anjan Joshipura. Finally, I thank the organisers of the Discussion Meeting on Neutrino Physics at the Physical Research Laboratory, Ahmedabad, for giving me an opportunity to present my views on this subject.
# Back-to-Back Correlations for Bosons Modified by Medium
(DPNU-99-23, KFKI-1999-03/A) This research was partly supported by the US - Hungarian Joint Fund, the Hungarian OTKA grant T024094, an OTKA - NWO grant, the U.S. Department of Energy contracts No. DE-FG02-93ER40764, DE-FG-02-92-ER40699 and DE-AC02-76CH00016 and the Grant-in-Aid for Scientific Research No. 10740112 of the Japanese Ministry of Education, Science and Culture.
## 1 INTRODUCTION
In this paper we consider the effect of possible mass shifts in the dense medium on two boson correlations in general. Thus far medium modifications of hadron masses have been mainly considered in terms of effects on such observables as dilepton yields and spectra. Hadron mass shifts are caused by interactions in a dense medium and therefore vanish on the freeze-out surface. Thus, a naive first expectation is that in medium hadron modifications may have little or no effect on two boson correlations, and so the usual Hanbury-Brown Twiss (HBT) effect has been expected to be only concerned with the geometry and matter flow gradients on the freeze-out surface. However, in this paper we show that an interesting quantum mechanical correlation is induced due to the fact that medium modified bosons can be represented in terms of two-mode squeezed states of the asymptotic bosons, which are observables.
In this paper we assume the validity of relativistic hydrodynamics up to freeze-out. The local temperature $`T(x)`$ and chemical potential $`\mu (x)`$ are given. In relativistic heavy ion collisions at CERN SPS, it has been observed that the one particle spectra , simultaneously with the two particle correlation function, can be described by (local) thermal distributions fairly precisely . We assume that the sudden approximation is a valid abstraction in describing the freeze-out process in relativistic heavy ion collisions quantum mechanically and that there exists an abrupt freeze-out surface, $`\mathrm{\Sigma }^\mu (x)`$.
Let us consider the following model Hamiltonian for a scalar field $`\varphi (𝐱)`$ in the rest frame of matter,
$$H=H_0-\frac{1}{2}\int 𝑑𝐱𝑑𝐲\varphi (𝐱)\delta M^2(𝐱-𝐲)\varphi (𝐲),H_0=\frac{1}{2}\int 𝑑𝐱\left(\dot{\varphi }^2+|\nabla \varphi |^2+m_0^2\varphi ^2\right),$$
(1)
where $`H_0`$ is the asymptotic Hamiltonian. The field $`\varphi (𝐱)`$ in $`H`$ corresponds to quasi-particles that propagate with a momentum-dependent effective mass, which is related to the vacuum mass, $`m_0`$, via $`m_{}^2(|𝐤|)=m_0^2-\delta M^2(|𝐤|)`$. The mass-shift is assumed to be limited to long wavelength collective modes.
The invariant single-particle and two-particle momentum distributions are given by
$`N_1(𝐤_1)`$ $`=`$ $`\omega _{𝐤_1}{\displaystyle \frac{d^3N}{d𝐤_1}}=\omega _{𝐤_1}a_{𝐤_1}^{}a_{𝐤_1}^{},`$ (2)
$`N_2(𝐤_1,𝐤_2)`$ $`=`$ $`\omega _{𝐤_1}\omega _{𝐤_2}a_{𝐤_1}^{}a_{𝐤_2}^{}a_{𝐤_2}^{}a_{𝐤_1}^{}`$ (3)
$`=`$ $`\omega _{𝐤_1}\omega _{𝐤_2}(a_{𝐤_1}^{}a_{𝐤_1}^{}a_{𝐤_2}^{}a_{𝐤_2}^{}+a_{𝐤_1}^{}a_{𝐤_2}^{}a_{𝐤_2}^{}a_{𝐤_1}^{}+a_{𝐤_1}^{}a_{𝐤_2}^{}a_{𝐤_2}^{}a_{𝐤_1}^{}),`$
where $`a_𝐤`$ is the annihilation operator for the asymptotic quantum with four-momentum $`k^\mu =(\omega _𝐤,𝐤)`$, $`\omega _𝐤^2=m_0^2+𝐤^2`$ and the expectation value of an operator $`\widehat{O}`$ is given by the density matrix $`\widehat{\rho }`$ as $`\widehat{O}=\mathrm{Tr}\widehat{\rho }\widehat{O}`$.
We introduce the chaotic and squeezed amplitudes, defined, respectively, as
$$G_c(1,2)=\sqrt{\omega _{𝐤_1}\omega _{𝐤_2}}\langle a_{𝐤_1}^{\mathrm{\dagger }}a_{𝐤_2}\rangle ,G_s(1,2)=\sqrt{\omega _{𝐤_1}\omega _{𝐤_2}}\langle a_{𝐤_1}a_{𝐤_2}\rangle .$$
(4)
In most situations, the chaotic amplitude, $`G_c(1,2)G(1,2)`$ is dominant, and carries the Bose-Einstein correlations, while the squeezed amplitude, $`G_s(1,2)`$ vanishes:
$$C_2(𝐤_1,𝐤_2)=\frac{N_2(𝐤_1,𝐤_2)}{N_1(𝐤_1)N_1(𝐤_2)}=1+\frac{|G(1,2)|^2}{G(1,1)G(2,2)}.$$
(5)
The exact value of the intercept, $`C_2(𝐤,𝐤)=2`$, is a characteristic signature of a chaotic Bose gas without dynamical 2-body correlations.
## 2 RESULTS FOR A HOMOGENEOUS SYSTEM
The terms neglected in (5) involving $`G_s(1,2)`$ become non-negligible when $`\delta M^2(|𝐤|)0`$. Given such a mass shift, the dispersion relation is modified to $`\mathrm{\Omega }_𝐤^2=\omega _𝐤^2\delta M^2(|𝐤|)`$, where $`\mathrm{\Omega }_𝐤`$ is the frequency of the in-medium mode with momentum $`𝐤`$. The annihilation operator for the in-medium quasi-particle with momentum $`𝐤`$, $`b_𝐤`$, and that of the asymptotic field, $`a_𝐤`$, are related by a Bogoliubov transformation :
$$a_{𝐤_1}^{}=U^{}b_{𝐤_1}^{}U=c_{𝐤_1}^{}b_{𝐤_1}^{}+s_{𝐤_1}^{}b_{𝐤_1}^{}C_1^{}+S_1^{},$$
(6)
where $`c_𝐤=\mathrm{cosh}[r_𝐤]`$, $`s_𝐤=\mathrm{sinh}[r_𝐤]`$, and $`r_𝐤=\frac{1}{2}\mathrm{log}(\omega _𝐤/\mathrm{\Omega }_𝐤)`$. As is well-known, the Bogoliubov transformation is equivalent to a squeezing operation, and so we call $`r_𝐤`$ the mode dependent squeezing parameter. While it is the $`a`$-quanta that are observed, it is the $`b`$-quanta that are thermalized in medium. Thus, we consider the thermal average for a globally thermalized gas of the $`b`$-quanta, that is homogeneous in volume $`V`$:
$$\widehat{\rho }=\frac{1}{Z}\mathrm{exp}\left(-\frac{1}{T}\frac{V}{(2\pi )^3}\int 𝑑𝐤\mathrm{\Omega }_𝐤b_𝐤^{\mathrm{\dagger }}b_𝐤\right).$$
(7)
When this thermal average is applied,
$`N_1(𝐤)`$ $`=`$ $`{\displaystyle \frac{V}{(2\pi )^3}}\omega _𝐤n_1(𝐤),n_1(𝐤)=|c_𝐤^{}|^2n_𝐤^{}+|s_𝐤|^2(n_𝐤+1),n_𝐤={\displaystyle \frac{1}{\mathrm{exp}(\mathrm{\Omega }_𝐤/T)1}},`$
$`G_c(1,2)`$ $`=`$ $`\sqrt{\omega _{𝐤_1}\omega _{𝐤_2}}\left[C_1^{}C_2^{}+S_1^{}S_2^{}\right],G_s(1,2)=\sqrt{\omega _{𝐤_1}\omega _{𝐤_2}}[S_1^{}C_2^{}+C_1^{}S_2^{}].`$ (8)
In the homogeneous case, the resulting two particle correlation function is unity except for the parallel and anti-parallel cases. The dynamical correlation due to the two mode squeezing associated with mass shifts is therefore back-to-back, as first pointed out in . The HBT correlation intercept remains 2 for identical momenta. Evaluating $`C_2(𝐤,-𝐤)`$ for $`T=140`$ MeV, $`|𝐤|=0`$ – 500 MeV for $`\varphi `$ mesons, as a function of $`m_\varphi ^{}`$, one finds back-to-back correlations (BBC) as big as 100 – 1000 for reasonable values of $`m_\varphi ^{}`$. Note that these novel BBCs are not bounded from above, $`1<C_2(𝐤,-𝐤)<\mathrm{\infty }`$. With increasing values of $`|𝐤|`$ these BBCs increase indefinitely. The huge BBC of the decaying medium is, however, reduced if the decay of the medium is not completely sudden. To describe a more gradual freeze-out, the probability distribution $`F(t_i)`$ of the decay times $`t_i`$ is introduced. The time evolution of $`a_𝐤(t)`$ is $`a_𝐤(t)=a_𝐤(t_i)\mathrm{exp}[i\omega _𝐤(t-t_i)]`$, which leads to
$$C_2(𝐤,-𝐤)=1+\frac{|c_𝐤^{}s_𝐤^{}n_𝐤^{}+c_𝐤^{}s_𝐤^{}(n_𝐤^{}+1)|^2}{n_1(𝐤)n_1(-𝐤)}\left|\int 𝑑tF(t)\mathrm{exp}\left[i(\omega _𝐤+\omega _{-𝐤})t\right]\right|^2.$$
(9)
For a typical exponential decay, $`F(t)=\theta (tt_0)\mathrm{\Gamma }\mathrm{exp}[\mathrm{\Gamma }(tt_0)]`$ with $`\delta t=\mathrm{}/\mathrm{\Gamma }=2`$ fm/c, we show the BBC for the $`\varphi `$ mesons in Fig. 1, which shows that the BBC survives the suppression with a strength as large as 2-3.
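The magnitudes quoted above can be reproduced with a short numerical sketch of eqs. (8)-(9) at the back-to-back point; the in-medium $`\varphi `$ mass, the temperature and the decay duration used below are assumptions chosen only for illustration.

```python
import numpy as np

# Back-to-back correlation strength for phi mesons (homogeneous case),
# following eqs. (8) and (9).  m_star, T and dt below are assumed values.
HBARC  = 197.327            # MeV fm
m0     = 1019.4             # vacuum phi mass [MeV]
m_star = 950.0              # assumed in-medium mass [MeV]
T      = 140.0              # MeV
dt     = 2.0                # fm/c, assumed decay duration

def C2_back_to_back(k):
    w  = np.sqrt(m0**2 + k**2)                        # omega_k
    W  = np.sqrt(w**2 - (m0**2 - m_star**2))          # Omega_k (in-medium)
    r  = 0.5 * np.log(w / W)                          # squeezing parameter
    c, s = np.cosh(r), np.sinh(r)
    n  = 1.0 / (np.exp(W / T) - 1.0)
    n1 = c**2 * n + s**2 * (n + 1.0)                  # single-particle spectrum
    bbc = (c * s * (2.0 * n + 1.0))**2 / n1**2        # sudden-approximation BBC
    gamma = HBARC / dt                                # MeV
    suppression = 1.0 / (1.0 + (2.0 * w / gamma)**2)  # exponential decay, eq. (9)
    return 1.0 + bbc, 1.0 + bbc * suppression

for k in (0.0, 300.0, 500.0):
    sudden, finite = C2_back_to_back(k)
    print(f"k = {k:5.0f} MeV :  sudden C2 = {sudden:7.1f},  with dt = 2 fm/c: {finite:.2f}")
# the sudden-approximation BBC is O(100-1000); the finite decay time reduces it
# to an enhancement of order a few, in the ballpark of Fig. 1 (the exact value
# depends on the assumed mass shift)
```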
## 3 RESULTS FOR INHOMOGENEOUS SYSTEMS
Following Ref., we divide the inhomogeneous fluid into cells labeled $`i`$. In each cell, it is assumed that the field can be expanded with creation and annihilation operators, and that $`H_i`$ is diagonalized by a local Bogoliubov transformation, that implies
$`G_c(1,2)`$ $`=`$ $`{\displaystyle \frac{1}{(2\pi )^3}}{\displaystyle d^4\sigma _\mu (x)K_{12}^\mu e^{iq_{12}x}\left[|c_{1,2}|^2n_{1,2}+|s_{1,2}|^2(n_{1,2}+1)\right]},`$ (10)
$`G_s(1,2)`$ $`=`$ $`{\displaystyle \frac{1}{(2\pi )^3}}{\displaystyle d^4\sigma _\mu (x)K_{12}^\mu e^{i2K_{12}x}\left[s_{1,2}^{}c_{2,1}n_{1,2}+c_{1,2}s_{2,1}^{}(n_{1,2}+1)\right]}.`$ (11)
Here $`d^4\sigma ^\mu (x)=d^3\mathrm{\Sigma }^\mu (x;\tau _f)F(\tau _f)d\tau _f`$ is the product of the normal-oriented volume element depending parametrically on $`\tau _f`$ (the freeze-out hypersurface parameter) and the invariant distribution of that parameter $`F(\tau _f)`$. The other variables are defined as follows:
$`n_{i,j}(x)`$ $`=`$ $`1/\left[\mathrm{exp}[(K_{i,j}^\mu (x)u_\mu (x)\mu (x))/T(x)]1\right],`$ (12)
$`r(i,j,x)`$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{log}\left[(K_{i,j}^\mu (x)u_\mu (x))/(K_{i,j}^\mu (x)u_\mu (x))\right],`$ (13)
$`c_{i,j}`$ $`=`$ $`\mathrm{cosh}[r(i,j,x)],s_{i,j}=\mathrm{sinh}[r(i,j,x)],`$ (14)
where $`i,j=\pm 1,\pm 2`$ and the mean and the relative momenta for the $`a`$($`b`$)-quanta are defined as $`K_{i,j}^{()\mu }(x)=[k_i^{()\mu }(x)+k_j^{()\mu }(x)]/2`$ and $`q_{i,j}^\mu =k_i^\mu k_j^\mu ,`$ respectively. The local hydrodynamical flow field is denoted by $`u^\mu (x)`$. See ref. for further details. The correlation function in the presence of local squeezing is given by
$$C_2(𝐤_1,𝐤_2)=1+\frac{|G_c(1,2)|^2}{G_c(1,1)G_c(2,2)}+\frac{|G_s(1,2)|^2}{G_c(1,1)G_c(2,2)}.$$
(15)
As the Bogoliubov transformation always mixes particles with anti-particles, the above expression holds only for the case where particles are equivalent to their anti-particles, e.g. the $`\varphi `$ meson and $`\pi ^0`$. However, the extension to the case where particles and their anti-particles are different is straightforward; correlations between particles and anti-particles such as $`\pi ^+`$ and $`\pi ^{}`$, $`K^+`$ and $`K^{}`$, and so forth, appear . Fig. 2 illustrates the novel character of BBC for two identical bosons caused by medium mass-modifications, along with the familiar Bose-Einstein or HBT correlations on the diagonal of the $`(𝐤_1,𝐤_2)`$ plane.
## 4 SUMMARY
The theory of particle correlations and spectra for bosons with in-medium mass-shifts predicts the existence of back-to-back correlations of $`\varphi \varphi `$, $`K^+K^{}`$, $`\pi ^0\pi ^0`$ and $`\pi ^+\pi ^{}`$ pairs that could be searched for at CERN SPS and upcoming RHIC BNL heavy ion experiments . Surprisingly, such novel back-to-back correlations could be as large as the well-known HBT correlations, surviving large finite time suppression factors.
# The union of unit balls has quadratic complexity, even if they all contain the origin
## 1 Introduction
The union of a set of $`n`$ balls in $`\mathbb{R}^3`$ has quadratic complexity $`\mathrm{\Theta }(n^2)`$, even if they all have the same radius. All the previously known constructions have balls scattered around, however, and Sharir posed the problem of whether a quadratic complexity could be achieved if all the balls (of the same radius) contained the origin.
In this note, we show a construction of $`n`$ unit balls, all containing the origin, whose union has complexity $`\mathrm{\Theta }(n^2)`$. We observe that the centers are arbitrarily close to the origin in our construction. In fact, if the centers are forced to be pairwise at least $`\epsilon `$ apart, for some constant $`\epsilon >0`$, then no more than $`O(\frac{1}{\epsilon ^3})`$ balls can meet in a single point, and hence the union has complexity at most $`O(\frac{1}{\epsilon ^3}n)=O_\epsilon (n)`$. It is an interesting open question what condition would ensure that the union has subquadratic complexity while the balls still have arbitrarily close centers.
By contrast, the intersection of $`n`$ balls can have quadratic complexity if their radii are not constrained, but the complexity is linear if all the radii are the same . Similarly, the convex hull of $`n`$ balls can have also quadratic complexity , but that complexity is linear if they all have the same radius.
## 2 Construction
Let $`m`$ and $`k`$ be any integers. We define two families of unit balls: the first consists of $`k`$ unit balls whose centers lie on a small vertical segment; the second consists of $`m`$ unit balls whose centers lie on a small circle under the segment. (See Figure 3.) We show below that their union has quadratic $`O(km)`$ complexity.
### The balls $`B_1`$$`B_k`$.
We denote by $`B(p,r)`$ the ball centered at $`p`$ and of radius $`r`$. Let $`n=k+m`$ and $`P_i`$ denote the point of coordinates $`(0,0,(i1)/n^4)`$, and $`B_i=B(P_i,1)`$, for $`i=1,\mathrm{},k`$. It is clear that the boundary of $`_{1ik}B_i`$ consists of two hemispheres belonging to $`B_1`$ and $`B_k`$ linked by a narrow cylinder of height less than $`k/n^41/n^3`$. This cylinder contains all the circles $`B_iB_{i+1}`$ for $`i=1,\mathrm{},k1`$. (See Figure 1.)
### The balls $`B_{k+1}`$$`B_{k+m}`$.
Let $`R`$ be the point of coordinates $`(x,0,z)`$ with
$$x=\frac{2n^2-4}{n^4},z=\frac{2n^2-4}{n^3}.$$
(Any values satisfying the constraints $`P_kH<1`$ in (1) and $`\ell <\frac{2}{n}`$ in (2) below would do.) We define $`\theta `$ as the rotation around the $`z`$-axis of angle $`2\pi /m`$, and for each $`j=1,\mathrm{\dots },m`$, $`R_{k+j}=\theta ^{j-1}(R)`$ and $`B_{k+j}=B(R_{k+j},1)`$.
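For readers who want to experiment with the construction, the following sketch generates the ball centres; the even split $`k=m=n/2`$ is an assumption made only for the example (the construction works for any $`k+m=n`$).

```python
import numpy as np

# Generate the centres of the construction: k unit balls stacked on a short
# vertical segment and m unit balls on a small circle around the z-axis.
def centres(n):
    k = m = n // 2                      # assumed even split (any k + m = n works)
    P = [(0.0, 0.0, (i - 1) / n**4) for i in range(1, k + 1)]       # B_1 .. B_k
    x = (2 * n**2 - 4) / n**4
    z = (2 * n**2 - 4) / n**3
    R = []
    for j in range(m):                                              # B_{k+1} .. B_{k+m}
        phi = 2.0 * np.pi * j / m
        R.append((x * np.cos(phi), x * np.sin(phi), z))             # theta^j applied to R
    return P, R

P, R = centres(20)
print(len(P), "stacked centres,", len(R), "rotated centres")
print("example rotated centre:", R[1])
# all centres lie within distance ~2/n of the origin, so every unit ball contains it
```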
## 3 Analysis
By our choice of $`x`$ and $`z`$, we prove below that the boundaries of $`B_{k+1}`$ and of the union $`_{i=1}^kB_i`$ depicted in Figure 1 meet along a curve $`\gamma `$ which satisfies the two claims below. The situation is depicted on Figure 2.
###### Claim 1
The curve $`\gamma `$ intersects all the balls $`B_i`$ for $`i=1,\mathrm{\dots },k`$.
###### Claim 2
The portion of $`\gamma `$ which does not belong to $`B_1`$ (equivalently, which belongs to the union $`_{i=2}^kB_i`$) is contained in an angular sector of angle at most $`2\pi /m`$.
From claim 1, we conclude that the portion of $`\gamma `$ which does not belong to $`B_1`$ has complexity $`\mathrm{\Omega }(k)`$. From claim 2, that portion is contained in a small angular sector, hence appears completely on the boundary of the union of the $`n=k+m`$ balls, and it is replicated $`m`$ times, once for each of the balls $`B_{k+j}`$, $`j=1,\mathrm{\dots },m`$. It follows that the union of all the balls $`B_i`$ for $`i=1,\mathrm{\dots },k+m`$ has quadratic complexity $`\mathrm{\Omega }(km)`$. Moreover, all the balls contain the origin. The union of the $`n`$ balls is depicted on Figure 3.
The proofs involve only elementary geometry and trigonometry. The situation is depicted in Figure 4 and 5. Figure 4 depicts a section in the $`xz`$-plane of the spheres $`B_i`$ and $`B_{k+1}`$ and the point $`M`$, the highest point of intersection of the bounding spheres. The point $`M`$ is also depicted on Figure 2.
### Proof of Claim 1.
It suffices to prove that $`M`$ is higher than $`P_k`$, since then $`\gamma `$ extends higher than $`P_k`$ as well and passes through $`M`$ by symmetry. The lowest point of $`\gamma `$ belongs to $`B_1`$ and is clearly below the origin. The two facts together prove that $`\gamma `$ must intersect all the balls between $`B_1`$ and $`B_k`$.
Let $`H`$ be the point in the $`xz`$-plane on the median bisector of $`R`$ and $`P_k`$, with same $`z`$-ordinate as $`P_k`$. (See Figure 4.) In order to prove that $`M`$ is higher than $`P_k`$, it suffices to prove that $`H`$ belongs to $`B_k`$, since then $`M`$ is farther along the bisector. The two triangles $`QP_kH`$ and $`KRP_k`$ have equal angles, hence they are similar. It follows that
$$P_kH=P_kR\frac{P_kQ}{RK}=\frac{P_kR^2}{2RK}=\frac{x^2+\left(z-z_k\right)^2}{2x},$$
(1)
where $`z_k=\frac{k-1}{n^4}`$. For $`x`$ and $`z`$ as given in the construction, we have
$$P_kH=1/16\frac{40n^415n^2+68+16n^616n^3+28n}{n^4\left(n^22\right)}$$
which is smaller than 1 for $`n2`$.
### Proof of Claim 2.
It is easy to see that the intersection of $`\gamma `$ and a ball $`B_i`$ ($`2ik`$) consists of at most two arcs of circle, any of which is monotone in angular coordinates around the $`z`$-axis, and that any such arc is entirely above the plane $`z=0`$. Hence the intersections of $`\gamma `$ with the $`xy`$-plane belong to $`B_1`$ and $`B_{k+1}`$. It suffices to show that these intersections are at a distance $`\mathrm{}`$ at most $`\frac{2}{n}\mathrm{sin}\frac{\pi }{m}`$ from the $`x`$-axis. (See Figure 5.)
In the $`xy`$-plane section, $`B_1`$ is a unit circle, and $`B_{k+1}`$ is a circle of radius $`r=\sqrt{1z^2}`$ and center $`R^{}`$ of coordinates $`(x,0)`$. (Recall that the center of $`B_{k+1}`$ has coordinates $`(x,0,z)`$.) Hence $`\mathrm{}`$ is the height of a triangle with base $`x`$ and sides 1 and $`r<1`$. It is elementary to compute that
$$\ell =\sqrt{1-\left(\frac{z^2+x^2}{2x}\right)^2}.$$
(2)
For our choice of $`x`$ and $`z`$, this yields
$$\ell =\sqrt{\frac{2n^6+3n^4-4n^2-4}{n^8}}$$
which is smaller than $`2/n`$ for $`n\ge 2`$.
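The two inequalities can also be checked numerically from the defining formulas (1) and (2); the short sketch below does so for a few values of $`n`$ (taking $`z_k=0`$ gives an upper bound on $`P_kH`$).

```python
import numpy as np

# Sanity check of the constraints P_kH < 1 (eq. (1), with z_k = 0 as an
# upper bound) and l < 2/n (eq. (2)) for the chosen x and z.
for n in (4, 8, 16, 64, 256):
    x = (2 * n**2 - 4) / n**4
    z = (2 * n**2 - 4) / n**3
    PkH = (x**2 + z**2) / (2 * x)                          # bound on eq. (1)
    ell = np.sqrt(1.0 - ((z**2 + x**2) / (2 * x))**2)      # eq. (2)
    print(f"n = {n:4d}:  P_kH <= {PkH:.4f} (< 1),   l = {ell:.4f} (< 2/n = {2/n:.4f})")
```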
### Acknowledgments.
Thanks to Micha Sharir for pointing out the problem to us. It was also pointed out that Alon Efrat might have a construction which leads to a quadratic lower bound as well. We have derived our construction independently.
# Monte Carlo procedure for protein folding in lattice models. Conformational rigidity.
Proteins are heteropolymers that exhibit surprising thermodynamic and kinetic properties. The first aspect is that the lowest free energy conformation of a protein is assumed to be the unique native structure and to be thermodynamically stable . A major challenge in theoretical protein folding is to understand the second aspect or, in other words, how a protein finds its native structure in biologically reasonable times under physiological conditions . The lattice model is one class of models that is used to study theoretically the folding of proteins, and Monte Carlo (MC) algorithms are widely used to study dynamics .
In this Letter, we show that the commonly used MC procedure converges poorly towards thermal equilibrium. An attempt to refine the procedure has been recently proposed by Cieplak et al. , but even if this procedure converges towards equilibrium, the parameters of the Arrhenius law that they found disagree with the value of the main potential barrier obtained independently by a study of the phase space of the systems. We introduce, here, a more rigorous treatment of the dynamics. Our method fulfil the detailed balance condition, and, then, converges, indeed, towards the thermal equilibrium. For the first time, it also shows a good efficiency in the calculation of kinetics parameters and the determination of the Arrhenius law.
The model used is a two-dimensional lattice polymer. The chains are composed of $`N`$ monomers that are connected and constrained to be on a square lattice, and the chains are self-avoiding walks. The energy of a sequence in a given conformation $`m`$ is given by:
$$E^{(m)}=\underset{i>j+1}{\sum }(B_{ij}+B_0)\mathrm{\Delta }_{ij}^{(m)}$$
(1)
where the function $`\mathrm{\Delta }_{ij}^{(m)}`$ equals 1 if the $`i^{th}`$ and $`j^{th}`$ monomers interact, i.e. if they are nearest neighbors on the lattice. The $`B_{ij}`$’s are the contact energy values. They are chosen randomly in a Gaussian distribution centered on $`0`$, and $`B_0`$ is a negative parameter which favors the compact conformations . The set of $`B_{ij}`$ defines a sequence of the chain.
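As an illustration of the model, here is a minimal sketch of the contact energy (1) on the square lattice; the coupling matrix, the value of $`B_0`$ and the test conformation are assumptions chosen only for the example.

```python
import numpy as np

# Contact energy of eq. (1): conf is a list of (x, y) lattice sites along the
# chain; B is the random coupling matrix and B0 the (negative) compactness
# parameter, both assumed here for illustration.
def energy(conf, B, B0):
    E = 0.0
    for i in range(len(conf)):
        for j in range(i + 2, len(conf)):          # only non-bonded pairs (|i-j| > 1)
            if abs(conf[i][0] - conf[j][0]) + abs(conf[i][1] - conf[j][1]) == 1:
                E += B[i, j] + B0                  # Delta_ij = 1 for lattice neighbours
    return E

rng  = np.random.default_rng(0)
N    = 12
B    = rng.normal(0.0, 1.0, (N, N))
B    = (B + B.T) / 2.0                             # symmetric couplings
B0   = -2.0                                        # assumed; negative favours compact states
conf = [(i, 0) for i in range(N)]                  # fully extended chain
print("E(extended) =", energy(conf, B, B0))        # no contacts -> 0
```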
The sets of connections between conformations, used for the MC procedure, are those used by Chan and Dill : the corner flip and the tail moves are referred to as the move set 1 (MS1), the crankshaft move is referred to as the move set 2 (MS2), and at each MC step a move of MS1 is chosen with a probability $`r`$ and a move of MS2 is chosen with a probability $`1-r`$ .
Now, the problem is to find a correct Metropolis algorithm which guarantees that the simulation converges towards the thermal equilibrium imposed by the condition of detailed balance :
$$P_{eq}^{(m)}W(m\to n)=P_{eq}^{(n)}W(n\to m)$$
(2)
where $`P_{eq}^{(m)}\propto \mathrm{exp}(-E^{(m)}/T)`$ is the equilibrium probability of the conformation $`m`$, $`T`$ is the temperature, and $`W(m\to n)`$ is the probability of transition from the state $`m`$ to the state $`n`$. Let us note :
$$W(m\to n)=W^{(0)}(m\to n)a(m\to n)$$
(3)
where $`W^{(0)}(m\to n)`$ is the a priori transition probability. A convenient choice for the acceptance ratio is :
$$a(m\to n)=\frac{1}{1+\mathrm{exp}(\mathrm{\Delta }E_n^m/T)}$$
(4)
with $`\mathrm{\Delta }E_n^m=E^{(n)}-E^{(m)}`$. Let us note $`N_m^{(1)}`$ and $`N_m^{(2)}`$ the numbers of allowed transitions from $`m`$ to any conformation by performing a move of the MS1 or of the MS2, and $`N_{max}^{(1)}=\mathrm{max}_m\{N_m^{(1)}\}`$ and $`N_{max}^{(2)}=\mathrm{max}_m\{N_m^{(2)}\}`$. One can easily see that $`N_{max}^{(1)}=N+2`$ and $`N_{max}^{(2)}=N-7`$. In order to have symmetric a priori transition probabilities, $`W^{(0)}(m\to n)=W^{(0)}(n\to m)`$, one assumes that the probabilities to attempt a move from conformation $`m`$ to conformation $`n`$ related by a connection of MS1 or MS2, respectively, during one MC step are then :
$$W_1^{(0)}(m\to n)=\frac{r}{N_{max}^{(1)}}=\frac{r}{N+2}$$
(5)
$$W_2^{(0)}(m\to n)=\frac{(1-r)}{N_{max}^{(2)}}=\frac{1-r}{N-7}$$
(6)
Then, the probability to attempt any move from the conformation $`m`$ using the MS1 is $`rN_m^{(1)}/(N+2)`$ (and $`(1-r)N_m^{(2)}/(N-7)`$ using the MS2). Therefore, a probability of null transition appears :
$$w_m^{(0)}=1-\left(r\frac{N_m^{(1)}}{N+2}+(1-r)\frac{N_m^{(2)}}{N-7}\right)$$
(7)
In contrast with rigid rotation which can involve movements of a lot of monomers, the one and two monomers moves are local modifications. One assumes, then, that they have the same affinity. Then, it comes from equations 5 and 6 :
$$r=\frac{N+2}{2N-5}$$
(8)
In this particular case, the previous equations simplify :
$$W_1^{(0)}(m\to n)=W_2^{(0)}(m\to n)=\frac{1}{2N-5}$$
(9)
$$w_m^{(0)}=1-\frac{N_m^{(1)}+N_m^{(2)}}{2N-5}$$
(10)
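The scheme of eqs. (3)-(10) can be summarized by the following sketch of a single MC step; the helper functions enumerating the allowed MS1 and MS2 moves of a conformation are assumed to exist and are not shown here.

```python
import numpy as np

# One step of the proposed Metropolis scheme: a move is drawn uniformly among
# 2N-5 "slots", so that conformations with fewer allowed moves keep a finite
# null-transition (rigidity) probability w_m^(0).  ms1_moves / ms2_moves are
# assumed helpers returning the conformations reachable by MS1 / MS2.
def mc_step(conf, energy, T, rng, ms1_moves, ms2_moves):
    N = len(conf)
    moves = ms1_moves(conf) + ms2_moves(conf)      # N_m^(1) + N_m^(2) allowed moves
    slot = rng.integers(2 * N - 5)                 # a priori probability 1/(2N-5) per move
    if slot >= len(moves):                         # null transition, probability w_m^(0)
        return conf
    new_conf = moves[slot]
    dE = energy(new_conf) - energy(conf)
    if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):   # acceptance ratio, eq. (4)
        return new_conf
    return conf
```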
In order to check the accuracy of the proposed procedure, we applied it to 12-monomer chains. These chains can adopt 15037 different self-avoiding-walk conformations that are not equivalent by symmetry. The following results are obtained for the sequence A defined elsewhere . Such a short chain is used to check the method because a convergence test can be applied to it in a reasonable computational time.
For this chain, we performed MC trajectories of 300 billion steps. A convergence factor $`C(t)=\sqrt{\langle (P_{eq}^{(m)}-\mathrm{occ}^{(m)}(t))^2\rangle }`$ is computed every 100000 MC steps ; $`t`$ stands for the number of the MC step and $`\mathrm{occ}^{(m)}(t)=N^{(m)}(t)/t`$ where $`N^{(m)}(t)`$ is the number of steps corresponding to the occurences of the conformation $`m`$. The brackets denote the average over all the conformations. If a simulation satisfies detailed balance well, the $`C(t)`$ quantities should tend towards 0 when $`t\to \mathrm{\infty }`$.
Figure 2 shows clearly that the commonly used procedure present limits of convergence depending on the temperature. On the other hand, the proposed method shows a power law of the convergence factor versus the MC steps. This result shows very well that the factor $`w_m^{(0)}`$ cannot be omitted in a lattice simulation for protein folding.
In what follows, we focus on the properties of the $`w_m^{(0)}`$ factor. One must notice that this factor is only a topological factor and is therefore sequence independent. If one now looks at the simulation from a purely topological point of view, by removing for a while the energetic contribution (let us suppose for a while that all conformations have the same energy), one sees that the larger the factor $`w_m^{(0)}`$, the longer the simulation stays in the $`m`$ conformation when it reaches it. One must note that it is not only improbable to escape from the conformation $`m`$ if $`w_m^{(0)}`$ is large, but it is also improbable to reach it. On the contrary, conformations with small values of $`w_m^{(0)}`$ are often reached, but the simulation does not stay in these conformations. Thus, the larger $`w_m^{(0)}`$, the more rigid the conformation $`m`$, and the smaller $`w_m^{(0)}`$, the more flexible the conformation $`m`$. Therefore, let us call $`w_m^{(0)}`$ in what follows the rigidity of the conformation $`m`$. Fig. 3(b) shows how the $`w_m^{(0)}`$ prefactors are distributed for each subset of conformations with the same number of contacts. No conformation has a value of $`w_m^{(0)}`$ equal to 1 (fig. 3). This guarantees that no conformation is totally rigid: each one is connected to at least one other. But one must note that this condition is not strong enough to fulfill the ergodic hypothesis.
It appears clearly that the more compact conformations present the larger values of the rigidity. The more flexible ones are the more extended. Only one move of MS1 is allowed for the two most rigid conformations. One can see that there is no conformation whose value of $`w_m^{(0)}`$ tends towards 0. Hence no conformation is totally flexible. This is a consequence of the fact that no conformation presents the maximum number of neighbors with both the MS1 and the MS2.
The native conformations of proteins not only have very low energy but are also very compact. Hence, they have large Boltzmann weights but are also very rigid conformations. Both effects favor the stability of the native conformations, but the folding dynamics is slowed down by the topology of the native structures. The trap conformations are also very compact and are conformations of local energy minima . Thus, to exit the trap valley, the chain first has to escape from a stable and rigid conformation.
We computed many kinetic pathways from the trap conformation to the native structure of the sequence A. The trap conformation has been determined by solving the master equation of the system in the way described by Cieplak et al for the particular choice of $`r`$ used in the present paper. The trap conformation found here is the same as the conformation found by Cieplak et al. and it is chosen as the first conformation of the MC trajectories. The kinetic pathways all exhibit similar properties. The native and the trap conformations are compact and hence very rigid ($`w_{native}^{(0)}=w_{trap}^{(0)}=0.894`$) and have low energies.
At low temperature the system spends a lot of MC steps in the trap conformation. The system escapes from the trap only with difficulty, by passing through transition states which exhibit common properties : high energies, few intrachain contacts and therefore great flexibility. Therefore, even if the transition states are energetically unfavorable, they are easily accessible from a topological point of view, and the MC trajectories spend very few steps in these conformations.
A major problem in the investigation of protein folding is to calculate kinetic properties at low temperature , where the rejection ratio of an MC procedure is very large. The efficiency of the procedure is increased at low temperature using a Bortz-Kalos-Lebowitz (BKL) type algorithm . The idea is the following : let us denote by $`w_m`$ the probability of not accepting a move from the conformation $`m`$ during one step :
$$w_m=1-\frac{1}{2N-5}\underset{n\ne m}{\sum }\frac{1}{1+\mathrm{exp}(\mathrm{\Delta }E_n^m/T)}$$
(11)
then, the probability not to accept a move from the conformation $`m`$ during exactly $`k`$ steps is :
$$P(k)=w_m^{k-1}(1-w_m)$$
(12)
Then for each move, the number of MC steps $`k`$, during which the chain stays in the current conformation, say $`m`$, is chosen at random in the density of probability $`P(k)`$ and a move chosen with the following probability of transition :
$$t(m\to n)=\frac{\frac{1}{1+\mathrm{exp}(\mathrm{\Delta }E_n^m/T)}}{\underset{n^{}\ne m}{\sum }\frac{1}{1+\mathrm{exp}(\mathrm{\Delta }E_{n^{}}^m/T)}}$$
(13)
is always performed. This procedure makes it possible to carry out MC simulations at very low temperature. All the values of $`w_m`$ and $`t(m\to n)`$ are computed for each temperature before performing the MC trajectories.
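A minimal sketch of the BKL-type (rejection-free) update described above is given below; the helper `neighbours(conf)`, returning the conformations reachable in one move, is an assumed placeholder and not shown.

```python
import numpy as np

# One BKL-type step: draw the residence time k from the geometric law
# P(k) = w_m^{k-1}(1 - w_m) of eq. (12), then pick a move with the
# probabilities t(m -> n) of eq. (13).
def bkl_step(conf, energy, T, rng, neighbours, N):
    nbrs  = neighbours(conf)                       # assumed helper
    rates = np.array([1.0 / (1.0 + np.exp((energy(c) - energy(conf)) / T)) for c in nbrs])
    w_m   = 1.0 - rates.sum() / (2 * N - 5)        # null-step probability, eq. (11)
    # geometric residence time: k = 1 + floor(ln u / ln w_m), u in (0, 1]
    k = 1 + int(np.floor(np.log(1.0 - rng.random()) / np.log(w_m)))
    new_conf = nbrs[rng.choice(len(nbrs), p=rates / rates.sum())]   # eq. (13)
    return new_conf, k                             # k MC steps are spent in conf
```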
The folding times ($`t_{fold}`$) have been computed using the BKL-type algorithm at low temperature. The folding time is the average over 500 trajectories of the number of MC steps needed to reach the conformation of lowest energy. Three different simulations have been carried out, depending on the choice of the first conformation : the simulation ”T”, for which the trap conformation is chosen as the first conformation ; the simulation ”E”, for which the first conformation is an extended conformation chosen at random ; the simulation ”R”, for which the first conformation is chosen at random in the whole conformational space. The transition state of lowest energy between the trap and the native structure has been determined elsewhere for this sequence, and the difference of energy between the trap and the transition state has been computed and equals $`\mathrm{\Delta }E=4.53`$. The Monte Carlo folding time found by Cieplak et al. follows an Arrhenius law $`t_{fold}(T)=A\mathrm{exp}(\delta E/T)`$, with $`\delta E=2.76`$, which is in poor agreement with $`\mathrm{\Delta }E`$. For the three simulations, we also find Arrhenius laws at very low temperature ($`T=`$ 0.24, 0.22, 0.20, 0.18).

|  | $`\delta E`$ | $`A`$ |
| --- | --- | --- |
| simulation ”T” | 4.51 | 33.25 |
| simulation ”E” | 4.40 | 8.58 |
| simulation ”R” | 4.34 | 12.55 |

TABLE I.: Values of the parameters $`\delta E`$ and $`A`$ of the Arrhenius laws $`t_{fold}(T)=A\mathrm{exp}(\delta E/T)`$ for the ”T”, ”E” and ”R” simulations (see text).
If a first conformation is chosen at random, it can fall in the trap valley (TV), in the native conformation valley (NV) or in less important valleys. At low temperature, whatever the set of first conformations, the conformations which fall in TV govern the kinetics. Hence, the dominant term in the exponential function of the Arrhenius law always tends towards $`\mathrm{\Delta }E`$. The ratio of the $`A`$ coefficients gives the proportion of conformations which fall in TV : a random conformation has a probability equal to $`12.55/33.25=0.38`$ to fall in TV, and an extended conformation has a probability equal to 0.26 to be attracted by TV. These ratios thus give an insight into the attraction strength of the basin of TV.
The results presented in this Letter show clearly that the proposed MC method is well adapted to the study of the dynamics of protein folding. It has been shown that not only the energy difference between conformations but also the rigidity of the conformations has to be taken into account in the MC simulations. The method has been applied only to a short chain in order to check its efficiency, but it is easily applicable to longer chains on two- or three-dimensional lattices, and moreover the BKL algorithm should make it possible to elucidate low-temperature properties of protein-like chains.
We are grateful to Aaron Dinner, Bertrand Berche, Christophe Chatelain, Trinh Xuan Hoang and Marek Cieplak for helpful discussions.
# Shell-model half-lives for the $`N=82`$ nuclei and their implications for the r-process
## Abstract
We have performed large-scale shell-model calculations of the half-lives and neutron-branching probabilities of the r-process waiting point nuclei at the magic neutron number $`N=82`$. We find good agreement with the measured half-lives of <sup>129</sup>Ag and <sup>130</sup>Cd. Our shell-model half-lives are noticeably shorter than those currently adopted in r-process simulations. Our calculation suggests that <sup>130</sup>Cd is not produced in beta-flow equilibrium with the other $`N=82`$ isotones on the r-process path.
About half of the elements heavier than mass number $`A=60`$ are made in the astrophysical r-process, a sequence of neutron capture and beta decay processes. The r-process is associated with environments of relatively high temperature ($`T\approx 10^9`$ K) and very high neutron density ($`>10^{20}`$ neutrons/cm<sup>3</sup>), such that the intervals between neutron captures are generally much shorter than the $`\beta `$-decay lifetimes, i.e. $`\tau _n\ll \tau _\beta `$ in the r-process. Nuclei are thus quickly transmuted into ever more neutron-rich isotopes, with decreasing neutron separation energy $`S_n`$. This series of successive neutron captures comes to a stop when the $`(n,\gamma )`$ capture rate for an isotope equals the rate of the destructive $`(\gamma ,n)`$ photodisintegration. The r-process then has to wait for the most neutron-rich nuclei to $`\beta `$-decay. Under the typical conditions expected for the r-process, $`(n,\gamma )\rightleftharpoons (\gamma ,n)`$ equilibrium is achieved at neutron separation energies $`S_n\approx 2`$ MeV. This condition mainly determines the r-process path, which is located about 15–20 mass units away from the valley of stability. The r-process path reaches the neutron shell closures at $`N=50,82`$, and 126 at such low $`Z`$-values that $`S_n`$ is too small to allow the formation of still more neutron-rich isotopes; the isotopes then have to $`\beta `$-decay. To overcome the shell gap at the magic neutron numbers and produce heavier nuclei, the material has to undergo a series of alternating $`\beta `$-decays and neutron captures before it reaches a nucleus close enough to stability for $`S_n`$ to be large enough to allow the sequence of neutron capture reactions to continue. Because the $`\beta `$-decay half-lives are relatively long at the magic neutron numbers, the r-process network waits long enough at these neutron numbers to build up abundance peaks at the mass numbers $`A\approx 80,130`$, and 195. Furthermore, the duration of the r-process, i.e. the minimal time required to transmute, at one site, seed nuclei into nuclei around $`A\approx 200`$, is dominated by the sum of the half-lives of the r-process nuclei at the three magic neutron numbers. It appears as if the required minimal time is longer than the duration of the favorable r-process conditions in the neutrino-driven wind from type II supernovae, which is currently the most favored r-process site.
Simulations of the r-process require a knowledge of nuclear properties far from the valley of stability. As the relevant nuclei are not experimentally accessible, theoretical predictions for the relevant quantities (i.e. neutron separation energies and half-lives) are needed. This Letter is concerned with the calculation of the $`\beta `$-decays of r-process nuclei at the magic neutron number $`N=82`$. These $`\beta `$-decays are determined by the weak low-energy tails of the Gamow-Teller strength distribution, mediated by the operator $`\sigma \tau _-`$, and provide quite a challenge to theoretical modelling, as they are not constrained by sum rules. Previous estimates have been based on semi-empirical global models, the quasiparticle random phase approximation or, very recently, the Hartree-Fock-Bogoliubov method. But the method of choice for calculating Gamow-Teller transitions is the interacting nuclear shell model, and decisive progress in programming and hardware now makes reliable shell-model calculations of the half-lives of the $`N=82`$ r-process waiting point nuclei feasible.
Our shell model calculations have been performed with the code antoine developed by E. Caurier. As the model space we chose the $`0g_{7/2},1d_{3/2,5/2},2s_{1/2},0h_{11/2}`$ orbitals outside the $`N=50`$ core for neutrons, thus assuming a closed $`N=82`$ shell configuration in the parent nucleus. For protons our model space was spanned by the $`1p_{1/2},0g_{9/2,7/2},1d_{3/2,5/2},2s_{1/2}`$ orbitals, where a maximum of 2 (3) protons were allowed to be excited from the $`1p_{1/2},0g_{9/2}`$ orbitals to the rest of the orbitals in the parent (daughter) nucleus. The $`0g_{9/2}`$ neutron orbit and the $`0h_{11/2}`$ proton orbit have been excluded from our model space to remove spurious center-of-mass configurations. Therefore, we do not allow for Gamow-Teller transitions from the $`0g_{9/2}`$ and $`h_{11/2}`$ neutron orbitals, which should, however, not contribute significantly to the low-energy decays we are interested in here. The $`1p_{1/2}`$ proton orbit has been included to describe the $`1/2^-`$ isomeric state seen in <sup>131</sup>In and expected in the other $`N=82`$ odd-A isotones, but it does not play any role in the decay of the ground states. The residual interaction can be split into a monopole part and a renormalized G-matrix component which can be derived from the nucleon-nucleon potential. We use the interaction of ref. for the $`gdsh_{11/2}`$ orbits and the KLS interaction for the interaction of these orbits with the $`1p_{1/2}`$ orbit. To derive the appropriate monopole part, we followed the prescription given by Zuker and fine-tuned the monopoles to reproduce the known spectra of nuclei around the $`N=82`$ shell closure. As shell model studies overestimate the GT strength by a universal factor, we have scaled our results by the appropriate factor $`(0.74)^2`$.
The $`Q_\beta `$ values have been taken either from experiment (<sup>131</sup>In) or from the mass compilation of Duflo and Zuker. Note that the Extended Thomas-Fermi with Strutinsky Integral approach (ETFSI) and the microscopic-macroscopic (FRDM) model of Möller give very similar $`Q_\beta `$ values (with typical uncertainties of 250 keV for $`Q_\beta \approx 10`$ MeV), so that the associated uncertainty in the half-lives is small.
The shell-model half-lives are summarized in Table I and compared to other theoretical predictions in Fig. 1. For $`Z=47`$-49, the half-lives are known experimentally, and our shell-model values are slightly shorter. This, however, is expected, as our still truncated model space misses some correlations and hence slightly overestimates the Gamow-Teller matrix elements.
Our shell-model half-lives show significant and important differences from those calculated in the FRDM and the ETFSI approach, which have typically been used in r-process simulations. Although the latter predicts a $`Z`$-dependence of the half-lives in the $`N=82`$ isotones very similar to the present results, the ETFSI half-lives are longer on average by factors of 4-5, indicating that the method fails to shift enough Gamow-Teller (GT) strength to low energies. The FRDM half-lives show a very pronounced odd-even dependence which is predicted neither by ETFSI nor by the shell model. While the FRDM half-lives for odd-A $`N=82`$ isotones approximately agree with the shell-model results (within a factor of 2) and with the experimental values for <sup>131</sup>In and <sup>129</sup>Ag, they overestimate the half-lives for even isotones by an order of magnitude. As such an odd-even dependence is present neither in the experimental half-lives nor in the r-process abundances, it is probably an artifact of the FRDM model. The absence of odd-even effects can be understood by considering that the main contribution to the half-life comes from transitions from a $`g_{7/2}`$ neutron to a $`g_{9/2}`$ proton, due to the energy gap between the $`g_{9/2}`$ and the other orbits. Therefore, neither the GT matrix elements nor the half-lives show a strong odd-even dependence along the $`N=82`$ isotonic line. Noting that the $`Q_\beta `$ values in the FRDM model are very similar to the ones used here and that the main difference with our results appears when the final nucleus is odd-odd, we conclude that the odd-even effect must stem from the treatment of the $`pn`$ interaction in the FRDM approach. Very recently, Engel et al. have performed half-life calculations of r-process nuclei within the HFB model. Unfortunately their studies are as yet restricted to even-even nuclei only, but they obtain results which, except for a factor of 2, closely resemble the present shell-model results. Ref. points out that the half-lives of the $`N=82`$ waiting point nuclei are noticeably shorter than currently assumed in r-process simulations, in support of our findings.
Odd-A nuclei in this mass range usually exhibit a low-lying $`1/2^-`$ isomeric state which can be related to a proton hole in the $`p_{1/2}`$ orbital. These isomeric states can affect the r-process half-lives in two different ways: i) if low enough in energy, the isomeric state can be populated thermally; ii) in a non-equilibrium picture the isomeric state can be fed by the preceding neutron capture on the $`N=81`$ nucleus. The half-life of the isomeric state has been measured in <sup>131</sup>In (350 ms) and is very similar to the ground state half-life (280 ms). We have calculated the energy positions and half-lives of the isomeric states within our shell model approach. We find that the excitation energy of the isomeric state slowly decreases within the $`N=82`$ isotones when moving from <sup>123</sup>Nb ($`E^*=500`$ keV) to <sup>131</sup>In (375 keV), where experimentally only the isomeric state in <sup>131</sup>In is known (at 360 keV). Importantly, our calculation predicts the half-lives of the isomeric states to be comparable to the ground state half-lives in all cases (see Table I). Thus, the effective r-process half-lives will be very close to the ground state half-lives. We note that the isomeric state in <sup>131</sup>In dominantly decays by first-forbidden transitions, as the approximately closed $`g_{9/2}`$ proton configuration in this state strongly suppresses low-energy GT transitions. We calculate a half-life of the isomeric state of 274 ms due to first-forbidden decay, about $`30\%`$ faster than the experimental value (350 ms). For the $`N=82`$ nuclei with $`Z\leq 47`$ the $`g_{9/2}`$ orbital is no longer closed for the isomeric state, allowing for GT transitions at low energy. Consequently these nuclei decay by GT transitions rather than first-forbidden ones.
An interesting but still open question is whether the r-process proceeds in $`\beta `$-flow equilibrium also at the waiting points related to the magic neutron numbers. If so, the duration of the r-process has to be longer than the sum of the beta half-lives of the nuclei in $`\beta `$-flow equilibrium. In this appealing picture, the observed r-process abundances scale like the respective $`\beta `$-decay half-lives, if the former are corrected for $`\beta `$-delayed neutron emissions during the decays from the r-process path towards the stable nuclei which are observed as r-process abundances. Using our shell-model $`\beta `$-strength functions we have calculated the probability $`P_{1n}`$ that the $`\beta `$ decay is accompanied by the emission of a neutron, defined as the fraction of the $`\beta `$-decay rate that proceeds to states above the neutron emission threshold $`S_n`$. For consistency we adopted the $`S_n`$ values from Duflo and Zuker (DZ), which, apart from a mild parity effect, give results similar to the ETFSI model, while the FRDM model predicts a significantly slower decrease of $`S_n`$ with decreasing $`Z`$. As the $`P_{1n}`$ values are rather sensitive to the neutron separation energies, we have accounted for the differences between the ETFSI and the DZ predictions by assigning an uncertainty of 500 keV to the DZ $`S_n`$ values. In this way we have calculated an equally probable range for the shell-model $`P_{1n}`$ values, as shown in Fig. 2. We find rather small neutron emission probabilities for $`Z=46`$-49 (which would only be slightly increased if the $`1h`$ proton orbitals were included); for the smaller $`Z`$-values our results approximately resemble the FRDM values, also showing a noticeable odd-even dependence.
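As an illustration of this definition, the sketch below evaluates $`P_{1n}`$ from a discretized $`\beta `$-strength function. The toy strength distribution and the simple $`(Q_\beta -E_x)^5`$ phase-space approximation are assumptions made only for the example; they stand in for the shell-model strength function and the full Fermi function.

```python
import numpy as np

def p1n(E_x, B_GT, Q_beta, S_n):
    """Neutron-emission probability: fraction of the beta-decay rate that
    feeds daughter states above the neutron separation energy S_n.
    A crude (Q_beta - E_x)^5 phase-space factor replaces the Fermi integral."""
    E_x, B_GT = np.asarray(E_x, float), np.asarray(B_GT, float)
    ok = E_x < Q_beta                        # only states below Q_beta contribute
    rate = B_GT[ok] * (Q_beta - E_x[ok]) ** 5
    return rate[E_x[ok] > S_n].sum() / rate.sum()

# toy low-energy tail of a Gamow-Teller strength distribution (MeV)
E_x  = [1.0, 2.5, 4.0, 5.5, 7.0]
B_GT = [0.05, 0.10, 0.20, 0.40, 0.80]
print(f"P_1n = {p1n(E_x, B_GT, Q_beta=8.5, S_n=4.5):.2f}")
```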
Using the solar r-process abundances and the shell-model neutron emission probabilities $`P_{1n}`$, we have determined the abundances of the $`N=82`$ progenitor nuclei, $`n(Z)`$, on the r-process path by one-step iteration; we have checked that second-order branchings change the results only insignificantly. If $`\beta `$-flow equilibrium at the waiting points is indeed achieved, one has $`n(Z)\propto \tau (Z)`$. Thus, up to a constant, $`n(Z)`$ can be expressed as the so-called $`\beta `$-flow half-life $`T_{\beta f}`$. Fixing the constant appropriately, Fig. 3 shows that $`\beta `$-flow equilibrium can be attained for the $`N=82`$ isotones with $`Z=44`$-47, but fails by more than a factor of 3 for <sup>130</sup>Cd and <sup>131</sup>In. As the systematics of neutron separation energies puts <sup>130</sup>Cd on the r-process path, our results suggest that the conditions which allow the $`N=82`$ r-process abundance peak to be built do not last long enough to achieve $`\beta `$-flow equilibrium for this nucleus. This is consistent with the expectation that the r-process peaks at $`N=82`$ and $`N=126`$ are made under different conditions. Recent observations also indicate that the nuclides in the $`N=82`$ and 126 abundance peaks are produced at different sites. Furthermore, the assumption of $`\beta `$-flow equilibrium, which on the r-process path should be fulfilled the better the shorter the half-life, leads to unphysical (negative) $`T_{\beta f}`$ values for $`Z<43`$, indicating that these nuclei are not on the r-process path. However, a firm conclusion here can only be reached after reducing the uncertainties in the $`P_{1n}`$ values for all nuclei in the decay sequence to stability.
So far we have discussed the r-process as a sequence of competing neutron capture and $`\beta `$-decay processes. If the r-process site is indeed the neutrino-driven wind above a newly born neutron star, then it occurs in a very strong neutrino flux and charged-current $`(\nu _e,e^-)`$ reactions can substitute for $`\beta `$ decays; in this case the picture of $`\beta `$-flow equilibrium has to be extended to 'weak-flow' equilibrium. McLaughlin and Fuller have shown that weak steady-flow equilibrium at the $`N=82`$ waiting point normally cannot be attained in the neutrino-driven wind model. However, their study adopted $`\beta `$ half-lives taken from ref. , which predict a $`Z`$-dependence of the half-lives in disagreement with the recent theoretical studies (including the present one) and with the data. Nevertheless, when we reinvestigate this question using the present shell-model $`\beta `$-decay rates and the charged-current neutrino rates of ref. , we find that <sup>129</sup>Ag and <sup>130</sup>Cd cannot be produced in $`\beta `$-flow equilibrium within the neutrino-driven wind model, in agreement with the conclusions reached in ref. .
In conclusion, we have calculated shell-model half-lives and neutron emission probabilities for the $`N=82`$ waiting point nuclei in the r-process, finding good agreement with the experimentally known half-lives for the $`Z=47`$-49 nuclei. Our half-lives are significantly shorter than the ETFSI and FRDM half-lives, which are frequently used in r-process simulations. Our results indicate that <sup>129</sup>Ag is produced in $`\beta `$-flow equilibrium together with the lighter isotones at the $`N=82`$ waiting point. R-process simulations usually include <sup>130</sup>Cd and even <sup>131</sup>In in the r-process path at freeze-out. If so, they will not be synthesized in $`\beta `$-flow equilibrium. That fact, together with the shorter half-lives, implies a shorter waiting time at $`N=82`$. This is quite welcome, as it removes possible conflicts between the required duration of the r-process and the expansion time scale of the neutrino-driven wind scenario, which requires the r-process to occur in a fraction of a second.
We thank E. Caurier, J. Engel, F. Nowacki, F.-K. Thielemann, P. Vogel and A. P. Zuker for useful discussions. This work was supported in part by the Danish Research Council. Computational cycles were provided by the Supercomputing Facility at the University of Århus and by the Center for Advanced Computational Research at Caltech.
# Subjet Multiplicity in Quark and Gluon Jets at DØ
## I Introduction
The Tevatron proton-antiproton collider is a rich environment for studying high energy physics. The dominant process is jet production, described in Quantum Chromodynamics (QCD) by the scattering of the elementary quark and gluon constituents of the incoming hadron beams. In leading order (LO) QCD, there are two partons in the initial and final states of the elementary process. A jet is associated with the energy and momentum of each final state parton. Experimentally, however, a jet is a cluster of energy in the calorimeter. Understanding jet structure is the motivation for the present analysis. QCD predicts that gluons radiate more than quarks. Asymptotically, the number of objects within gluon jets relative to quark jets is expected to equal the ratio of their color charges, $`C_A/C_F=9/4`$.
## II The $`k_T`$ Jet Algorithm
We define jets in the DØ detector with the $`k_T`$ algorithm . The jet algorithm starts with a list of energy preclusters, formed from calorimeter cells or from particles in a Monte Carlo event generator. The preclusters are separated by $`\mathrm{\Delta }=\sqrt{\mathrm{\Delta }\eta ^2+\mathrm{\Delta }\varphi ^2}>0.2`$, where $`\eta `$ and $`\varphi `$ are the pseudorapidity and azimuthal angle of the preclusters. The steps of the jet algorithm are:
1. For each object $`i`$ in the list, define $`d_{ii}=E_{T,i}^2`$, where $`E_T`$ is the energy transverse to the beam. For each pair $`(i,j)`$ of objects, also define $`d_{ij}=min(E_{T,i}^2,E_{T,j}^2)\frac{\mathrm{\Delta }_{ij}^2}{D^2}`$, where $`D`$ is a parameter of the jet algorithm.
2. If the minimum of all possible $`d_{ii}`$ and $`d_{ij}`$ is a $`d_{ij}`$, then replace objects $`i`$ and $`j`$ by their 4-vector sum and go to step 1. Else, the minimum is a $`d_{ii}`$ so remove object $`i`$ from the list and define it to be a jet.
3. If any objects are left in the list, go to step 1.
The algorithm produces a list of jets, each separated by $`\mathrm{\Delta }>D`$. For this analysis, $`D=0.5`$.
The subjet multiplicity is a natural observable of a $`k_T`$ jet. Subjets are defined by rerunning the $`k_T`$ algorithm starting with a list of preclusters in a jet. Pairs of objects with the smallest $`d_{ij}`$ are merged successively until all remaining $`d_{ij}>y_{cut}E_T^2(jet)`$. The resolved objects are called subjets, and the number of subjets within the jet is the subjet multiplicity $`M`$. The analysis in this article uses a single resolution parameter $`y_{cut}=10^{-3}`$.
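A toy illustration of the subjet-resolution step is sketched below (the jet-finding step with the beam distances $`d_{ii}`$ is analogous). The $`E_T`$-weighted recombination, the scalar jet $`E_T`$ and the example preclusters are simplifying assumptions for the sketch, not the DØ implementation.

```python
import numpy as np

def kt_dij(a, b, D=0.5):
    """kT distance d_ij between two (ET, eta, phi) objects."""
    dphi = (a[2] - b[2] + np.pi) % (2.0 * np.pi) - np.pi
    delta2 = (a[1] - b[1]) ** 2 + dphi ** 2
    return min(a[0], b[0]) ** 2 * delta2 / D ** 2

def merge(a, b):
    """ET-weighted recombination, a simple stand-in for the 4-vector sum."""
    et = a[0] + b[0]
    return (et, (a[0] * a[1] + b[0] * b[1]) / et, (a[0] * a[2] + b[0] * b[2]) / et)

def subjet_multiplicity(preclusters, D=0.5, ycut=1e-3):
    """Rerun the kT algorithm on the preclusters of one jet, merging the
    closest pair until all remaining d_ij exceed ycut * ET(jet)^2."""
    objs = [tuple(p) for p in preclusters]
    et_jet = sum(p[0] for p in objs)            # scalar ET of the jet (approximation)
    while len(objs) > 1:
        pairs = [(i, j) for i in range(len(objs)) for j in range(i + 1, len(objs))]
        i, j = min(pairs, key=lambda ij: kt_dij(objs[ij[0]], objs[ij[1]], D))
        if kt_dij(objs[i], objs[j], D) > ycut * et_jet ** 2:
            break
        merged = merge(objs[i], objs[j])
        objs = [o for k, o in enumerate(objs) if k not in (i, j)] + [merged]
    return len(objs)

# toy jet made of five preclusters: (ET [GeV], eta, phi)
jet = [(40.0, 0.10, 1.00), (20.0, 0.18, 1.12), (6.0, 0.35, 0.85),
       (3.0, 0.05, 1.30), (2.0, 0.22, 0.95)]
print("subjet multiplicity M =", subjet_multiplicity(jet))
```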
## III Jet Selection
In LO QCD, the fraction of final state jets which are gluons decreases with $`x\sim E_T/\sqrt{s}`$, the momentum fraction of the initial state partons within the proton. For fixed $`E_T`$, the gluon jet fraction therefore decreases when $`\sqrt{s}`$ is decreased from 1800 GeV to 630 GeV. We define gluon- and quark-enriched jet samples with identical cuts in events at $`\sqrt{s}=1800`$ and 630 GeV to reduce experimental biases and systematic effects. Of the two highest $`E_T`$ jets in the event, we select jets with $`55<E_T<100`$ GeV and $`|\eta |<0.5`$.
## IV Quark and Gluon Subjet Multiplicity
There is a simple method to measure the properties of quark and gluon jets separately, on a statistical basis, using the tools described in the previous sections. Let $`M`$ be the subjet multiplicity in a mixed sample of quark and gluon jets. It may be written as a linear combination of the subjet multiplicities in gluon and quark jets:
$$M=fM_g+(1-f)M_q$$
(1)
The coefficients are the fractions of gluon and quark jets in the sample, $`f`$ and $`(1-f)`$, respectively. Consider Eq. (1) for two similar samples of jets at $`\sqrt{s}=1800`$ and 630 GeV, assuming $`M_g`$ and $`M_q`$ are independent of $`\sqrt{s}`$. The solutions are
$$M_q=\frac{f^{1800}M^{630}-f^{630}M^{1800}}{f^{1800}-f^{630}}$$
(2)
$$M_g=\frac{\left(1-f^{630}\right)M^{1800}-\left(1-f^{1800}\right)M^{630}}{f^{1800}-f^{630}}$$
(3)
where $`M^{1800}`$ and $`M^{630}`$ are the experimental measurements in the mixed jet samples at $`\sqrt{s}=1800`$ and 630 GeV, and $`f^{1800}`$ and $`f^{630}`$ are the gluon jet fractions in the two samples. The method relies on knowledge of the two gluon jet fractions.
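The extraction amounts to solving a two-by-two linear system. A minimal sketch is shown below; the gluon-jet fractions are of the size found from HERWIG in Section V, while the mean multiplicities of the mixed samples are invented for illustration only.

```python
def unmix(M_1800, M_630, f_1800, f_630):
    """Solve M = f*M_g + (1-f)*M_q for the two mixed samples (Eqs. 2-3)."""
    det = f_1800 - f_630
    M_q = (f_1800 * M_630 - f_630 * M_1800) / det
    M_g = ((1.0 - f_630) * M_1800 - (1.0 - f_1800) * M_630) / det
    return M_q, M_g

# gluon-jet fractions as in Section V; the mean multiplicities are invented
M_q, M_g = unmix(M_1800=2.53, M_630=2.30, f_1800=0.59, f_630=0.33)
print(f"M_q = {M_q:.2f}, M_g = {M_g:.2f}")
```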
## V Results
The HERWIG 5.9 Monte Carlo event generator provides an estimate of the gluon jet fractions. The method is tested using the detector simulation and CTEQ4M PDF. We tag every selected jet in the detector as either quark or gluon by the identity of the nearer (in $`\eta \times \varphi `$ space) final state parton in the QCD 2-to-2 hard scatter. Fig. 1 shows that gluon jets in the detector simulation have more subjets than quark jets. The tagged subjet multiplicity distributions are similar at the two center of mass energies, verifying the assumptions in § IV.
We count tagged gluon jets and find $`f^{1800}=0.59\pm 0.02`$ and $`f^{630}=0.33\pm 0.03`$, where the uncertainties are estimated from different gluon PDF’s. The nominal gluon jet fractions and the Monte Carlo measurements at $`\sqrt{s}=1800`$ and 630 GeV are used in Eqs. (2-3). The extracted quark and gluon jet distributions in Fig. 1 agree with the tagged distributions and demonstrate closure of the method.
Figure 2 shows that the raw subjet multiplicity in DØ data at $`\sqrt{s}=1800`$ GeV is higher than at $`\sqrt{s}=630`$ GeV. This is consistent with the prediction that there are more gluon jets at $`\sqrt{s}=1800`$ GeV than at $`\sqrt{s}=630`$ GeV, and that gluons radiate more than quarks. The combination of the distributions in Fig. 2 and the gluon jet fractions gives the raw subjet multiplicity distributions in quark and gluon jets, according to Eqs. (2-3).
The quark and gluon raw subjet multiplicity distributions need separate corrections for various detector-dependent effects. These are derived from the Monte Carlo, which describes the raw DØ data well. Each Monte Carlo jet in the detector simulation is matched (within $`\mathrm{\Delta }<0.5`$) to a jet reconstructed from particles without the detector simulation. We tag detector jets as either quark or gluon, and study the subjet multiplicity in particle jets $`M^{ptcl}`$ vs. that in detector jets $`M^{det}`$. The correction unsmears $`M^{det}`$ to give $`M^{ptcl}`$, in bins of $`M^{det}`$. Figure 3 shows that the corrected subjet multiplicity is clearly larger for gluon jets than for quark jets.
The gluon jet fractions are the largest source of systematic error. We vary the gluon jet fractions by their uncertainties in an anti-correlated fashion at the two values of $`\sqrt{s}`$ to measure the effect on $`R`$. The systematic errors listed in Table I are added in quadrature to obtain the total uncertainty in the corrected ratio $`R=\frac{M_g-1}{M_q-1}=1.91\pm 0.04(\mathrm{stat})_{-0.19}^{+0.23}(\mathrm{sys})`$.
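A hedged sketch of how the anti-correlated variation of the gluon-jet fractions propagates to $`R`$ is given below; it reuses the unmixing of Eqs. (2-3) and the same placeholder mean multiplicities as in the previous sketch.

```python
def extract_R(M_1800, M_630, f_1800, f_630):
    """Unmix the two samples via Eqs. (2-3) and form R = (M_g - 1)/(M_q - 1)."""
    det = f_1800 - f_630
    M_q = (f_1800 * M_630 - f_630 * M_1800) / det
    M_g = ((1.0 - f_630) * M_1800 - (1.0 - f_1800) * M_630) / det
    return (M_g - 1.0) / (M_q - 1.0)

# placeholder mean multiplicities; gluon fractions and uncertainties from the text
M_1800, M_630 = 2.53, 2.30
R_nom = extract_R(M_1800, M_630, 0.59, 0.33)
# anti-correlated variation of the gluon-jet fractions by their uncertainties
R_a = extract_R(M_1800, M_630, 0.59 + 0.02, 0.33 - 0.03)
R_b = extract_R(M_1800, M_630, 0.59 - 0.02, 0.33 + 0.03)
print(f"R = {R_nom:.2f}; anti-correlated variations give {R_a:.2f} and {R_b:.2f}")
```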
## VI Conclusion
We extract the $`y_{cut}=10^{-3}`$ subjet multiplicity in quark and gluon jets from measurements of mixed jet samples at $`\sqrt{s}=1800`$ and 630 GeV. On a statistical level, gluon jets have more subjets than quark jets. We measure the ratio of additional subjets in gluon jets to quark jets $`R\approx 1.9\pm 0.2`$. The ratio is well described by the HERWIG parton shower Monte Carlo, and is only slightly smaller than the naive QCD prediction 9/4.
## VII Acknowledgements
We thank the Fermilab and collaborating institution staffs for contributions to this work and acknowledge support from the Department of Energy and National Science Foundation (USA), Commissariat à L’Energie Atomique (France), Ministry for Science and Technology and Ministry for Atomic Energy (Russia), CAPES and CNPq (Brazil), Departments of Atomic Energy and Science and Education (India), Colciencias (Colombia), CONACyT (Mexico), Ministry of Education and KOSEF (Korea), and CONICET and UBACyT (Argentina).
# Observation of the Lambda Point in the 4He-Vycor System: A Test of Hyperuniversality
## ACKNOWLEDGMENTS
We wish to thank A. Tyler and A. Woodcraft for contributing to the development of the calorimeter. We thank A.C. Corwin and J. He for their assistance in performing these measurements. We have benefitted from conversations with M.H.W. Chan, F.M. Gasparini and T.C.P. Chui. This work was carried out under NSF grant DMR96-23694 and with funding from the Cornell Center for Materials Research under NSF grant DMR96-32275.
# 1 Introduction
## 1 Introduction
The Cepheid variable stars are among the most important objects of modern astrophysics. These relatively well understood pulsating stars provide a wealth of empirical information on stellar structure, evolution etc. But what makes them particularly attractive is the well known correlation of their brightness with period, discovered as early as the beginning of the 20th century (Leavitt 1912). This feature of Cepheids and their large absolute brightness make them a potentially ideal standard candle for precise distance determination to extragalactic objects in which Cepheids have been discovered. However, almost a century after its discovery, the calibration of the $`PL`$ relation is still a subject of considerable dispute.
While the determination of precise periods of Cepheids is straightforward, the determination and calibration of Cepheid brightness is difficult. Galactic Cepheids are located so far from the Sun that distances to them can only be obtained by indirect, often very uncertain methods. Even the Hipparcos satellite did not provide much progress in this field, as it measured parallaxes of only a few Galactic Cepheids with an accuracy better than 30% (Feast and Catchpole 1997). Galactic Cepheids are also usually highly reddened, and an accurate determination of their brightness is not easy. The Magellanic Clouds, where the $`PL`$ relation was discovered, play an important role in this field. Both the Large and Small Magellanic Clouds are known to contain large populations of Cepheids, located at approximately the same distance. Therefore the slope of the $`PL`$ relation of Cepheids has usually been derived based on the Magellanic Cloud objects.
The Magellanic Cloud calibrations (Caldwell and Coulson 1986, Madore and Freedman 1991, Laney and Stobie 1994) are, however, based on limited samples of stars with photometry obtained many years ago, mostly with photoelectric and photographic techniques, which in crowded fields may lead to systematic uncertainties. Unfortunately, both galaxies, which are the best objects for studying the properties of Cepheids, have been very rarely observed with modern CCD techniques. The situation changed dramatically in the 1990s, when large microlensing surveys started regular observations of the Magellanic Clouds. Photometry of millions of stars is a natural by-product of these surveys, and for the first time precise light curves of thousands of Cepheids could be obtained. The MACHO team (Alcock et al. 1995) presented an impressive $`PL`$ diagram of Cepheids in the LMC showing, for the first time, a clear division between the $`PL`$ relations of the fundamental mode and first overtone Cepheids, which had been merged in previous data. The EROS group also analyzed $`PL`$ diagrams in the SMC and LMC (Sasselov et al. 1997), suggesting a dependence of the zero point of the $`PL`$ relation on metallicity. They also found a change of slope for the short period fundamental mode Cepheids in the SMC (Bauer et al. 1999).
Both the MACHO and EROS data were taken in non-standard bands and are therefore not suitable for general $`PL`$ relation calibrations. The problem of a good calibration of the $`PL`$ relation is now very urgent, because Cepheid variables are routinely discovered in many galaxies by HST. The main goal of the HST Key Project (Kennicutt, Freedman and Mould 1995) is the determination of the Hubble constant. The group, however, uses a universal $`PL`$ relation based on a very small number of LMC Cepheids (Madore and Freedman 1991), neglecting possible population effects and assuming a distance modulus to the LMC of $`\mu _{LMC}=18.50`$ mag. Any uncertainty in the calibration of the $`PL`$ relation and in the LMC distance propagates to every Cepheid-based distance and, as a result, to the value of the Hubble constant, with all its astrophysical consequences such as, for example, the age of the Universe.
The Magellanic Clouds were added to the targets of the OGLE microlensing search at the beginning of the second phase of the project – OGLE-II (Udalski, Kubiak and Szymański 1997) – in January 1997. Observations are collected in the standard BVI-bands, and after more than two years of observations the photometric databases contain a few hundred epochs for a few million stars from the LMC and SMC. The OGLE-II databases have already been searched for variable stars, and Cepheids are very numerous among them. Some results on Cepheid variables detected in the OGLE-II data – double-mode Cepheids and second overtone Cepheids in the SMC – were already presented in previous papers of this series (Udalski et al. 1999a,b). Catalogs with BVI photometry of about 1400 Cepheids from the LMC and 2300 from the SMC will be released in the following papers.
In this paper we present an analysis of the period-luminosity-color ($`PLC`$) and $`PL`$ relations for Cepheids from the LMC and SMC. We analyze mainly the fundamental mode pulsators, because they are used as distance indicators. We compare the relations of the LMC and SMC and find that the slopes of the relations in both galaxies are similar within the errors. The difference of distance moduli of the SMC and LMC inferred from the Cepheid relations is consistent with that found with other reliable standard candles, indicating no dependence of the zero points of the Cepheid relations on the population differences between the Clouds. We calibrate the $`PLC`$ and $`PL`$ relations using the short distance modulus to the LMC ($`\mu _{LMC}=18.22`$ mag) resulting from recent determinations with other reliable distance indicators. Finally, we compare the V-band magnitudes of Cepheids with those of RR Lyr stars, providing a tight constraint on their absolute magnitude.
## 2 Observational Data
The observational data presented in this paper were collected during the second phase of the OGLE microlensing search with the 1.3-m Warsaw telescope at the Las Campanas Observatory, Chile which is operated by the Carnegie Institution of Washington. The telescope was equipped with the ”first generation” camera with a SITe $`2048\times 2048`$ CCD detector working in drift-scan mode. The pixel size was 24 $`\mu `$m giving the 0.417 arcsec/pixel scale. Observations were performed in the ”slow” reading mode of CCD detector with the gain 3.8 e<sup>-</sup>/ADU and readout noise of about 5.4 e<sup>-</sup>. Details of the instrumentation setup can be found in Udalski, Kubiak and Szymański (1997).
Observations covered a significant part of the central regions of both the LMC and SMC. Practically the entire bars of these galaxies – more than 4.5 square degrees (21 $`14.2\times 57`$ arcmin driftscan fields) and about 2.4 square degrees (11 fields) – were monitored regularly from January 1997 through June 1999 and from June 1997 through March 1999 for the LMC and SMC, respectively. The collected BVI data were reduced and calibrated to the standard system. The accuracy of the transformation to the standard system was about 0.01–0.02 mag. The photometric data of the SMC were used to construct the BVI photometric maps of the SMC (Udalski et al. 1998b). The reader is referred to that paper for more details about the methods of data reduction, tests of the quality of the photometric data, astrometry, location of the observed fields etc. The quality of the LMC data is similar and will be fully described with the release of the BVI photometric maps of the LMC in the near future.
OGLE-II photometric databases were already searched for variable stars, in particular pulsating variables. About 1400 Cepheids were detected in the LMC fields and 2300 in the SMC. Part of the SMC sample, namely double-mode Cepheids and second overtone Cepheids, was already presented in Udalski et al. (1999a, 1999b). The LMC and SMC Cepheid catalogs will be presented in the following papers of this series. They will describe in detail methods of selection of Cepheid variables, determination of mean photometry, completeness of the sample etc.
In short, the light curves of each object consist of about 120–360 epochs in the I-band and about 15–40 in the V and B-bands. BVI photometry was available for all 11 driftscan fields in the SMC. The B-band photometry for the LMC is, at the time of writing this paper, less complete – reductions of only 40% of the fields have been finished. For the remaining fields only VI photometry was available. The B-band photometry of these fields will be completed after the next observing season.
The observational data used for the construction of the $`PLC`$ and $`PL`$ diagrams consist of BVI photometry and the period of light variations. The intensity-mean brightness of each object was derived by fitting the light curve with a fifth-order Fourier series. The accuracy of the I-band mean magnitudes is a few thousandths of a magnitude. The accuracy of the BV-band magnitudes is somewhat worse, about 0.01 mag, because of the sparser sampling of the light curves. The accuracy of the periods is about $`7\cdot 10^{-5}P`$.
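For reference, an intensity-mean magnitude from a fifth-order Fourier fit can be computed along the lines sketched below; the synthetic light curve is only a placeholder for a real OGLE light curve, and the fitting details are illustrative rather than the pipeline actually used.

```python
import numpy as np

def fourier_design(phase, order):
    """Design matrix [1, cos(2*pi*k*phase), sin(2*pi*k*phase)], k = 1..order."""
    cols = [np.ones_like(phase)]
    for k in range(1, order + 1):
        cols += [np.cos(2 * np.pi * k * phase), np.sin(2 * np.pi * k * phase)]
    return np.column_stack(cols)

def intensity_mean(phase, mag, order=5):
    """Fit the phased light curve with a Fourier series of the given order
    and return the intensity-averaged mean magnitude of the fitted curve."""
    coeffs, *_ = np.linalg.lstsq(fourier_design(phase, order), mag, rcond=None)
    grid = np.linspace(0.0, 1.0, 1000, endpoint=False)
    model = fourier_design(grid, order) @ coeffs
    return -2.5 * np.log10(np.mean(10.0 ** (-0.4 * model)))

# synthetic Cepheid-like light curve standing in for a real OGLE light curve
rng = np.random.default_rng(2)
phase = rng.random(150)
mag = (15.0 + 0.3 * np.cos(2 * np.pi * phase) + 0.1 * np.cos(4 * np.pi * phase)
       + rng.normal(0.0, 0.005, phase.size))
print(f"intensity-mean <I> = {intensity_mean(phase, mag):.3f} mag")
```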
## 3 Determination of the $`PLC`$ and $`PL`$ Relations
To derive the $`PLC`$ and $`PL`$ relations for Cepheids in the LMC and SMC, we selected all objects from the OGLE Catalogs of Cepheids cataloged as fundamental mode (FU) and first overtone (FO) pulsators. In the next step we corrected the mean brightness for interstellar extinction. We used red clump giants as the reference brightness for the determination of the mean interstellar extinction. Red clump stars are very numerous in both the LMC and SMC, their I-band magnitude was shown to be independent of age in the wide range of 2–10 Gyr, and it depends only slightly on metallicity (Udalski 1998a,b). The latter correction is not important in this case because of the practically homogeneous environment of field stars in the LMC (Bica et al. 1998) or SMC. Thus the mean brightness of red clump stars can serve as a very good reference brightness for monitoring extinction. A similar method was used by Stanek (1996) to determine the extinction map of Baade's Window in the Galactic bulge.
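A minimal sketch of this use of the red clump as a reddening reference is shown below. The reference magnitude (taken from the LMC cluster value quoted later in Section 4.1), the adopted $`A_I/E(B-V)`$ coefficient and the toy field are assumptions made for illustration only, not the values or procedure actually used in the analysis.

```python
import numpy as np

# assumed extinction-free I-band red clump magnitude and assumed A_I/E(B-V)
I_RC_REF = 17.88          # illustrative reference (LMC cluster value below)
A_I_PER_EBV = 1.96        # assumed coefficient for a standard extinction law

def mean_reddening(i_mags_red_clump):
    """Mean E(B-V) in one line of sight from the observed mean I-band
    magnitude of red clump stars relative to the extinction-free reference."""
    a_i = np.mean(i_mags_red_clump) - I_RC_REF
    return max(a_i, 0.0) / A_I_PER_EBV

# toy sample of red clump I-band magnitudes in one field
rng = np.random.default_rng(3)
sample = rng.normal(18.16, 0.15, 500)
print(f"E(B-V) = {mean_reddening(sample):.3f}")
```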
The reddening was determined in 84 lines-of-sight in the LMC and 11 in the SMC. The total mean reddening of the observed fields was found to be $`E(B-V)=0.137`$ and $`E(B-V)=0.087`$ in the LMC and SMC, respectively. Its fluctuations, more significant in the LMC, and the non-uniform distribution of Cepheids resulted in a somewhat larger total mean reddening of these stars: $`E(B-V)=0.147`$ and $`E(B-V)=0.092`$ for the LMC and SMC Cepheid samples, respectively. More details on the extinction determination will be provided with the release of the Catalogs of Cepheids from the LMC and SMC.
It is obvious that our extinction corrections remove the effects of extinction only in a statistical sense, but because of the huge sample of Cepheids presented in this paper this approach is well justified. To test whether our extinction correction indeed removes inhomogeneities of extinction, we compared the standard deviations of the LMC $`PL`$ relation constructed for the observed magnitudes with those presented below for the extinction-corrected samples. We found an improvement of the standard deviation from $`\sigma =0.123`$ to $`\sigma =0.109`$ in the I-band and from $`\sigma =0.183`$ to $`\sigma =0.159`$ in the V-band for the observed and extinction-corrected samples, respectively. This indicates that our extinction procedure works correctly. In the SMC the decrease of the standard deviation is smaller due to the more uniform extinction there.
It should be stressed, however, that the extinction is variable within each Cloud, growing with the distance inside the Cloud. Therefore, in any line-of-sight we may expect an additional scatter of brightness when the mean extinction correction is used. Also, the extinction correction we applied was determined from a different population of stars than the Cepheids, namely the old red clump stars. It is possible that the spatial distribution of red clump stars along the line-of-sight differs from that of the much younger Cepheids. Thus, some systematic differences between the determined reddening and the mean reddening of Cepheids in a given direction cannot be ruled out.
After correcting the photometry of our samples of Cepheids for interstellar extinction, we determined the $`PLC`$ and $`PL`$ relations with an iterative procedure using the least squares method. We fitted the relations in the following form:
$$M=\alpha \mathrm{log}P+\beta CI+\gamma $$
$`(1)`$
for the $`PLC`$ relation, and
$$M=a\mathrm{log}P+b$$
$`(2)`$
for the $`PL`$ relation. $`P`$ is the period of the Cepheid, $`M`$ the magnitude and $`CI`$ the color index.
After each iteration the points deviating by more than $`2.5\sigma `$ were removed and the fitting was repeated. In this way we removed a few outliers – usually objects reddened significantly more than the mean correction applied to our data, or objects blended and unresolved from background stars. The $`2.5\sigma `$ value was selected after a few tests as a compromise allowing effective removal of outliers while not rejecting too many good objects.
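The clipping procedure can be summarized as below. The sketch fits the $`PL`$ form of Eq. (2) to a placeholder sample generated with an assumed slope and zero point plus a handful of outliers; it is an illustration, not the actual fitting code.

```python
import numpy as np

def clipped_fit(logP, mag, n_sigma=2.5, max_iter=20):
    """Least-squares fit of M = a*logP + b with iterative n-sigma clipping."""
    keep = np.ones(logP.size, dtype=bool)
    a = b = sigma = 0.0
    for _ in range(max_iter):
        a, b = np.polyfit(logP[keep], mag[keep], 1)
        resid = mag - (a * logP + b)
        sigma = resid[keep].std()
        new_keep = np.abs(resid) < n_sigma * sigma
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return a, b, sigma, keep

# placeholder sample: an assumed slope and zero point plus a few outliers
rng = np.random.default_rng(4)
logP = rng.uniform(0.4, 1.5, 700)
mag = -3.0 * logP + 16.5 + rng.normal(0.0, 0.11, logP.size)
mag[:10] += 0.8                     # heavily reddened or blended objects
a, b, sigma, keep = clipped_fit(logP, mag)
print(f"a = {a:.3f}, b = {b:.3f}, sigma = {sigma:.3f}, rejected = {int(np.sum(~keep))}")
```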
The $`PLC`$ relation was determined for the I-band brightness and $`VI`$ color. The $`PL`$ relation was constructed for the B,V and I-bands. Additionally we determined the $`PL`$ relation for extinction insensitive index $`W_I`$ (called sometimes Wesenheit index, Madore and Freedman 1991) which is defined as follows:
$$W_I=I-1.55(V-I)$$
$`(3)`$
The coefficient 1.55 in Eq. (3) corresponds to the ratio of the I-band extinction to the $`E(V-I)`$ reddening for the standard interstellar extinction curve (Schlegel, Finkbeiner and Davis 1998). It is easy to show that the values of $`W_I`$ are the same whether derived from the observed or from the extinction-free magnitudes, provided that the extinction to the object is not too high, so that it can be approximated as a linear function of color.
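For completeness, the statement can be verified in one line. Writing $`I=I_0+A_I`$ and $`V-I=(V-I)_0+E(V-I)`$ with $`A_I=1.55E(V-I)`$, one has

$$W_I=I_0+A_I-1.55\left[(V-I)_0+E(V-I)\right]=I_0-1.55(V-I)_0,$$

so the index computed from the observed magnitudes equals the one computed from the extinction-free magnitudes.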
The B-band $`PL`$ relation is presented for the SMC only. As mentioned in Section 2, the LMC sample is less complete in the B-band because only 8 of the 21 LMC fields have been reduced so far in that band. The complete sample will be available after the next observing season of the LMC. The B-band sample of LMC Cepheids presently at our disposal is less than half as numerous as in the VI-bands. In particular the longer period Cepheids, very important for a precise determination of the $`PLC`$ and $`PL`$ relations, are sparsely represented, which would make our determinations less accurate. Therefore, to avoid biases, we decided to postpone the determination of the B-band $`PL`$ relation for the LMC until the full data set is available. One has to remember that the B-band $`PL`$ relation has an intrinsically much larger scatter, which makes it much less attractive for distance determination (cf. the SMC data). Also, the most important data for extragalactic Cepheids, collected by the HST Key Project team, were obtained in bands closely resembling the standard VI-bands; thus a precise calibration of the $`PL`$ relations in these bands is more important.
In this paper we concentrate on the analysis of the $`PL`$ and $`PLC`$ diagrams of the more important FU mode Cepheids. Cepheids of this type are those usually discovered in extragalactic objects and used for distance determination. We limited our samples of FU mode Cepheids on the short period side at $`\mathrm{log}P=0.4`$ for two reasons. First, in the LMC the population of Cepheids with shorter periods is marginal, contrary to the SMC, where a large sample of Cepheids with periods shorter than $`\mathrm{log}P=0.4`$ (2.5 days) has been found. Therefore, to be able to make an unbiased comparison of our relations in both galaxies, we limited ourselves to the same range of periods in both galaxies. Secondly, Bauer et al. (1999) reported that the slope of the $`PL`$ relation of the fundamental mode pulsators with periods shorter than 2 days in the SMC is steeper than for the longer period stars. Indeed, we also observe such a change of slope in our data. Our lower limit of $`\mathrm{log}P=0.4`$ safely excludes this part of the $`PL`$ relation of FU mode Cepheids in the SMC. The upper period limit of our samples is set by the saturation level of the CCD detector, because the longest period Cepheids become too bright and are overexposed in our images. It is at $`\mathrm{log}P\approx 1.5`$ and $`\mathrm{log}P\approx 1.7`$ for the LMC and SMC, respectively.
## 4 Discussion
Tables 1 and 3 present the results of the least-squares (LSQ) fitting of the $`PL`$ and $`PLC`$ relations, respectively, to our samples of classical, fundamental mode Cepheids. We also list the number of stars used and the standard deviation of the residual magnitudes.
We tested the stability of our best LSQ solutions by performing a few simulations. First, we limited the samples by raising the lower period limit. Then, we randomly removed a significant number (up to half) of the shorter period ($`\mathrm{log}P<0.7`$) Cepheids, which in both samples are much more numerous than the longer period ones. In all cases the results were consistent with our best fits for the entire samples, differing by no more than $`\pm 0.05`$ in the $`\mathrm{log}P`$ coefficients of our relations.
We also checked whether our sample of LMC Cepheids is severely affected by differential extinction. We performed a series of tests by limiting the sample of Cepheids to those from the fields in which there are indications that the extinction is, to first approximation, uniform. The shape of the red clump in the color-magnitude diagram of a given field served as an indicator of how uniform the extinction in the field is. In many fields the shape of the red clump is round, indicating little differential extinction. However, in a few cases the oval shape of the red clump, elongated in the direction of reddening, clearly indicates larger differential extinction. We excluded Cepheids located in these fields from our sample. This lowered the number of objects from about 690 to 480. Fitting the $`PL`$ and $`PLC`$ relations to such a cleaned sample gave almost identical results as for the full sample, for all combinations of bands and relations. Thus, our tests indicate that differential extinction is of little concern in our case.
A simple comparison of the results obtained for the LMC and SMC Cepheids allows us to draw some conclusions on the possible dependence of the $`PL`$ relation on the metallicity difference between these galaxies. We find from Table 1 that the slopes, $`a`$, of the $`PL`$ relation are, within the errors, the same for the I-band and the $`W_I`$ index in the LMC and SMC. Only in the case of the V-band $`PL`$ relation do we note a marginally shallower slope ($`3.6\sigma `$) for the SMC relation as compared to the LMC one. However, taking into account the uncertainty in the true shape of the Cepheid extinction, which might, for instance, depend slightly on the brightness of the Cepheid, the identical coefficients for the extinction-insensitive index $`W_I`$ and, finally, the larger dispersion of the $`PL`$ relations in the V-band, we do not consider this somewhat shallower slope in the SMC to be significant. We may conclude that for the metallicity range between the LMC and SMC ($`[\mathrm{Fe}/\mathrm{H}]=-0.3`$ dex and $`-0.7`$ dex for the LMC and SMC, respectively; Luck et al. 1998) the slopes, $`a`$, of the $`PL`$ relations of fundamental mode classical Cepheids are constant within the errors.
Because the $`PL`$ relations of the LMC have much smaller scatter (the standard deviation is almost two times smaller for the LMC relations as compared to the SMC ones) and in the case of the fundamental mode Cepheids of $`\mathrm{log}P>0.4`$ they are much better populated, we decided to use the coefficients $`a`$ derived from the LMC data as universal and we repeated fitting of the SMC data with these coefficients. The fits we obtained are only slightly worse than the best LSQ fits and we treat them as final. Adopted parameters of the $`PL`$ relations for the LMC and SMC are listed in Table 2.
We should note at this point that the fitting of the $`PL`$ relation for the first overtone Cepheids leads to somewhat different results. Although the V and I-band slopes are, within the errors, the same, the extinction-insensitive index $`W_I`$ indicates a small difference of the slopes of its $`PL`$ relation in the LMC and SMC at the $`4.6\sigma `$ level ($`a_{W_I}^{LMC}=-3.406\pm 0.021`$, $`a_{W_I}^{SMC}=-3.556\pm 0.025`$). Because the first overtone Cepheids are not used as distance indicators, we only note this small discrepancy, but the problem certainly deserves further study.
Table 3 presents the results of the best LSQ fitting of the LMC and SMC $`PLC`$ relations. Comparison of these relations in the LMC and SMC requires special attention. At first glance it may seem that the coefficients $`\alpha `$ and $`\beta `$ differ by many sigmas, as a direct comparison of the figures in Table 3 indicates. However, this does not necessarily mean that they are indeed different. It is well known that the $`\alpha `$ and $`\beta `$ coefficients of the $`PLC`$ relation are highly correlated, in the sense that the error in the $`\alpha `$ coefficient is coupled with the error in the $`\beta `$ coefficient and both errors compensate (Caldwell and Coulson 1986). This makes a precise empirical determination of both coefficients difficult, but it has little consequence for the predicted luminosity (and distance determination) if both coefficients come from the same determination. Thus, $`\alpha `$ and $`\beta `$ should be considered as a pair. To investigate whether the SMC data can be approximated well by the LMC pair of coefficients ($`\alpha `$, $`\beta `$), we repeated the $`PLC`$ fitting of the SMC data with $`\alpha `$ and $`\beta `$ fixed at the LMC determination. Results are given in Table 4 and Fig. 1, which presents the $`PLC`$ relation in the form of a plot of $`I_0-1.409(V-I)_0`$ against $`\mathrm{log}P`$ for both the LMC and SMC. The fit is somewhat worse than the best LSQ fit of the SMC data ($`\sigma =0.126`$ vs. 0.138 for the best LSQ and LMC-coefficient fits, respectively), but it is clearly seen from Fig. 1 that the pair ($`\alpha `$, $`\beta `$) from the LMC fits the SMC data almost equally well. The Cepheid sequences of the LMC and SMC in Fig. 1 are parallel within the errors, indicating that the differences between the best LSQ fit and that with the LMC coefficients ($`\alpha `$, $`\beta `$) are marginal. Therefore, there is no indication that these coefficients differ significantly between the LMC and SMC.
The final adopted parameters of the $`PLC`$ relation in the LMC and SMC are provided in Table 4. It is worth noting that the color term, $`\beta `$, of the LMC determination (and consequently of the SMC, because we adopt the LMC values of $`\alpha `$ and $`\beta `$ as universal) is very close to the coefficient 1.55 of the dependence of the I-band extinction on the $`E(V-I)`$ reddening, making the fitting of the $`PLC`$ relation insensitive to interstellar extinction.
Figures 2–4 and 5–8 show the $`PL`$ relations for LMC and SMC Cepheids, respectively, for the BVI-bands and $`W_I`$ index. In the upper panel of each figure all observed Cepheids are plotted. Dark and light dots indicate the fundamental mode and first overtone Cepheids, respectively. In the lower panel, the $`PL`$ relation for the fundamental mode Cepheids is shown. Dark and light points in the lower panel mark objects included and rejected from the final fits, respectively. Solid line shows the $`PL`$ relation with coefficients adopted from Table 2.
The $`PLC`$ relation (Fig. 1) and the $`W_I`$ index $`PL`$ relation (Fig. 4) of the LMC show remarkably small scatter about the fitted relation. The standard deviation of the differences between the observed and fitted values is only 0.074 mag, and it most likely reflects the intrinsic dispersion of the $`PLC`$ and $`PL`$ relations. It also proves that Cepheid variable stars can indeed be a very good standard candle, allowing distance determinations good to a few percent. In the case of the SMC the scatter is somewhat larger, amounting to $`\sigma =0.126`$ mag. This could be expected, as it is widely believed that the geometrical depth of the SMC, which is tilted to the line of sight much more than the almost face-on LMC, is larger than that of the LMC (Caldwell and Coulson 1986).
### 4.1 SMC – LMC Distance Ratio
A comparison of the coefficients $`a`$ of the $`PL`$ relations and ($`\alpha `$, $`\beta `$) of the $`PLC`$ relation in the LMC and SMC indicates no significant difference of their values in these galaxies. This is in agreement with most theoretical modeling (Chiosi, Wood and Capitanio 1993, Saio and Gautschy 1998, Alibert et al. 1999), although opposite predictions can also be found in the literature (Bono et al. 1999).
On the other hand, it is believed that metallicity variations among objects may have a stronger effect on the zero points of the $`PLC`$ and $`PL`$ relations. Previous empirical attempts to determine the effects of metallicity on the zero points generally suggested fainter Cepheids in more metal-poor objects, however with a high degree of uncertainty (Sasselov et al. 1997, Kochanek 1997, Kennicutt et al. 1998).
We may test the dependence of the zero points of the $`PLC`$ and $`PL`$ relations on metallicity in a very straightforward manner – by determining the difference of distance moduli between the Magellanic Clouds resulting from these relations and comparing it with similar determinations based on other reliable distance indicators observed in both Clouds.
With the $`PL`$ relation we may determine the difference of distance moduli of the LMC and SMC for the V and I-bands and the extinction insensitive $`W_I`$ index. Results – the difference of zero points of corresponding $`PL`$ relations in the LMC and SMC (Table 2) are listed in Table 5. We assign lower weight to VI-band determinations because of extinction uncertainty.
The distance determined from the $`PLC`$ relation is also extinction independent. The main source of uncertainty is in this case the color error. We determined the difference of distance moduli for the period corresponding approximately to the middle of the relation range: $`\mathrm{log}P=1.0`$ for the FU Cepheids. The mean $`(V-I)_0`$ colors of the $`\mathrm{log}P=1.0`$ fundamental mode Cepheids are $`(V-I)_0=0.69\pm 0.03`$ for the LMC object and $`(V-I)_0=0.70\pm 0.03`$ for the SMC Cepheid. Results of the determination with the $`PLC`$ relation are given in Table 5.
To compare the results obtained with Cepheids we need independent estimates of the difference of the SMC and LMC distance moduli, derived with other reliable standard candles observed in both galaxies. We use values obtained with red clump and RR Lyr stars. Both types of objects were observed during the OGLE project with the same equipment and methods of reduction. In this way possible systematic errors can be minimized.
The brightness of red clump stars in the LMC and SMC was derived from observations of a few star clusters located in the halos of both galaxies, where the interstellar extinction is small and can be determined from the reliable maps of Schlegel et al. (1998). The mean extinction-free I-band brightness of red clump stars is equal to $`I_0=17.88\pm 0.05`$ mag and $`I_0=18.31\pm 0.07`$ mag for the LMC and SMC clusters, respectively (Udalski 1998b). With a small correction for the difference of metallicity of the clusters (Udalski 1998a), the difference of distance moduli between the SMC and LMC from red clump stars is $`\mathrm{\Delta }\mu _{RC}=0.47\pm 0.09`$ mag.
A preliminary analysis of RR Lyr stars from the OGLE fields was presented in Udalski (1998a). The samples of more than 100 objects in each galaxy are small compared to the total number of a few thousand found in the entire observed area of the Magellanic Clouds; nevertheless they allow a reliable determination of the brightness of these objects. The results for the LMC RR Lyr stars presented by Udalski (1998a) were based on a moderate number of observations of these stars and on preliminary photometric calibrations of the observed fields. Also, for both the LMC and SMC RR Lyr stars, the extinction was estimated by extrapolation or interpolation of available extinction maps. Now, with about three times as many observing epochs available for these LMC RR Lyr stars (about 140 in the I-band and 20 in the V-band) and with the extinction independently determined, we have reanalyzed the objects of Udalski (1998a).
New photometry of 104 RR Lyr stars from the LMC with appropriate extinction correction yields $`V_{RR}^{LMC}=18.94\pm 0.04`$ mag. This is practically the same result as presented by Udalski (1998a) ($`V_{RR}^{LMC}=18.86`$ mag), taking into account that extinction was slightly overestimated in that paper. It is in very good agreement with the mean brightness of RR Lyr stars in a few star clusters in the LMC (Walker 1992). Correction of the mean brightness of RR Lyr stars in the SMC is almost negligible: $`V_{RR}^{SMC}=19.43\pm 0.03`$ mag as compared to $`V_{RR}^{SMC}=19.41`$ mag in Udalski (1998a).
To derive the difference of distance moduli between the two galaxies we also have to correct the brightness of RR Lyr stars for metallicity differences. Fortunately, the mean difference of metallicity of RR Lyr stars in the LMC and SMC is not large (on average $`[\mathrm{Fe}/\mathrm{H}]=-1.6`$ and $`-1.7`$ for the LMC and SMC, respectively; see the discussion in Udalski 1998a) and, with the average slope of the brightness-metallicity relation equal to 0.2, it leads to a small correction of 0.02 mag. The SMC RR Lyr stars would be fainter if they were of the LMC metallicity. Thus, the difference of distance moduli between the SMC and LMC resulting from RR Lyr stars is equal to $`\mathrm{\Delta }\mu _{RR}=0.51\pm 0.08`$ mag. It should be noted that the mean brightness of RR Lyr stars in both galaxies will be finally refined when the OGLE catalog of these stars is released.
The results of the determinations of the distance moduli difference are summarized in Table 5. It can be seen that all determinations based on Cepheids are consistent. The Cepheid determination is in excellent agreement with the independent estimates from red clump and RR Lyr stars. This result indicates that, within an uncertainty of a few hundredths of a magnitude, the zero points of the $`PLC`$ and $`PL`$ relations are independent of the metallicity of the host object. Thus, the population effects on the Cepheid distance scale are negligible, at least for the metallicity range bracketed by the Magellanic Clouds. The mean difference of the SMC and LMC distance moduli resulting from the three independent standard candles is $`\mu _{SMC}-\mu _{LMC}=0.51\pm 0.03`$ mag.
### 4.2 Calibration of the $`PLC`$ and $`PL`$ Relations
The traditional way of calibrating standard candles is based on observations of the same type of objects located in the Galaxy. In the case of Galactic Cepheids such a calibration is, however, very difficult. Even the closest Cepheids are located, unfortunately, so far from the Sun that distance determinations can only be performed with indirect methods. Even Hipparcos did not provide parallaxes precise enough to allow an unambiguous distance determination for a sound sample of these stars. It seems that much better results might be achieved by observing Cepheids in other galaxies and calibrating the $`PL`$ relation based on other, reliable distance determinations. One such galaxy might be NGC4258, to which a very precise distance was recently determined with a geometric method based on maser observations (Herrnstein et al. 1999). The galaxy possesses a population of Cepheids detected with HST (Maoz et al. 1999). However, although NGC4258 might be a very attractive object for testing and checking the calibration of Cepheids, it is certainly not the best object for deriving a precise calibration. The Cepheid sample there is small and may be biased by many factors, including the difficulty of detecting short period Cepheids, the quality of the HST photometry etc.
On the other hand, the Magellanic Clouds seem to be the best objects for calibrating the Cepheid $`PL`$ relation. They are close enough and contain thousands of Cepheids, allowing the analysis of a large, homogeneous and photometrically accurate sample of these objects, like the one presented in this paper. They are also chemically homogeneous, which minimizes uncertainties resulting from metallicity variations (Luck et al. 1998).
Unfortunately, the distance to the LMC has been a subject of dispute for a long time. It seems, however, that recent results obtained with different techniques converge at the short distance modulus of $`\mu _{LMC}=18.2-18.3`$ mag. The most promising, largely geometric method using eclipsing binary stars should allow the distance to the LMC to be derived with an accuracy of $`1-2`$ percent. The first determination of the distance to the LMC with the HV2274 eclipsing system yields a distance modulus of $`\mu _{LMC}\approx 18.26`$ mag (Guinan et al. 1998, Udalski et al. 1998c). RR Lyr stars calibrated with the most reliable methods (Popowski and Gould 1998) give a distance modulus of $`\mu _{LMC}=18.23\pm 0.08`$ mag when these calibrations ($`M_V^{RR}=0.71\pm 0.07`$ mag) are used with the mean extinction free V-band magnitude of RR Lyr stars in the LMC presented in the previous Subsection. Finally, the recent distance determination with red clump stars used as a standard candle (Udalski et al. 1998a, Stanek, Zaritsky and Harris 1998), corrected for small population effects (Udalski 1998a,b), yields a distance modulus of $`\mu _{LMC}=18.18\pm 0.06`$ mag. It should be noted that red clump stars are at present the most precisely calibrated standard candle, because very accurate parallaxes (better than 10%) were measured for hundreds of them in the solar neighborhood by Hipparcos.
We should also mention here a method that is in principle a precise geometric technique – observations of the light echo from the ring of gas observed around the supernova SN1987A. Unfortunately, in the case of SN1987A this technique suffers from the insufficient quality of observations at crucial moments after the supernova explosion and from many modeling assumptions. Only an upper limit on the distance modulus to the LMC can be estimated. It ranges from $`\mu _{LMC}<18.58`$ mag (Panagia 1998) to as low as $`\mu _{LMC}<18.37`$ mag (Gould and Uza 1998).
To calibrate the Cepheid $`PLC`$ and $`PL`$ relations we adopted the average distance modulus resulting from all these determinations: $`\mu _{LMC}=18.22\pm 0.05`$ mag. It is in excellent agreement with the recent observations of Cepheids in the galaxy NGC4258 (Maoz et al. 1999). Calibrated with the HST Key Project team method (assuming a distance modulus to the LMC of $`\mu _{LMC}=18.50`$), they yield a distance to NGC4258 of $`8.1\pm 0.4`$ Mpc, considerably larger ($`\approx 12`$%) than that resulting from the precise geometric measurement ($`7.2\pm 0.3`$ Mpc, Herrnstein et al. 1999). The most natural explanation of this discrepancy is a shorter than assumed distance modulus to the LMC ($`\mu _{LMC}\approx 18.2`$ mag) – fully consistent with our adopted value.
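The consistency argument can be checked with a one-line rescaling: a Cepheid distance to NGC4258 tied to $`\mu _{LMC}=18.50`$ shrinks to approximately the geometric maser distance once the shorter modulus adopted here is used. A minimal sketch (our illustration, using only the numbers quoted above):

```python
d_cepheid = 8.1                      # Mpc, Cepheid distance to NGC4258 for mu_LMC = 18.50
mu_assumed, mu_adopted = 18.50, 18.22

# A shorter LMC modulus rescales any LMC-calibrated distance by 10**(d_mu/5)
d_rescaled = d_cepheid * 10 ** ((mu_adopted - mu_assumed) / 5.0)
print(f"Rescaled Cepheid distance to NGC4258: {d_rescaled:.1f} Mpc")   # ~7.1 Mpc
# compare with the geometric maser distance of 7.2 +/- 0.3 Mpc (Herrnstein et al. 1999)
```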
If necessary, any future refinement of the distance modulus to the LMC will correspond to an appropriate shift of the zero points of our calibration. Coefficients of the absolute magnitude $`PLC`$ and $`PL`$ relations for classical, FU mode Cepheids are given in Table 6.
Finally, based on our photometry of Cepheids and RR Lyr stars in both Magellanic Clouds we may provide a constraint on the absolute magnitude of fundamental mode Cepheids. The mean V-band magnitude of a 10-day period Cepheid is equal to $`V_{C,10}^{LMC}=14.28\pm 0.03`$ mag in the LMC and $`V_{C,10}^{SMC}=14.85\pm 0.04`$ mag in the SMC. The extinction free brightness of the other standard candle, RR Lyr stars, can be used for comparison. In the previous Subsection we provided the appropriate brightness of RR Lyr stars in both galaxies: $`V_{RR}^{LMC}=18.94\pm 0.04`$ mag and $`V_{RR}^{SMC}=19.43\pm 0.03`$ mag for the LMC and SMC, respectively. Including a small correction of the RR Lyr brightness due to metallicity differences, we find $`\mathrm{\Delta }V_{RR-C,10}^{LMC}=4.66\pm 0.05`$ mag and $`\mathrm{\Delta }V_{RR-C,10}^{SMC}=4.60\pm 0.05`$ mag for the LMC and SMC, respectively. The difference is with respect to an RR Lyr star of the LMC metallicity ($`[\mathrm{Fe}/\mathrm{H}]=-1.6`$ dex).
Consistent results in both Magellanic Clouds indicate that both Cepheids and RR Lyr stars are good standard candles. Assuming the absolute calibration of RR Lyr stars, $`M_V^{RR}=0.71\pm 0.07`$ mag (Popowski and Gould 1998), we obtain $`M_V^{C,10}=-3.92\pm 0.09`$ mag for a 10-day period Cepheid. Such an absolute brightness of Cepheids and our observed $`PL`$ relation ($`V_{C,10}^{LMC}=14.28`$ mag) indicate a distance modulus to the LMC of $`\mu _{LMC}=18.20`$ mag, fully consistent with the short distance modulus adopted for the absolute calibration via the LMC (Table 6). Thus, the distance scale of Cepheids is consistent with the distance scales inferred from RR Lyr stars and other reliable distance indicators.
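The magnitude arithmetic behind these two numbers is summarized in the sketch below (an illustration only; values are those quoted in the text and variable names are ours):

```python
# Difference between RR Lyr and a 10-day Cepheid, corrected to LMC metallicity
dV_LMC, dV_SMC = 4.66, 4.60          # mag, from the two Clouds
dV_mean = 0.5 * (dV_LMC + dV_SMC)    # 4.63 mag

M_V_RR = 0.71                        # mag, Popowski & Gould (1998) calibration
M_V_C10 = M_V_RR - dV_mean           # absolute magnitude of a 10-day Cepheid
print(f"M_V (10-day Cepheid) = {M_V_C10:+.2f} mag")      # -3.92

V_C10_LMC = 14.28                    # observed dereddened magnitude in the LMC
mu_LMC = V_C10_LMC - M_V_C10
print(f"Implied LMC distance modulus = {mu_LMC:.2f} mag")  # 18.20
```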
We may also compare the Cepheid absolute magnitude resulting from the RR Lyr calibration with the results of studies of Galactic Cepheids. The Galactic calibrations of Cepheids fall into three categories: those based on Hipparcos direct parallaxes (Feast and Catchpole 1997, Lanoix, Paturel and Garnier 1999), classical, pre-Hipparcos ones (Laney and Stobie 1994, Gieren et al. 1998), and those based on statistical parallaxes (Luri et al. 1998). They give, respectively, the brightest, intermediate and faintest luminosity of Cepheids at a given period and, as a consequence, the long, classical and short distance to the LMC. We will not discuss any of these calibrations in detail here.
As we already mentioned, the Hipparcos parallaxes of Cepheids are very uncertain and may be biased by many factors. Analyses of the Hipparcos data by Feast and Catchpole (1997) and Lanoix et al. (1999) lead to essentially the same result, namely that the absolute V-band magnitude of Galactic Cepheids with a 10-day period is about $`-4.22`$ mag. The classical calibration predicts a mean absolute magnitude of Galactic Cepheids of 10-day period equal to $`M_V^{C,10}=-4.07`$ mag (Laney and Stobie 1994, Gieren et al. 1998). Finally, the statistical parallax method predicts much fainter Cepheids: $`M_V^{C,10}=-3.86`$ mag for a 10-day period object (Luri et al. 1998).
Comparing these calibrations with the absolute magnitude inferred from the comparison of Cepheids with RR Lyr stars in the Magellanic Clouds and the most likely calibration of RR Lyr stars, we find that the Galactic calibration based on statistical parallaxes (Luri et al. 1998) is closest to our result. It is worth noting that the statistical parallax method gives consistent results for both Cepheids and RR Lyr stars: the RR Lyr calibration of Popowski and Gould (1998) is based, among others, on statistical parallax determinations.
Acknowledgements. We would like to thank Prof. Bohdan Paczyński for many discussions and help at all stages of the OGLE project. We thank Drs. K. Z. Stanek and D. Sasselov for valuable comments on the paper. The paper was partly supported by the Polish KBN grants 2P03D00814 to A. Udalski and 2P03D00916 to M. Szymański. Partial support for the OGLE project was provided with the NSF grants AST-9530478 and AST-9820314 to B. Paczyński.
## REFERENCES
* Alcock, C. et al. 1995, Astron. J., 109, 1652.
* Alibert, Y., Baraffe, I., Hauschildt, B., and Allard, F. 1999, Astron. Astrophys., 344, 551.
* Bauer, F. et al. 1999, Astron. Astrophys., 348, 175.
* Bica, E., Geisler, D., Dottori, H., Clariá, J.J., Piatti, A.E., and Santos Jr, J.F.C. 1998, Astron. J., 116, 723.
* Bono, G., Caputo, F., Castellani, V., and Marconi, M. 1999, Astrophys. J., 512, 711.
* Caldwell, J.A.R., and Coulson, I.M. 1986, MNRAS, 218, 223.
* Chiosi, C., Wood, P.R., and Capitanio, N. 1993, Astrophys. J. Suppl. Ser., 86, 541.
* Feast, M.W., and Catchpole, R.M. 1997, MNRAS, 286, L1.
* Gould, A., and Uza, O. 1998, Astrophys. J., 494, 118.
* Gieren, W.P., Fouqué, P., and Gómez, M. 1998, Astrophys. J., 496, 17.
* Guinan, E.F. et al. 1998, Astrophys. J. Letters, 509, L21.
* Herrnstein, J.R. et al. 1999, Nature, 400, 539.
* Kennicutt, R.C., Freedman, W.L., and Mould, J.R. 1995, Astron. J., 110, 1476.
* Kennicutt, R.C., et al. 1998, Astrophys. J., 498, 181.
* Kochanek, C.S. 1997, Astrophys. J., 491, 13.
* Laney, C.D., and Stobie, R.S. 1994, MNRAS, 266, 441.
* Lanoix, P., Paturel, G., and Garnier, R. 1999, MNRAS, in press, (astro-ph/9904298).
* Leavitt, H.S. 1912, Harvard Cir., 173.
* Luck, R.E, Moffett, T.J., Barnes, T.G., and Gieren, W.P. 1998, Astron. J., 115, 605.
* Luri, X., Gómez, A.E., Torra, J., Figueras, F., and Mennessier, M.O. 1998, Astron. Astrophys., 335, 81.
* Maoz, E., et al. 1999, Nature, in press, (astro-ph/9908140).
* Madore, B.F., and Freedman, W.L. 1991, P.A.S.P., 103, 933.
* Panagia, N. 1998, Mem. Soc. Astron. Italiana, 69, 225.
* Popowski, P., and Gould, A. 1998, in ”Post-Hipparcos Cosmic Candles, Eds. A. Heck and F. Caputo, Kluwer Academic Publ. Dordrecht; p. 53, astro-ph/9808006.
* Sasselov, D., et al. 1997, Astron. Astrophys., 324, 471.
* Schlegel, D.J., Finkbeiner, D.P., and Davis, M. 1998, Astrophys. J., 500, 525.
* Saio, H., and Gautchy, A. 1998, Astrophys. J., 498, 360.
* Stanek, K.Z. 1996, Astrophys. J. Letters, 460, L37.
* Stanek, K.Z, Zaritsky, D., and Harris, J. 1998, Astrophys. J. Letters, 500, L141.
* Udalski, A., Kubiak, M., and Szymański, M. 1997, Acta Astron., 47, 319.
* Udalski, A. 1998a, Acta Astron., 48, 113.
* Udalski, A. 1998b, Acta Astron., 48, 383.
* Udalski, A., Szymański, M., Kubiak, M., Pietrzyński, G., Woźniak, P., and Żebruń, K. 1998a, Acta Astron., 48, 1.
* Udalski, A., Szymański, M., Kubiak, M.,Pietrzyński, G., Woźniak, P., and Żebruń, K. 1998b, Acta Astron., 48, 147.
* Udalski, A., Pietrzyński, G., Woźniak, P., Szymański, M., Kubiak, M., and Żebruń, K. 1998c, Astrophys. J. Letters, 509, L25.
* Udalski, A., Soszyński, I., Szymański, M., Kubiak, M., Pietrzyński, G., Woźniak, P., and Żebruń, K. 1999a, Acta Astron., 49, 1.
* Udalski, A., Soszyński, I., Szymański, M., Kubiak, M., Pietrzyński, G., Woźniak, P., and Żebruń, K. 1999b, Acta Astron., 49, 45.
* Walker, A.R. 1992, Astrophys. J. Letters, 390, L81.
# Gamma-ray Burst Energetics
Pawan Kumar
Institute for Advanced Study, Princeton, NJ 08540
Abstract
We estimate the fraction of the total energy in a Gamma-Ray Burst (GRB) that is radiated in photons during the main burst. Random internal collisions among different shells limit the efficiency for converting bulk kinetic energy to photons. About 1% of the energy of the explosion is converted to radiation in the 10–10<sup>3</sup> keV energy band in the observer frame for long duration bursts (lasting 10 s or more); the efficiency is significantly smaller for shorter duration bursts. Moreover, about 50% of the energy of the initial explosion could be lost to neutrinos during the early phase of the burst if the initial fireball temperature is about 10 MeV or higher. If isotropic, the total energy budget of the brightest GRBs is $`\gtrsim 10^{55}`$ erg, a factor of $`\gtrsim 20`$ larger than previously estimated. Anisotropy of the explosion, as evidenced in two GRBs, could reduce the energy requirement by a factor of 10-100. Putting these two effects together, we find that the energy release in the most energetic bursts is about 10<sup>54</sup> erg.
Subject headings: gamma-rays: bursts – gamma-rays: theory
1. Introduction
The short, millisecond time variability of gamma-ray bursts is believed to arise in internal shocks, i.e. when faster moving ejecta from the explosion collide with slower moving material ejected at an earlier time (Paczynski & Xu 1994, Rees & Mészáros 1994, Sari & Piran 1997). The optical identification and measurement of redshifts for five GRBs have determined their distances and the amount of energy that would be radiated in an isotropic explosion (e.g. Metzger et al. 1997, Kulkarni et al. 1998, Kelson et al. 1999, Piran 1999 and references therein). In three of these cases (GRB 971214, 980703 and 990123), the total isotropic energy radiated is estimated to be in excess of 10<sup>53</sup> erg. For GRB 990123 the isotropic energy in the gamma-ray burst is estimated to be 3.4x10<sup>54</sup> erg. However, the steepening of the fall-off of the optical light curve, $`\sim 2`$ days after the explosion, suggests that the explosion was not isotropic, and the total radiated energy might only be $`\sim 6`$x$`10^{52}`$ erg (Kulkarni et al. 1999; Mészáros & Rees, 1999). There is little evidence for beaming in the other two cases.
The energy radiated in photons in gamma-ray bursts is only a fraction of the total energy released in the explosion. Collisions of shells or ejecta from the central source, believed to produce the highly variable gamma-ray burst emission, convert but a small fraction of the kinetic energy of the ejecta into thermal energy, which is shared among protons, electrons and the magnetic field. If the initial temperature of the fireball is larger than a few MeV then a fraction of the fireball energy is lost to neutrinos. Thus a significantly larger amount of energy than ‘observed’ must be released in these explosions. The purpose of this paper is to provide an estimate for the radiative efficiency of GRBs in the framework of the internal shock model (§2). Some aspects of the work presented here have been previously considered by Kobayashi et al. (1997) and Daigne & Mochkovitch (1998). The main points are summarized in §3.
2. Gamma-ray burst energetics
A fraction of the kinetic energy of ejecta in GRBs is converted into photons as a result of internal collision during the main burst. This efficiency factor is calculated below in §2.1. Just after the explosion, when the adiabatic cooling is small and the temperature of the fireball is several Mev, neutrinos are copiously produced and carry away a fraction of the energy of the explosion. The fraction of energy lost to neutrinos is calculated in §2.2.
2.1 Efficiency of internal shocks
The efficiency of conversion of the kinetic energy of ejecta to radiation via internal shocks has been considered by Kobayashi et al. (1997) and Daigne & Mochkovitch (1998). There are several differences between the calculation presented here and previous works. One is that we calculate the synchrotron emission from forward and reverse shocks in colliding shells and the Compton up-scattering of photon energy, by solving appropriate equations for the shock and the radiation, to determine the observed fluence in the energy band 10–10<sup>3</sup> keV. We also take into consideration that about one-third of the total thermal energy produced in colliding shells is taken up by electrons, and only this fraction is available to be radiated away. Finally, we treat in a consistent manner the energy radiated in shell collisions when the fireball is optically thick to Thomson scattering. In this case, photons do not escape the expanding ejecta but instead deposit their energy back into the shells and increase the kinetic energy of the ejecta. Most of this kinetic energy is not converted back to thermal energy until some later time when interstellar material is shocked. The reason for this is that shell mergers reduce the relative Lorentz factor of the remaining shells, so their subsequent mergers produce less thermal energy. The optical depth is important for bursts of duration ten seconds or less (hereafter referred to as short duration bursts).
We model the central explosion as resulting in random ejection of discrete shells each carrying a random amount of energy ($`ϵ_i`$), and with a random Lorentz factor ($`\gamma _i`$). The baryonic mass of i-th shell ($`m_i`$) is set by its energy ($`ϵ_i`$) and $`\gamma _i`$; $`m_i=ϵ_i/(c^2\gamma _i)`$. The time interval between the ejection of two consecutive shells is taken to be a random number with mean time interval such as to give the desired total burst duration. The Lorentz factor of shells is taken to be uniformly distributed between a minimum ($`\gamma _{min}=5`$) and a maximum ($`\gamma _{max}`$) value. The energy conversion efficiency is more or less independent of the number of shells ejected in the explosion so long as the number of shells is greater than a few.
When two cold shells i & j collide and merge the thermal energy produced is
$$\mathrm{\Delta }E=\gamma _f\left[(m_i^2+m_j^2+2m_im_j\gamma _r)^{1/2}-(m_i+m_j)\right]c^2,$$
where $`\gamma _r=\gamma _i\gamma _j(1-v_iv_j)`$ is the Lorentz factor corresponding to the relative speed of collision, and $`\gamma _f=(m_i\gamma _i+m_j\gamma _j)(m_i^2+m_j^2+2m_im_j\gamma _r)^{-1/2}`$ is the final Lorentz factor of the merged shells. The energy $`\mathrm{\Delta }E`$ is shared among protons, electrons and the magnetic field. In equipartition, electrons take up one third of the total energy, and only this part is available to be radiated. In collisions involving two equal mass shells with $`\gamma _r=2`$, 6% of the energy can be radiated away, whereas collisions with $`\gamma _r=10`$ result in a loss of 19% of the energy. The average relative Lorentz factor of shell collisions is about 2 if shells are randomly ejected in a relativistic explosion. Thus the average bolometric radiative efficiency of internal shocks is about 6%. Approximately 1/4 of the total radiated energy lies in the energy band 10–10<sup>3</sup> keV, and therefore the effective radiative efficiency of internal shocks, in the observed energy band, is about 1%. More precise results from numerical simulations are presented below.
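To illustrate the quoted efficiencies, the short sketch below evaluates the expressions above for two equal-mass shells; it reproduces the 6% and 19% figures for $`\gamma _r=2`$ and $`\gamma _r=10`$ (our own illustration; the one-third electron share is the equipartition assumption stated above):

```python
import math

def dissipated_fraction(m_i, m_j, gamma_r):
    """Fraction of the total energy of the two shells converted to internal
    (thermal) energy in the merger: Delta E / [(m_i*g_i + m_j*g_j) c^2]."""
    M = math.sqrt(m_i**2 + m_j**2 + 2.0 * m_i * m_j * gamma_r)  # merged rest mass
    return 1.0 - (m_i + m_j) / M

for gamma_r in (2.0, 10.0):
    f_th = dissipated_fraction(1.0, 1.0, gamma_r)   # equal-mass shells
    f_rad = f_th / 3.0                              # electrons take ~1/3 in equipartition
    print(f"gamma_r = {gamma_r:4.1f}: thermal fraction = {f_th:.2f}, "
          f"radiated fraction ~ {f_rad:.2f}")
# gamma_r = 2 gives ~6%, gamma_r = 10 gives ~19%
```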
The time scale for the transfer of energy from protons to electrons due to Coulomb collisions, even when the number density of protons is $`\sim 10^{13}`$ cm<sup>-3</sup> at the time when the fireball is just becoming optically thin, is much longer than the dynamical time, and so we assume that there is little transfer of energy from protons to electrons on the time scale of interest for internal shocks.
The synchrotron cooling time, $`t_s`$, is typically much less than the dynamical time within the first few minutes of the burst and does not limit the efficiency of GRBs. In any case we include the effect of finite synchrotron cooling time on the radiative efficiency. We also include the inverse Compton cooling of electrons to calculate the spectrum and the fraction of thermal energy radiated away in internal shocks.
Following each shell collision we calculate the thermodynamic state of the shocked gas, and the emergent photon spectrum resulting from synchrotron emission plus inverse Compton scattering. The optical depth of the emergent photons to Thomson scattering is calculated by following their trajectory along with the trajectories of the shells. If the optical depth of the fireball is greater than a few, the photon energy gets converted back to the energy of bulk motion via adiabatic expansion and the momentum deposited by photons. The energy, $`\delta E_j`$, and momentum, $`\delta P_j`$, incident on a shell $`j`$ (as measured in its rest frame) from photons created in a colliding shell an optical depth $`\tau _j`$ away, which is moving with a relative velocity $`v_{cj}`$ toward the j-th shell, are given by
$$\delta E_j=\eta _j\gamma _{cj}(1+v_{cj})^2,$$
and
$$\delta P_j=\frac{\eta _j}{c(v_{cj}\gamma _{cj})^3}\left[\gamma _{cj}^4(1+v_{cj})^2-4\gamma _{cj}^2(1+v_{cj})v_{cj}+2\mathrm{ln}[\gamma _{cj}^2(1+v_{cj})]-1\right],$$
where $`\eta _j=\mathrm{\Delta }E\mathrm{exp}(-\tau _j)[1-\mathrm{exp}(-\delta \tau _j)]/(6\gamma _f)`$ is the energy incident on the j-th shell if it were stationary with respect to the center of momentum of the colliding shells, $`\gamma _f`$ is the Lorentz factor of the merged shells, and $`\delta \tau _j`$ is the optical depth of the j-th shell. For $`\tau _j`$ dominated by scattering opacity, the flux from a steady source is attenuated by a factor of $`1/\tau _j`$ instead of the $`\mathrm{exp}(-\tau _j)`$ given above. However, the energy/momentum received from a transient source on the short, photon transit time is reduced by a factor of $`\mathrm{exp}(-\tau _j)`$. The remainder of the energy/momentum is received on a longer time scale, of order the photon diffusion time, and is included in our numerical computation where appropriate. For elastic Thomson scattering by cold electrons the incident photon energy is only partially absorbed in optically thick shells, as a result of the adiabatic expansion of the shell. The energy–momentum intercepted by a shell which is moving away from the energy producing shell is much smaller and is given by
$$\delta E_j=\frac{\eta _j}{(1+v_{cj})^2\gamma _{cj}^3},$$
and
$$\delta P_j=\frac{\eta _j}{c(v_{cj}\gamma _{cj})^3}\left[\frac{(1+v_{cj})^2-1}{(1+v_{cj})^2}-\frac{4v_{cj}}{1+v_{cj}}+2\mathrm{ln}(1+v_{cj})\right].$$
The energy and momentum absorbed by the shell determine the change to its bulk velocity and its expansion, which we include in our numerical simulation to determine the radiative efficiency of internal collisions. Also included in our calculation is the conversion of the thermal energy of protons and the magnetic field to bulk motion as a result of adiabatic expansion.
The radiative efficiency, $`\eta `$, of a burst is defined as the total energy radiated in the energy band 10–10<sup>3</sup> kev, during a time interval in which shell collisions take place, divided by the total energy released in the explosion.
Figure 1 shows a plot of $`\eta `$ as a function of burst duration. The total energy in bursts, in all of the cases shown in the figure, was taken to be $`10^{52}`$ erg, independent of the burst duration. The value of $`\eta `$ is found to be about 1% for long duration bursts. The bolometric radiative efficiency of random internal shocks is found to be larger by a factor of about 4. The efficiency decreases with decreasing duration (for a fixed $`\gamma _{max}`$). Internal shocks are very inefficient for short duration bursts because of photon trapping, as a number of shell collisions occur when the shell radii are small and the fireball is optically thick. For instance, the radiative efficiency for bursts of 1 sec duration is about 0.2% if $`\gamma _{max}=200`$. The radiative efficiency for short duration bursts can increase significantly if the Lorentz factor of the ejecta is larger in shorter duration bursts (see fig. 1). The choice of a different distribution function for the Lorentz factor of the ejecta has little effect on the efficiency of long duration bursts. However, the efficiency of short duration bursts can increase significantly if the width of the distribution function is taken to be small, so that shells collide at larger radii, enabling photons to escape freely; for instance, in the case where $`\gamma _{min}=50`$ & $`\gamma _{max}=200`$ the radiative efficiency is nearly constant, $`\eta \approx 0.006`$, for bursts of duration 1 sec and longer (see fig. 1). The efficiency for short duration bursts is also enhanced if they are less energetic than longer duration bursts, thereby requiring smaller baryonic loading.
2.2 Energy loss due to neutrino production
Some fraction of gamma-ray bursts display variability on a millisecond time scale, if not less. The energy of the explosion in these cases is expected to be generated in a region of size about 100 km. If the total energy release in the explosion underlying a GRB is $`E`$ and it involves the ejection of $`N`$ shells, each of which has an initial radius $`r_0`$, then the mean initial temperature of the shells is $`T_0=[3E/(4\pi aNr_0^3)]^{1/4}=20.6`$ MeV $`E_{53}^{1/4}r_{100}^{-3/4}N^{-1/4}`$; where $`a`$ is the radiation constant, $`E_{53}`$ is the energy in units of 10<sup>53</sup> erg, and $`r_{100}=r_0/100`$ km. We note that the energy of the explosion ($`E`$) is greater than the observed energy in the gamma-ray emission by a factor of at least ten because of the inefficiency of photon production discussed in §2.1. Moreover, the value of $`E`$ that should be used in calculating the temperature is the total isotropic energy of the explosion and not the reduced energy due to the finite opening angle of a jet, so long as the jet was produced in the initial explosion and not by some collimation effect of the surrounding medium subsequent to a spherical explosion. Thus $`E\sim 10^{53}`$ erg is a reasonable value for the five GRBs with known redshift distance.
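As a numerical check of the quoted temperature scale, the sketch below evaluates the expression for $`T_0`$ with the fiducial values $`E=10^{53}`$ erg and $`r_0=100`$ km; taking $`N=1`$ reproduces the quoted coefficient (our illustration, using standard cgs constants):

```python
import math

a_rad = 7.566e-15       # radiation constant, erg cm^-3 K^-4
k_B   = 8.617e-11       # Boltzmann constant, MeV K^-1

E   = 1.0e53            # erg, total energy of the explosion
N   = 1                 # number of shells (assumption for this illustration)
r_0 = 1.0e7             # cm (100 km), initial shell radius

T_K   = (3.0 * E / (4.0 * math.pi * a_rad * N * r_0**3)) ** 0.25
T_MeV = k_B * T_K
print(f"T_0 ~ {T_MeV:.1f} MeV")   # ~20 MeV, close to the quoted 20.6 MeV
```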
Neutrinos produced by $`e^{-}`$–$`e^+`$ annihilation and by the decay of muons and pions result in the loss of a fraction of the energy of the explosion. The energy loss rate due to $`e^{-}`$–$`e^+`$ annihilation is given by
$$\frac{dE_n}{dt}=-2n_ec\sigma _eϵ_e(4\pi r^2r_0n_e),$$
where $`E_n=E/N`$, $`n_e`$ is the number density of electrons, $`ϵ_e`$ is the mean thermal energy of electrons, $`4\pi r^2r_0`$ is the volume of the shell in its comoving frame when the shell has expanded to a radius $`r`$ (the shell thickness, $`r_0`$, is very nearly constant in the initial acceleration phase), and $`\sigma _e=2\times 10^{-44}(ϵ_e/1\mathrm{MeV})^2`$ cm<sup>2</sup> is the effective cross section for $`e^+`$ and $`e^{-}`$ annihilation to produce neutrinos of all flavors. Since $`E_n\approx 12\pi r^2r_0n_eϵ_e\gamma `$, $`n_e=2.34\times 10^{34}T_{10}^3`$ cm<sup>-3</sup> ($`T_{10}=T/10`$ MeV), and $`ϵ_e=3.15kT`$, we find
$$\frac{d\mathrm{ln}E}{dt}=-\frac{9.5\times 10^3}{\gamma }T_{10}^5.$$
Initially the Lorentz factor of shells ($`\gamma `$) increases linearly with their radius and the temperature declines as the inverse of the radius. Using these relations we can integrate the above equation and find that
$$\mathrm{ln}\left[\frac{E(2t_0)}{E(t_0)}\right]=-1.9\times 10^3t_0\left(\frac{T_0}{10\mathrm{MeV}}\right)^5,$$
where $`t_0`$ is the larger of $`r_0/c`$ and the time when the shell becomes optically thin to neutrinos; shells become optically thin to electron neutrinos when $`T_0\lesssim 10.2`$ MeV. A neutrino propagating outward sees the mean electron energy and density decrease, and therefore the opacity for scattering in an expanding medium is smaller than in a corresponding static shell.
For $`r_0=10^7`$ cm and $`T_0=7`$ MeV we find that 10% of the energy of the explosion is lost to neutrinos from $`e^+`$–$`e^{-}`$ annihilation, and for $`T_0=10`$ MeV, 50% of the energy is lost.
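These percentages follow directly from the integrated loss formula above with $`t_0=r_0/c`$; a minimal sketch (our illustration):

```python
import math

r_0 = 1.0e7                # cm
c   = 3.0e10               # cm s^-1
t_0 = r_0 / c              # ~3.3e-4 s

for T0_MeV in (7.0, 10.0):
    ln_ratio = -1.9e3 * t_0 * (T0_MeV / 10.0) ** 5
    lost = 1.0 - math.exp(ln_ratio)
    print(f"T_0 = {T0_MeV:4.1f} MeV: fraction lost to neutrinos ~ {lost:.0%}")
# T_0 = 7 MeV -> ~10%,  T_0 = 10 MeV -> ~47%, i.e. roughly half
```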
We next calculate the fraction of energy carried away by neutrinos produced by the decay of muons and pions. Let us consider an unstable particle ($`\mu ^\pm `$ or $`\pi ^\pm `$) of mass $`m_d`$ that has a lifetime $`t_d`$ and number density $`n_d`$, and for which the amount of energy carried away by neutrinos when it decays is $`ϵ_\nu `$. In the temperature range of interest to us, these particles are created by $`e^\pm `$ interactions on a time scale short compared to their decay time, and so their number density is given by the thermal distribution, i.e.
$$n_d=10.5T^3\left(\frac{m_dc^2}{kT}\right)^{3/2}\mathrm{exp}(-m_dc^2/kT)\mathrm{cm}^{-3}.$$
The rate of loss of energy of the explosion to escaping neutrinos produced by the decay of these particles is given by
$$\frac{dE}{dt}=-\frac{8\pi r^2r_0n_dϵ_\nu }{t_d}\approx -\frac{Eϵ_\nu }{8t_dkT}\left(\frac{m_dc^2}{kT}\right)^{3/2}\mathrm{exp}(-m_dc^2/kT).$$
This equation can be easily integrated to yield<sup>1</sup>

$$\mathrm{ln}\left[\frac{E(2t_0)}{E(t_0)}\right]=-\frac{t_0}{t_d}\frac{ϵ_\nu }{8kT}\left(\frac{m_dc^2}{kT_0}\right)^{1/2}\mathrm{exp}(-m_dc^2/kT).$$

<sup>1</sup> The $`\nu _\mu `$’s produced in these decays find the shell to be optically thin so long as the shell temperature is less than about 15 MeV. For $`T_0\gtrsim 15`$ MeV the $`\nu _\mu `$’s are trapped in the fireball and their distribution is thermal, in equilibrium with $`e^\pm `$. In this case roughly 50% of the fireball energy is lost to neutrinos.
For muons $`m_d=105.66`$ MeV, $`t_d=2.2`$x$`10^{-6}`$ s, and $`ϵ_\nu \approx 70`$ MeV. Thus the fraction of energy lost through the decay of $`\mu ^\pm `$ for $`T_0=10`$ MeV and $`t_0=3.3`$x10<sup>-4</sup> s is 0.5%, whereas at $`T_0=15`$ MeV, 10% of the energy of the fireball is lost to neutrinos from muon decay.
For pions $`m_d=139.6`$ MeV, $`t_d=2.55`$x$`10^{-8}`$ s, and $`ϵ_\nu \approx 29`$ MeV. The fraction of energy lost through the decay of $`\pi ^\pm `$, if we take $`T_0=10`$ MeV and $`t_0=3.3`$x10<sup>-4</sup> s, is 2%, whereas at $`T_0=15`$ MeV, 50% of the energy of the explosion is lost to neutrinos from pion decay.
In summary, we find that a fraction of the energy of the explosion is lost to neutrinos. The fraction lost depends on the initial temperature of the fireball, and for plausible burst parameters roughly half the energy of the explosion is carried away by neutrinos. Since the typical energy of these neutrinos is about 10-30 MeV, they are undetectable from a typical GRB source at $`z\sim 1`$. The total energy in high energy neutrinos, $`ϵ_\nu \gtrsim 10^{14}`$ eV, produced in internal shocks, is about two orders of magnitude smaller than the energy in the 1–30 MeV neutrinos considered here. However, the much larger cross-section for the high energy neutrinos makes them accessible to the large neutrino detectors under construction (Waxman and Bahcall, 1998).
3. Summary and discussion
We find that the efficiency for internal shocks to convert the energy of the explosion to radiation in the energy band 10–10<sup>3</sup> keV is of order 1% if electrons are in equipartition with protons and the magnetic field. The efficiency is smaller if the electron energy is less than the equipartition value, as suggested by analyses of afterglow emission (e.g. Waxman 1997). Energy loss due to neutrino production at initial times, when the fireball temperature is $`\sim 10`$ MeV for short duration bursts, could be significant, further reducing the energy available for radiation by a factor of $`\sim `$ two. The bolometric radiative efficiency of random internal shocks is found to be a factor of about 4 larger. A recent work by Panaitescu, Spada and Mészáros (1999) finds the radiative efficiency of internal shocks in the 50–300 keV band to be about 1%, consistent with our result.
For GRB 971214, 980703 and 990123, the total isotropic energy radiated, in the BATSE energy band, has been estimated from their observed redshifts and fluences and found to be 3x10<sup>53</sup>, 2x10<sup>53</sup> and 3.5x10<sup>54</sup> erg respectively. The flux in higher energy photons could increase the total energy budget by a factor of $`\sim 2`$. These three bursts are the most energetic of the five bursts for which redshifts (or lower limits to $`z`$) are known. These energies should of course be corrected for beaming and for the efficiency of photon production.
It has been suggested that the energy for GRB 990123, 3.5x10<sup>54</sup>erg for isotropic explosion, is reduced by a factor of about 50 due to finite beaming angle (Kulkarni et al. 1999; Mészáros & Rees 1999). However, the inefficiency of producing radiation raises the energy budget by a factor of about 100, so the energy in the explosion is more than 10<sup>54</sup> erg even if beaming is as large as suggested. For GRB 980703 (at $`z=0.966`$), for which there is no evidence for beaming, the energy in the explosion is also of order 10<sup>54</sup> erg. So it appears that the total energy of explosion for the most energetic bursts is close to or possibly greater than 10<sup>54</sup> erg. This energy is greater than what one can realistically hope to extract from a neutron star mass object.
The efficiency for gamma-ray production is significantly increased if photons during the main burst are produced in both internal and external shocks. However, since it is very difficult to get short time variability in external shocks (Sari & Piran, 1997) only a small fraction of energy in highly variable bursts can arise in external shocks. The energy requirement is also reduced if shells ejected in explosions are highly inhomogeneous. This will be discussed in a future paper.
Acknowledgments: I thank John Bahcall for encouraging me to writeup this work and for his comments. I am grateful to Plamen Krastev for providing accurate neutrino cross-sections, and I am indebted to Ramesh Narayan, Tsvi Piran and Peter Mészáros for many helpful discussions. I thank an anonymous referee for suggestions to clarify some points in the paper.
REFERENCES
Daigne, F. and Mochkovitch, R. 1998, MNRAS
Kobayashi, S., Piran, T. & Sari, R. 1997, ApJ 490, 92
Kelson et al., 1999, IAUC 7096
Kulkarni, S.R., et al. 1998, Nature 393, 35
Kulkarni, S.R. et al., 1999, astro-ph/9902272
Metzger, M.R., et al. 1997, Nature, 387, 879
Mészáros, P., and Rees, M.J. 1999, astro-ph/9902367
Narayan, R., Piran, T., and Shami, A., 1991, ApJ 379, L17
Paczynski, B. and Xu, G. 1994, ApJ 427, 708
Panaitescu, A., Spada, M. & Mészáros, P., 1999, astro-ph/9905026
Piran, T., 1999, to appear in Physics Reports
Rees, M.J. and Mészáros, P., 1994, ApJ 430, L93
Sari, R. and Piran, T., 1997, MNRAS 287, 110
Waxman, E., 1997, ApJ 489, L33
Waxman, E. and Bahcall, J.N., 1998, astro-ph/9807282
Figure Captions
Figure 1.— The efficiency for the conversion of the energy of the explosion to radiation, in the energy band 10–10<sup>3</sup> keV, via internal shocks ($`\eta \times 100`$) is shown as a function of the time duration of GRBs. The energy lost to neutrinos is highly temperature dependent and has not been included in this calculation. The continuous curve corresponds to a maximum Lorentz factor of the ejected shells of 200, and for the dotted curve the maximum Lorentz factor is 500. The minimum value of the Lorentz factor in both these cases was taken to be 5. The minimum Lorentz factor for the dashed curve was taken to be 50 & $`\gamma _{max}=200`$. Each point on the curve was calculated by averaging 250 realizations of ’explosions’ in which 50 shells were randomly expelled, as described in section 2.1. The total energy in each of the explosions was taken to be 10<sup>52</sup> erg, independent of the burst duration. The radiative efficiency is almost independent of the number of shells ejected so long as the number is larger than a few.
Li et al. reply: In their Comment on our recent Letter , Bhattacharya et al. note that for the geometry used in their experiment to try to detect the modes supported by the Meissner state of a $`d`$-wave superconductor predicted by $`\stackrel{˘}{\mathrm{Z}}`$utić and Valls , a field of first vortex penetration $`H_{c1}^{}`$ of 300 Oe was measured. They then argue that the crossover field $`H^{}`$ separating the regime dominated by nonlocal electrodynamics and a local regime is of order only 20 Oe, suggesting that nonlocal effects can be neglected for their geometry for most of the intermediate field range $`H^{}<H<H_{c1}^{}`$. Instead, they attribute their failure to observe these modes to the absence of gap nodes due to an out-of-phase admixture of a secondary order parameter component. Here we question whether the theory of $`\stackrel{˘}{\mathrm{Z}}`$utić and Valls can be applied in this intermediate range, argue that a $`d+is`$ state in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.95</sub> (YBCO) is unsupported by other types of experimental evidence, and continue to maintain that the nonlinear Meissner effect is unlikely to be observed in any of its manifestations, due in part to nonlocal effects.
In our paper, we pointed out simply that the nonlocal crossover field $`H^{}\sim \frac{\mathrm{\Phi }_0}{\pi \lambda _0^2}\frac{\xi _{0c}}{\xi _0}`$ ($`\lambda _0`$ is the penetration depth for supercurrents in the plane, and $`\xi _0`$ and $`\xi _{0c}`$ are the coherence lengths in the plane and along the $`\widehat{c}`$ axis, respectively), for the geometry in question, with a dominant (001) surface and $`H\parallel \widehat{a},\widehat{b}`$, is generically of the same order of magnitude as the field below which the Meissner state is thermodynamically stable, $`H_{c1}\sim \frac{\mathrm{\Phi }_0}{4\pi \lambda _0\lambda _{0c}}\mathrm{ln}\overline{\kappa }`$, where $`\lambda _{0c}`$ is the penetration depth for supercurrents along the $`\widehat{c}`$ axis, and $`\overline{\kappa }=\sqrt{\kappa _0\kappa _{0c}}`$ with $`\kappa _0`$ and $`\kappa _{0c}`$ the plane and the $`\widehat{c}`$-axis Ginzburg-Landau parameters, respectively. Bhattacharya et al. argue that the experimentally measured field of first vortex penetration $`H_{c1}^{}`$ is as much as an order of magnitude higher. With this problem in mind, we in fact alluded to precisely this difficulty of determining the correct lower critical field in our paper.
We do not have a complete understanding of the high value of $`H_{c1}^{}`$ consistent with all experimental observations at this time. An obvious explanation is the existence of a surface barrier, as is commonly observed in type-II materials . On the other hand, the expected hysteresis in such systems due to barrier profile asymmetry has apparently not been observed.
The real questions are as follows: For fields $`H_{c1}<H<H_{c1}^{}`$, is the sample still in the pure Meissner state and can one expect to observe manifestations of the nonlinear Meissner effect? We believe not. Although the bulk magnetization has not occurred, the intermediate field state is thermodynamically unstable and vortices may be easily trapped at the sample corners or within the skin depth. Two recent high-resolution penetration depth measurements failed to measure the predicted temperature dependence in this field range, but did observe a large linear-$`H`$ field dependence most likely attributed to trapped vortices . These vortices may make the field penetration layer highly nonuniform, leading to a broadening of the $`\stackrel{˘}{\mathrm{Z}}`$utić-Valls harmonics.
Bhattacharya et al. argue that their failure to observe the predicted small nonlinearities is strong evidence for the existence of a true gap in the bulk YBCO system, as created, e.g., by a bulk $`d+is`$ order parameter with $`\mathrm{\Delta }_s/\mathrm{\Delta }_d`$ of order a few percent. This claim is, however, inconsistent with several other experiments which indicate the existence of a residual normal fluid in YBCO, in one case down to 30 mK. Were a small out-of-phase order parameter component $`\mathrm{\Delta }_s`$, of order several degrees K, to exist, the gap in the density of states would exponentially suppress quasiparticle excitations at mK temperatures. On the other hand, a much smaller gap, of order tens or even hundreds of milliK, is insufficient to eliminate the signal in the experiment of Ref. .
The experiment of Bhattacharya et al. is a clever and careful attempt to measure a very small effect, and, as pointed out by $`\stackrel{˘}{\mathrm{Z}}`$utić and Valls , in principle, is a sensitive probe of gap structure. However, the interpretation of the results is complicated by the influence of trapped flux, and it seems prudent that effects of this type be ruled out or understood before claiming the observation of more exotic effects, such as the $`d+is`$ state invoked by Bhattacharya et al. .
M.-R. Li<sup>1</sup>, P.J. Hirschfeld<sup>2</sup>, and P. Wölfle<sup>1</sup> <sup>1</sup>Institut für Theorie der Kondensierten Materie, Universität Karlsruhe, 76128 Karlsruhe, Germany. <sup>2</sup>Department of Physics, University of Florida, Gainesville, FL 32611, USA.
Received January 1999 PACS numbers: 74.25.Nf, 74.20.Fg
# Download relaxation dynamics on the WWW following newspaper publication of URL
## Abstract
A few key properties of the World-Wide-Web (WWW) have been established, indicating the lack of any characteristic scales for the WWW, both in its topology and in its dynamics. Here, we report an experiment which quantifies another power law describing the dynamical response of the WWW to a Dirac-like perturbation, specifically how the popularity of a web site evolves and relaxes as a function of time in response to the publication of a notice/advertisement in a newspaper. Following the publication of an interview of the authors by a journalist which contained our URL, we monitored the rate of downloads of our papers and found it to obey a $`1/t^b`$ power law with exponent $`b=0.58\pm 0.03`$. This small exponent implies long-term memory and can be rationalized using the concept of persistence, which specifies how long a relaxing dynamical system remains in a neighborhood of its initial configuration.
It is generally accepted that the World-Wide-Web (WWW) provides one of the most efficient methods for retrieving information. However, little is known about how information actually flows through the WWW, and even less about how the WWW interacts with other types of media. Most studies have until now focused on statistical properties of the WWW and of the people surfing on it, the “internauts”. A few key properties have been established, indicating the lack of any characteristic scales for the WWW: (i) the distribution of the number of pages per site is an approximate power law; (ii) the distributions of outgoing (Uniform Resource Locators or URLs found on an HTML document) and incoming (URLs pointing to a certain HTML document) links are well-described by a universal power law which seems independent of the search engine; (iii) the distribution of independent hits or users per web-site also seems to follow a power law, and the ranking of sites according to their popularity is well-described by Zipf’s law; (iv) the distribution of waiting times to access a given page is also a power law distribution, and the correlation function of the WWW traffic intensity as a function of time also exhibits a slow power law decay.
These properties are believed to reflect the evolutionary self-organizing dynamics of the WWW, which is not well-understood and is the subject of active research. The WWW provides in particular a very interesting proxy for a fast evolving ecology of heterogeneous agents in which several different time scales appear, ranging from the largest time scale corresponding to a significant evolution of the web network (months to years), to the response adjustment time of agents to network evolution or to novel information (hours to months), down to the access times (seconds to minutes) of single WWW pages.
Here, we report an experiment which probes a property belonging to the intermediate time scale. Specifically, we quantify how the popularity of a web site evolves and relaxes as a function of time, in response to the publication of a notice/advertisement in a newspaper. The authors were interviewed by a journalist from the Danish newspaper JyllandsPosten on a subject of rather broad and catchy interest, namely stock market crashes. The interview was published on the 14 April 1999 in both the paper version of the newspaper as well as in the electronic version (with access restricted to subscribers) and included the URLs where the authors’ research papers on the subject could be retrieved. Specifically, the URLs were the search engine of the Los Alamos preprint server and the URL of the first author’s home-page at the Niels Bohr Institute’s web-site. Naturally, we had no means of monitoring the downloads from the Los Alamos preprint server. However, all WWW-activity on the Niels Bohr Institute’s web-site is continuously logged and kept for record. It was hence possible to monitor the number of downloads of papers as a function of time.
Since the interview was published in Danish, the experiment only probes a small fraction of the internauts, namely those capable of reading Danish, thus essentially people of Danish, Icelandic, Norvegian and Swedish origin and their immediate surroundings. The results reported below have not been reproduced as the “impact” by the publication of the interview provides a rather unique opportunity to monitor in real time the dynamics of information spreading and persistence. The statistical significance can thus be improved in principle by repeating this experiment several times.
In figure (3), we show the cumulative number of downloads $`N`$ as a function of time $`t`$ since the publication of the interview. Only downloads of papers already posted on the home-page at the time of the publication of the interview have been included in the count, in order to keep the experiment as well-defined as possible. The error-bars are taken as the square-root of the number. We see that the data is surprisingly well-captured over two decades by the relation
$$N\left(t\right)=\frac{a}{1-b}t^{1-b}+ct,$$
(1)
corresponding to a download rate $`dN(t)/dt=at^{-b}+c`$, giving the number of downloads per unit time at a time $`t`$ after the publication of the interview. The constant background rate $`c`$ takes into account downloads from people unaware of the interview as well as robots. The best fit parameters are $`a=23.1\pm 0.5`$ days<sup>-1</sup>, $`b\simeq 0.58\pm 0.03`$ and $`c\simeq 0.76\pm 0.31`$ days<sup>-1</sup>, over a total time interval of $`\sim 100`$ days. Expression (1) thus establishes a novel self-similar relationship for the dynamical behavior on the WWW, describing the slow relaxation of the system after an essentially Dirac-like excitation. The coefficient $`a`$ controls the absolute number of downloads per unit time and is thus not universal: it reflects the size of the internaut population concerned by the experiment. Similarly, the coefficient $`c`$ controls the background rate and depends on 1) how easily the page can be found and 2) the general interest of the subjects posted on the page.
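For concreteness, a fit of Eq. (1) to a download time series takes only a few lines; the sketch below is our own illustration (the arrays `t_days` and `N_obs` are placeholders, here filled with synthetic data, not the actual log of the experiment):

```python
import numpy as np
from scipy.optimize import curve_fit

def cumulative_downloads(t, a, b, c):
    """Cumulative number of downloads, Eq. (1): N(t) = a/(1-b) t^(1-b) + c t."""
    return a / (1.0 - b) * t ** (1.0 - b) + c * t

# t_days and N_obs stand for the measured times (days since publication) and
# cumulative download counts; synthetic values are used here for illustration.
t_days = np.arange(1.0, 101.0)
N_obs = cumulative_downloads(t_days, 23.1, 0.58, 0.76)

popt, pcov = curve_fit(cumulative_downloads, t_days, N_obs,
                       p0=(20.0, 0.5, 1.0),
                       sigma=np.sqrt(N_obs))   # square-root error bars, as in the text
a_fit, b_fit, c_fit = popt
print(f"a = {a_fit:.1f} /day, b = {b_fit:.2f}, c = {c_fit:.2f} /day")
```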
The finding that the relaxation exponent $`b`$ is less than one has an important consequence, namely non-stationarity and “aging” in the technical sense of a breaking of ergodicity. Consider $`N`$ successive downloads separated in time by $`\mathrm{\Delta }t_i,i=1,\mathrm{\dots },N,`$ where $`\mathrm{\Delta }t_1+\mathrm{\Delta }t_2+\mathrm{\dots }+\mathrm{\Delta }t_N=t=N\overline{\mathrm{\Delta }t}`$. The distribution of download time intervals $`\mathrm{\Delta }t`$ is a power law $`1/\mathrm{\Delta }t^{1+x}`$, where $`x`$ is determined from the fact that
$$\overline{\mathrm{\Delta }t}\sim \int _0^{\mathrm{\Delta }t_{max}}d\tau \frac{\tau }{\tau ^{1+x}}\sim \mathrm{\Delta }t_{max}^{1-x}.$$
Since the maximum $`\mathrm{\Delta }t_{max}`$ among $`N`$ trials is typically given by $`N\int _{\mathrm{\Delta }t_{max}}^{\mathrm{\infty }}\frac{d\tau ^{\prime }}{\tau ^{\prime 1+x}}\sim 1`$, we have $`\mathrm{\Delta }t_{max}\sim N^{\frac{1}{x}}`$. Thus $`t=N\overline{\mathrm{\Delta }t}\sim N^{\frac{1}{x}}`$, giving $`N\sim t^x`$, for $`x<1`$. We can thus identify the exponent $`x`$ with $`1-b`$ and find that the distribution of waiting times between successive downloads is a power law with an exponent $`x=1-b\simeq 0.42`$ less than one (since $`b\simeq 0.58`$). One can then show that this power law distribution of time intervals between downloads implies that the longer the time since the last download, the longer the expected time till the next one. This is a hallmark of “aging”. The mechanism is similar to the “weak breaking of ergodicity” in spin glasses that occurs when the exponent $`x`$ of the distribution of trapping times in meta-stable states is less than one.
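The scaling argument can be verified numerically by drawing waiting times from a power-law distribution with tail exponent $`1+x`$ and counting cumulative events; a single run is very jagged, but averaged over realizations the measured exponent comes out close to $`x`$. A minimal sketch (our illustration, not part of the original analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
x = 0.42                     # = 1 - b for b ~ 0.58
n_events = 100_000
n_runs = 20

slopes = []
for _ in range(n_runs):
    # waiting times with P(dt) ~ dt^-(1+x), dt >= 1 (Pareto tail)
    dt = rng.random(n_events) ** (-1.0 / x)
    t = np.cumsum(dt)                       # event times
    N = np.arange(1, n_events + 1)          # cumulative number of events
    sel = slice(n_events // 100, n_events)  # fit over the later part of the run
    slopes.append(np.polyfit(np.log(t[sel]), np.log(N[sel]), 1)[0])

print(f"N(t) ~ t^x with measured x ~ {np.mean(slopes):.2f} "
      f"+/- {np.std(slopes):.2f} (expected {x})")
```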
How can we rationalize relation (1)? We propose the following very naive but illustrative model: think of the population of internauts as subjected to the influence of the newspaper publication, which may trigger an activity (downloading from our site). Let us think of this influence as a field that diffuses and spreads dynamically in the complex network of internauts and in their minds. This diffusive field captures the dynamics of information, rumor spreading, psychological decisions and so on. Let us assume that the decision to act and download from our site is triggered when the influence field reaches a threshold. Then the rate $`dN(t)/dt`$ is proportional to the probability for the field not to have reached the threshold, i.e. to the probability of remaining in the neighborhood of its initial state. This problem falls in the class of the so-called “persistence phenomenon” discovered in a large variety of systems, which specifies how long a relaxing dynamical system remains in a neighborhood of its initial configuration. For a Gaussian process, the persistence exponent $`x`$ can be shown to be a functional of the two-point temporal correlator. For Markovian or weakly non-Markovian random walk processes, the exponent $`x`$, and therefore $`b`$, is close to $`1/2`$, as we find empirically.
Figure (3) shows the residue obtained by subtracting formula (1) from the data points, together with its spectrum: the spectrum of this residue is sharply peaked at a characteristic frequency corresponding exactly to a period of one week. Since the publication date of 14 April was a Wednesday, the dips shown in figure (3) correspond to weekends: apparently, most people probed by the experiment still mainly have Internet and printer access through their job, which explains the low activity during weekends and the weekly periodicity.
# Discovery of a faint Field Methane Brown Dwarf from ESO NTT and VLT observations
## 1 Introduction
Despite large observational efforts during recent years in both wide field and targeted searches for very cold brown dwarfs, the number of such objects known so far remains extremely small. Since 1995, and until June 1999, the only genuine one identified was Gliese 229B (Nakajima 1995, Oppenheimer 1996), the coolest substellar object known, with a temperature below 1000 K, a mass in the range 20-50 M<sub>J</sub> (Jupiter mass), and an age in the range 0.5-1 Gyr. A second object of this class, SDSS 1624+00, has been discovered recently in the Sloan Digital Sky Survey (Strauss 1999), after identification from the survey database by its unusual red color. Follow-up spectroscopy of this object in the visible with the Apache Point 3.5m telescope and in the IR with UKIRT identified it as a methane brown dwarf like Gliese 229B. A couple of similar objects have since then been identified (Tzetanov, private communication) from the SDSS survey. At almost the same time, 4 other similar objects were identified from the Two Micron All-Sky Survey (2MASS) (Burgasser 1999), and confirmed as methane brown dwarfs from visible spectroscopy at Palomar and IR spectroscopy at Keck.
In this paper we report our discovery of a new methane brown dwarf in the NTT Deep Field, a small area of the sky that was the target of very deep exposures in the visible and the near-infrared using the SUSI and SOFI instruments at the ESO New Technology Telescope (NTT) (Arnouts 1999, Saracco 1999). One object, NTTDF J1205-0744, stands out in these images for its very red (i-J) $`>`$ 6 color index. However, it is very blue at longer wavelengths, with (J-Ks) = -0.15. Near-infrared spectroscopy with SOFI, and with ISAAC at the ESO Very Large Telescope (VLT), has confirmed the remarkable similarity of this object to Gliese 229B. The powerful combinations NTT/SOFI and VLT/ISAAC made the observations reported here possible, in spite of the faint apparent magnitude of NTTDF J1205-0744. Although the raw S/N is limited (1 to 2 per pixel, 5 to 10 after rebinning), our results secure the identification of NTTDF J1205-0744 as a new field methane brown dwarf.
## 2 Observations and data reduction
The NTT Deep Field covers an area of 2.3 $`\times `$ 2.3 arcminutes in the visible down to AB magnitude limits of 27.2, 27.0, 26.7 and 26.3 in B, V, r, and i, and 5 $`\times `$ 5 arcminutes in the IR down to magnitude limits of 24.6 and 22.8 in J and Ks.
The entire dataset of the NTT Deep Field Project, primarily targeted at the study of faint galaxy populations, as well as detailed information on data acquisition and reduction, are publicly available at http://www.eso.org.
J and i band images of the field containing NTTDF J1205-0744 are shown in figure 1.
After the identification of NTTDF J1205-0744 from its unusual, extremely red (i-J) colour in April 1998, we carried out spectroscopy with SOFI at the NTT using Target of Opportunity Time on 30 June - 1 July 1998. The spectrum, covering the range 0.95-1.65 microns (dispersion: 7 $`\mathrm{\AA }`$ per pixel), was obtained under non-photometric conditions using a 1 arcsec slit, and nodding along the slit between two positions, for a total effective on-target integration time of 84 minutes. Spectrophotometric calibration and removal of telluric features was achieved using the observation of a B9 type star. The spectrum was scaled to match the IR photometry in the J filter.
The spectrum shows clear H<sub>2</sub>O absorptions, leaving peaks in the spectrum at 1.05 and 1.27 $`\mu \mathrm{m}`$ (the latter peak at a S/N of 1-1.5 per pixel), and a marginally significant detection of a third peak at 1.57 $`\mu \mathrm{m}`$.
We subsequently obtained spectroscopy of NTTDF J1205-0744 with ISAAC at the VLT in the H and K bands. All the ISAAC observations were made with a 1 arcsec slit and nodding along the slit.
The K observations were carried out during the nights of 6 and 9 February 1999, for a total amount of time of 1 hour. We used the Low Resolution grating in second order providing a dispersion per pixel of 7 $`\mathrm{\AA }`$. Spectrophotometric calibration was achieved from the observation of a B6 type star observed on a different night. The signal to noise per pixel is below 1 on the peak at 2.1 $`\mu \mathrm{m}`$.
The observations in H were carried out during the night of 23 March 1999, again for a total integration time of 1 hour. We used the same Low Resolution grating in third order, providing a dispersion per pixel of 4.7 $`\mathrm{\AA }`$. Spectrophotometric calibration was achieved from the observation of a B8 type star. The spectrum was arbitrarily scaled so as to correspond to an H magnitude of 20.3. This scaling proved to properly match the SOFI spectrum. The signal to noise ratio per pixel is $`\sim 2`$ on the peak at 1.57 $`\mu \mathrm{m}`$.
The combined, flux-calibrated spectrum is presented in figure 2, overplotted with the spectrum of Gliese 229B for reference (Geballe 1996).
## 3 Discussion
The magnitudes, or magnitude lower limits of NTTDF J1205-0744 are given in table 1.
Both the i-J and the J-Ks color indices match within less than 0.2 magnitude the color indices of both Gliese 229B and of the SDSS and 2MASS brown dwarfs.
Our infrared spectrum shown in figure 2 has relatively low S/N and some flux calibration uncertainties, due to the fact that the observations were made at different times and with different instruments. A detailed discussion of the smaller features is therefore not warranted. For example, the feature in the Ks peak could be real, but it corresponds to a region of crowded OH sky lines and may just be noise. The most important result here is its striking overall similarity with the spectra of Gliese 229B and of the recently discovered methane brown dwarfs, in particular the clear presence of the strongest H<sub>2</sub>O and CH<sub>4</sub> absorption features, which clearly identify it as a methane brown dwarf, and the relative flux distribution, which implies a similar temperature.
Assuming that not only the colours but also the absolute magnitude is similar to that of Gliese 229B, which is at 5.8 pc, we obtain a distance of $`\sim 90`$ pc to NTTDF J1205-0744 ($`\mathrm{\Delta }\mathrm{J}`$ = 6 magnitudes). The assumption of a similar absolute magnitude may be justified on the basis of brown dwarf model predictions (Burrows 1997). Although both the colour and the magnitude change over a very large range at any particular brown dwarf age, theoretical isochrones practically overlap in color-magnitude diagrams for the range of colors of interest here. Therefore, even if the mass and the age of NTTDF J1205-0744 may be very different from those of the other methane brown dwarfs, the similar (J-Ks) color is indicative of a similar absolute magnitude. Thus, although both mass and age are very poorly constrained by our observations (the spectral features placing, however, the mass safely in the brown dwarf domain), the distance of NTTDF J1205-0744 is considered to be relatively secure.
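The quoted distance is simply the distance-modulus scaling applied to the 6 mag difference in J; a one-line check (our illustration):

```python
d_gl229b = 5.8     # pc, distance of Gliese 229B
delta_J  = 6.0     # mag, J-band difference between NTTDF J1205-0744 and Gl 229B

# Same absolute magnitude assumed, so the distance follows from the 5 log(d) rule
d = d_gl229b * 10 ** (delta_J / 5.0)
print(f"d ~ {d:.0f} pc")    # ~92 pc, i.e. the ~90 pc quoted above
```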
We have SOFI and ISAAC images taken $`\sim `$ 14 months apart. We looked for possible proper motion, but nothing was detected at the level of 0.3 arcsec (2 $`\sigma `$).
NTTDF J1205-0744 stands out as the only object of its type within the 2.3 $`\times `$ 2.3 arcminute NTT deep field. Although of dubious reliability, being based on a single object, the implied volume density is $`\sim 1`$ per cubic parsec (assuming a recognition limit at J=22, see below, corresponding to a distance of $`\sim 200`$ pc). This is considerably higher than the 0.01-0.03 per cubic parsec tentatively quoted by Strauss et al. (1999) and than the 0.01 per cubic parsec derived from the discoveries of the 2MASS methane brown dwarfs (Burgasser 1999). This implies that either our technique is considerably more sensitive or, more likely, that we were extremely lucky.
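The implied density follows from the solid angle of the 2.3'×2.3' field and the ~200 pc recognition distance; the sketch below (our illustration) gives the pencil-beam volume and the corresponding density for a single detection:

```python
import math

side_arcmin = 2.3
d_max_pc = 200.0                        # recognition limit at J = 22

side_rad = math.radians(side_arcmin / 60.0)
omega = side_rad ** 2                   # solid angle of the field, sr
volume = omega * d_max_pc ** 3 / 3.0    # pencil-beam (cone) volume, pc^3

print(f"surveyed volume ~ {volume:.1f} pc^3 -> "
      f"~{1.0 / volume:.1f} object per pc^3 for one detection")
# ~1.2 pc^3, i.e. of order 1 object per cubic parsec, as quoted above
```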
The probability of finding a very cold object in a random field with a given limiting magnitude can also be estimated using published brown dwarf models, the local density of low mass stars, and an extrapolation of the initial mass function towards lower masses. We have carried out this exercise using the Burrows et al. (burrows (1997)) models and the local volume density at 0.1 solar masses from Scalo (scalo (1986)). We have assumed a constant local formation rate of low mass stars over the last 10 Gyr. The initial mass function (IMF) below 0.1 solar masses has been represented by a power-law of the form $`\mathrm{\Phi }(M)dMM^\alpha dM`$, and we have considered values for $`\alpha `$ ranging from -1.5 to +1. We have then calculated the number of objects with a temperature lower than 1000 K that may be expected to appear in the field with an apparent J magnitude brighter than 22, under the assumption of the different values of $`\alpha `$. Although objects much fainter than J=22 are still visible in the J image, the limit chosen is given by the need to be able to recognize the characteristic colors of possible brown dwarfs, namely the extremely red (i-J) and the blue (J-Ks). The limiting J magnitude that we use is thus actually defined by the limiting Ks magnitude, combined with the (J-Ks) colors expected for the objects of interest. The results are given in Table 2.
These values are much lower than the 1% probability one would expect for a volume density of 0.01 per cubic parsec, suggesting that a negative slope much steeper than -1.5 would be required for the IMF to fit with the observed density.
One of the most remarkable features of these objects is the huge I-J color index, which makes them difficult to find using visible data alone. Despite the spectacular success of the Sloan Digital Sky Survey which has led to the discovery of SDSS 1624+00, the main avenue for unveiling in a systematic way this new population of methane brown dwarfs is to resort to combined visible (I) and IR (J and H or J and Ks) deep observations, as demonstrated by the 2MASS discoveries and by the present work. It is interesting to note that the DENIS survey (Delfosse delfosse (1999)) has so far not detected such methane brown dwarfs, which might be explained by its relatively shallow detection limit in Ks (13.5). With a volume density of 0.01 per cubic parsec, the chance of finding a methane brown dwarf brighter than this limit is $`\sim `$ 1 over the whole sky.
The high I-J (or any visible - J) color index, combined with an almost flat J-H or J-Ks color index, is a very clear indicator for these methane brown dwarfs.
|
no-problem/9907/cond-mat9907498.html
|
ar5iv
|
text
|
# A Composite Fermion Description of Rotating Bose-Einstein Condensates
## Abstract
We study the properties of rotating Bose-Einstein condensates in parabolic traps, with coherence length large compared to the system size. In this limit, it has been shown that unusual groundstates form which cannot be understood within a conventional many-vortex picture. Using comparisons with exact numerical results, we show that these groundstates can be well-described by a model of non-interacting “composite fermions”. Our work emphasises the similarities between the novel states that appear in rotating Bose-Einstein condensates and incompressible fractional quantum Hall states.
PACS Numbers: 03.75.Fi, 73.40.Hm, 67.57.Fg
It has proved fruitful in fractional quantum Hall systems to account for the many-body correlations induced by electron-electron interactions by introducing non-interacting “composite fermions”. Recently a similar approach has been employed to show that the correlated states arising from interparticle interactions in dilute rotating confined bose atomic gases can be described in terms of the condensation of a type of composite boson. Here, we demonstrate that a transformation of the system of rotating bosons to that of non-interacting composite fermions is also successful in accounting for these correlated states. Our results establish a close connection between the groundstates of rotating confined Bose-systems and the correlated states of fractional quantum Hall systems.
While the trapped atom gases have been shown to Bose-condense, the response of these condensates to rotations has not, as yet, been measured experimentally. Theoretically, it is clear that there exist various different regimes. Within the Gross-Pitaevskii framework, which requires macroscopic occupation of the single particle states, the system forms vortex arrays at both long and short coherence lengths (compared to the size of the trap), which are reminiscent of Helium-4. Here, following Ref. , we choose to study the system in the limit of large coherence length without demanding macroscopic occupation numbers. This allows us to study both the regime considered in Ref. , as well as regimes of higher vortex density where the quantum mechanical nature of the vortices will be most prevalent. Indeed, in Ref. it was shown that, in general, the groundstates of the rotating boson system cannot be described within a conventional many-vortex picture. Rather, the system was found to be better described in terms of the condensation of “composite bosons” – bound states of vortices and atoms – across the whole range of vortex density. In the present paper, we show that a description in terms of non-interacting composite particles with fermionic statistics also provides a highly accurate description of the rotating bose system: specifically, it enables us to predict many of the features in the energy spectrum and to form good overlaps with the exact groundstate wavefunctions. In addition, this description indicates a close relationship between the properties of rotating Bose systems and those of fractional quantum Hall systems.
In a rotating reference frame, the standard Hamiltonian for $`N`$ weakly interacting atoms in a trap is
$$ℋ=\frac{1}{2}\sum _{i=1}^{N}\left[-\nabla _i^2+r_i^2+\eta \sum _{j=1,j\ne i}^{N}\delta (𝒓_i-𝒓_j)-2\,𝝎\cdot 𝑳_i\right]$$
(1)
where we have used the trap energy, $`\mathrm{}\sqrt{K/m}=\mathrm{}\omega _0`$ as the unit of energy and the extent, $`(\mathrm{}^2/MK)^{1/4}`$, of the harmonic oscillator ground state as the unit of length. ($`M`$ is the mass of an atom and $`K`$ the spring constant of the harmonic trap.) The coupling constant is defined as $`\eta =4\pi \overline{n}a(\mathrm{}^2/MK)^{1/2}`$ where $`\overline{n}`$ is the average atomic density and $`a`$ the scattering length. The angular velocity of the trap, $`\omega `$, is measured in units of the trap frequency.
Throughout this work, we make use of the limit of weak interactions ($`\eta \ll 1`$). It was shown in Ref. that in this limit the system may be described by a two-dimensional model with a Hilbert space spanned by the states of the lowest Landau level: $`\psi _m(𝒓)\propto z^m\mathrm{exp}(-zz^{*}/2)`$, where $`m`$ is the angular momentum quantum number ($`m=0,1,2,\dots `$) and $`z\equiv x+iy`$. The kinetic energy is quenched and the groundstate is determined by a balance between the interaction and potential energies. Noting that the $`𝒛`$-component of the angular momentum, $`L`$, commutes with the Hamiltonian, the total energy, scaled by $`\eta `$, may be written
$$E/\eta =V_N(L)+(1-\omega )/\eta \,L,$$
(2)
where $`V_N(L)`$ is the interaction energy at angular momentum $`L`$. While this separation holds for all energy eigenstates, we choose $`V_N(L)`$ to denote the smallest eigenvalue of the interactions at angular momentum $`L`$. Since the interactions are repulsive, $`V_N(L)`$ decreases as $`L`$ increases and the particles spread out in space; a tendency that is opposed by the term $`(1-\omega )/\eta \,L`$ describing the parabolic confinement. Thus, as the rotation frequency $`\omega `$ is varied, the groundstate angular momentum will increase, from $`L=0`$ at $`\omega =0`$, to diverge as $`\omega \to 1`$ (when the trap confinement is lost); our goal is to describe the sequence of states (of different $`L`$) through which it passes.
We have obtained the groundstate interaction energies, $`V_N(L)`$, for $`N=3`$ to $`10`$ particles, from exact numerical diagonalisations within the space of bosonic wavefunctions in the lowest Landau level. While the interaction energy $`V_N(L)`$ does decrease with increasing angular momentum, it is not a smooth function of $`L`$. Thus, the groundstate angular momentum, obtained by minimising (2), is not a smoothly increasing function of $`\omega `$. As shown in Fig. 1, certain values of angular momentum, corresponding to downward cusps in $`V_N(L)`$, are particularly stable, and are selected as the groundstate over a range of $`\omega `$.
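To make the selection mechanism explicit, the toy sketch below (Python) minimises Eq. (2) over $`L`$ for a few values of $`(1-\omega )/\eta `$. The table of $`V_N(L)`$ used here is an illustrative placeholder, not the exact diagonalisation data, but any decreasing sequence with downward cusps shows the same behaviour: only the cusp ("magic") values of $`L`$ are ever selected, and the selected $`L`$ grows as $`\omega \to 1`$.

```python
def groundstate_L(VN, coeff):
    """Minimise E/eta = V_N(L) + coeff*L, with coeff = (1 - omega)/eta."""
    return min(VN, key=lambda L: VN[L] + coeff * L)

# Illustrative placeholder values with downward cusps (NOT the exact V_N(L)).
VN = {0: 6.0, 2: 5.2, 3: 5.0, 4: 3.5, 5: 3.4, 6: 3.0, 7: 2.8,
      8: 1.6, 9: 1.55, 10: 1.3, 11: 1.0, 12: 0.0}

for coeff in (1.0, 0.5, 0.43, 0.2):
    print(coeff, groundstate_L(VN, coeff))   # selects L = 0, 4, 8, 12 in turn
```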
The existence of certain angular momentum states of enhanced stability is reminiscent of the “magic-values” of angular momentum for electrons in quantum dots; by analogy we will also refer to the stable angular momenta of the bosons as the magic values. Indeed, the system of bosons we study is precisely the bosonic variant of the (fermionic) problem of a parabolic quantum dot in strong magnetic field, with $`\omega _0`$ playing the role of the magnetic field and $`(1\omega )/\eta `$ the role of the parabolic confinement, and with $`\delta `$-function interactions replacing the more usual Coulomb repulsion. As we shall explain, the magic values of angular momenta for the bosonic and fermionic systems are, in fact, closely related. This is a corollary of our principal result, to which we now turn, that much of the structure appearing in Fig. 1 can be interpreted simply in terms of the formation of bound states of bosons and vortices behaving as non-interacting composite particles with fermionic statistics – “composite fermions”(CF).
It is known that, for homogeneous systems, interacting bosons and interacting fermions within the lowest Landau level have many features in common. For example, there exist certain filling fractions of both the boson and fermion systems at which interactions lead to incompressible groundstates, with wavefunctions that may be related by a simple statistical transformation if $`1/\nu _F=1/\nu _B+1`$ ($`\nu _B`$ and $`\nu _F`$ are the filling fractions of the bosons and fermions). These similarities arise from the remarkable effectiveness of mean-field approximations to Chern-Simons theories of such systems. Here, we are interested in an inhomogeneous system, in which the bosons are subject to a parabolic confinement. Jain and co-workers have shown that the fermionic equivalent of this problem – interacting electrons in a quantum dot – can be well-described in terms of properties of non-interacting composite-fermions. Motivated by the successes of their theory, we apply a similar transformation to describe the present bosonic problem.
Specifically, we make the following ansatz for the many-boson wavefunction
$$\mathrm{\Psi }_L^{ansatz}(\{z_i\})=𝒫\left\{\prod _{i<j}(z_i-z_j)\,\mathrm{\Psi }_{L_{CF}}^{CF}(\{z_i\})\right\}$$
(3)
where $`\mathrm{\Psi }_{L_{CF}}^{CF}(\{z_i\})`$ is a wavefunction for some fermionic particles – the composite fermions. Multiplication of the antisymmetric CF wavefunction by the Jastrow prefactor generates a completely symmetric bosonic wavefunction. $`𝒫`$ projects the wavefunction onto the lowest Landau level, which amounts to the replacement $`z_i^n\overline{z_i}^m\to \frac{n!}{(n-m)!}z_i^{n-m}`$ for $`n\ge m`$ and $`z_i^n\overline{z_i}^m\to 0`$ for $`n<m`$, applied to all terms in the polynomial part of the wavefunction. For a full discussion see Ref. . (For ease of presentation, we omit exponential factors and normalisation constants from all wavefunctions.)
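As a concrete illustration of the bookkeeping only, the sketch below (Python) applies the replacement rule quoted above, monomial by monomial, to a polynomial in a single coordinate stored as a dictionary {(n, m): coefficient}.

```python
from math import factorial

def project_LLL(poly):
    """Apply z**n * zbar**m -> n!/(n-m)! * z**(n-m) for n >= m, and 0 otherwise."""
    out = {}
    for (n, m), c in poly.items():
        if n >= m:
            out[n - m] = out.get(n - m, 0) + c * (factorial(n) // factorial(n - m))
    return out   # dict {power of z: coefficient}

# example: z**3*zbar - 2*z*zbar**2  ->  3*z**2 (the second monomial is annihilated)
print(project_LLL({(3, 1): 1, (1, 2): -2}))   # {2: 3}
```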
The transformation (3) causes the boson wavefunction to describe a half vortex around the position of each other particle in addition to the motions described by $`\mathrm{\Psi }^{CF}`$. One can therefore interpret a composite fermion as a bound state of a boson with a half vortex (cf. Ref.). As a result, the angular momentum of the bosons, $`L`$, is increased with respect to that of the composite-fermions, $`L_{CF}`$, according to
$$L=L_{CF}+N(N-1)/2.$$
(4)
Note that the transformation (3) relates $`1/\nu _{CF}=1/\nu _B-1`$ and is not the same as that used in Ref. . There is an unlimited number of fermion $`\leftrightarrow `$ boson mappings that one can effect through transformations of the form (3). In Fig. 2 we present a schematic of how, by subsequent attachments of half-vortices – each causing an addition of $`N(N-1)/2`$ to the angular momentum – one can transform from the composite bosons (CB) introduced in Ref. to the composite fermions used here (CF), to the bare boson system in which we are interested (B), and finally to a fermion system (F).
We introduce the fermion system (F) to point out that the composite fermions (CF) which we use to describe the boson system (B) are the same as those used by Jain and Kawamura to describe interacting electrons in quantum dots (F). The predictions of the energy spectrum flowing from a model of non-interacting composite fermions will therefore be identical in the boson and fermion systems up to the shift $`L_F=L+N(N-1)/2`$.
In the spirit of Ref. , we shall consider the CFs, described by $`\mathrm{\Psi }_{L_{CF}}^{CF}(\{z_i\})`$, to be non-interacting, and look at the variation of the minimum kinetic energy of the CFs as a function of the total angular momentum. We further assume that a composite fermion in the Landau level state $`(n,m)`$ (with Landau level index $`n=0,1,2,\dots `$, and angular momentum $`m=-n,-n+1,\dots `$) has an energy $`E_n=(n+1/2)E_{CF}`$, where $`E_{CF}`$ is some effective cyclotron energy. These assumptions may be viewed as a mean-field treatment of the appropriate Chern-Simons theory for this system; ultimately, they are justified by the predictive successes of the resulting theory.
Figure 3 shows the resulting groundstate energy of non-interacting CFs as a function of $`L=L_{CF}+N(N-1)/2`$ for $`N=7,8,9`$, together with the exact interaction energies $`V_N(L)`$.
It is apparent that the composite fermion energies fail to capture the rapid rise in the exact energies at small angular momenta; this can be interpreted as a failing of the assumption of a constant effective cyclotron energy $`E_{CF}`$. The principal success of this approach is the identification of the cusps in the exact energy $`V_N(L)`$: at almost all of the angular momenta for which the composite fermion kinetic energy shows a downward cusp (we label these sets of angular momenta by $`L_N^{*}`$), there is a corresponding cusp in the exact energy $`V_N(L)`$. Since a “magic” angular momentum of the boson system must coincide with a downward cusp in $`V_N(L)`$, the set $`L_N^{*}`$ represents a set of candidate values for the magic angular momenta. For example: for $`N=7`$, the CF model predicts all nine actual magic numbers and in addition identifies cusps which do not become groundstates for a further three $`L\in L_7^{*}`$. These missing values are not necessarily a failing of the composite fermion model. The main failing, to which we will return later, is that there is a small number of magic angular momenta that are not identified.
Not only does the composite fermion model successfully identify the majority of the magic angular momenta, as we now show it also provides a very accurate description of the associated wavefunctions. The composite fermion wavefunctions corresponding to the angular momenta $`L_N^{*}`$ are the “compact states” discussed in Ref. . For these states, the composite fermions occupy the lowest available angular momentum states within each Landau level. As an illustration, for $`N=4`$, there is a cusp in the composite fermion energy at $`L=8`$ ($`L_{CF}=2`$), at which the composite fermions occupy the single particle states $`(n,m)=\{(0,0),(0,1),(0,2),(1,-1)\}`$. The wavefunction $`\mathrm{\Psi }_{L_{CF}}^{CF}`$ is formed as a Slater determinant of these states, and the bosonic wavefunction $`\mathrm{\Psi }_L^{ansatz}`$ is constructed via Eq.(3).
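The compact states are easy to enumerate for small $`N`$. The sketch below (Python) lists, for each partition of the $`N`$ composite fermions over Landau levels (taken non-increasing, as an assumed form of the compact-state definition), the resulting boson angular momentum $`L=L_{CF}+N(N-1)/2`$ and the CF kinetic energy in units of $`E_{CF}`$; for $`N=4`$ it reproduces, among others, the $`L=8`$ state with occupation (3, 1) discussed above.

```python
def compact_states(N, max_level=3):
    """Compact CF states: in level n the CFs fill m = -n, -n+1, ...;
    occupations (N_0, N_1, ...) are taken non-increasing."""
    found = []
    def fill(level, left, occ):
        if left == 0:
            L_CF = sum(Nn * (Nn - 1) // 2 - n * Nn for n, Nn in enumerate(occ))
            KE = sum((n + 0.5) * Nn for n, Nn in enumerate(occ))
            found.append((L_CF + N * (N - 1) // 2, KE, tuple(occ)))
            return
        if level > max_level:
            return
        cap = left if not occ else min(left, occ[-1])
        for Nn in range(cap, 0, -1):
            fill(level + 1, left - Nn, occ + [Nn])
    fill(0, N, [])
    return sorted(found)

for L, KE, occ in compact_states(4):
    print(L, KE, occ)   # includes (8, 3.0, (3, 1)), the example in the text
```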
In the cases $`L=0`$ and $`L=N(N-1)`$, this procedure yields the exact groundstate wavefunction for all $`N`$. At $`L=0`$, there is only one many-body state within the lowest Landau level (all bosons occupy the $`m=0`$ state); the ansatz (3) has non-zero overlap with this state, so must (trivially) be the groundstate. For $`L=N(N-1)`$, the lowest energy composite fermion state is formed from the states $`\{(0,0),(0,1),\dots ,(0,N-1)\}`$. The Slater determinant of these states may be written $`\mathrm{\Psi }^{CF}=\prod _{i<j}(z_i-z_j)`$, which, inserted in (3), generates the bosonic Laughlin state $`\mathrm{\Psi }^{ansatz}=\prod _{i<j}(z_i-z_j)^2`$. (This state is in the lowest Landau level, and projection is unnecessary.) Since this wavefunction vanishes for $`z_i=z_j`$ ($`i\ne j`$), it is the exact zero energy eigenstate of the $`\delta `$-function two-body interaction potential.
At intermediate values of the angular momentum, our ansatz (3) is not, in general, exact. We have performed numerical calculations to determine the overlaps of the ansatz wavefunctions with the exact groundstate wavefunctions, $`|\langle \mathrm{\Psi }_L^{ansatz}|\mathrm{\Psi }_L^{exact}\rangle |`$. We list these overlaps in Table I at each of the angular momenta, $`L\in L_N^{*}`$, selected by the non-interacting composite fermion model. In general, the ansatz (3) has an overlap of close to unity with the exact groundstate: the composite fermion model provides an excellent description of these states. Small overlaps can occur when the composite fermion model does not produce a unique ansatz – i.e. when two, or more, sets of single particle states for the composite fermions have the same kinetic energy at a given $`L`$ (e.g. $`N=6,L=12`$). In these cases, the overlaps could be improved by diagonalising the Hamiltonian within the space of states spanned by the two ansatz states.
Owing to the impressive agreement between the ansatz wavefunctions (3) and the exact groundstates at $`L_N^{*}`$, an accurate description of the groundstate angular momentum as a function of rotation frequency can be obtained using only this set of ansatz wavefunctions. Minimizing the expectation value of the energy (2) within this set of ansatz wavefunctions, one obtains a groundstate angular momentum as a function of $`(1-\omega )/\eta `$ that is in excellent agreement with the exact results shown in Fig. 1. This approach does, however, omit a small number of magic angular momenta. In some cases, these are magic values identified by the composite fermion model, but for which the expectation value of the energy happens not to be sufficiently low to become stable ($`N=6`$, $`L=12`$; $`N=9`$, $`L=33,37`$; $`N=10`$, $`L=38`$). The most important omissions are the magic angular momenta at $`N=8,L=12`$, $`N=10,L=16,21`$ for which there are no features in the composite fermion kinetic energy that would suggest a stable angular momentum state. We believe that this emergent structure at larger numbers of particles represents many-body correlations that are not captured by the non-interacting composite fermion model used here. (Some of these states are correctly identified by the composite boson approach.) They could be related to the incompressible states, such as $`\nu =4/5`$, of quantum Hall systems which cannot be explained in terms of non-interacting composite fermions alone, but require an additional ‘particle-hole’ transformation. This view is strengthened by the observation that related magic angular momenta also appear in the exact groundstate energy of electrons in quantum dots interacting by Coulomb forces, up to the shift $`L_F=L+N(N-1)/2`$ (e.g. Ref. identifies a stable state of $`N=10`$ electrons at $`L_F=61`$ – equivalent to $`N=10,L=16`$ of the present bosonic model). The study of this additional structure is beyond the scope of the present work.
In summary, we have studied the properties of rotating Bose systems in parabolic traps in the limit of large coherence length. Through comparisons with exact results for small systems, we showed that many of the features of the exact spectrum of the bosons can be understood in terms of non-interacting composite fermions. The non-interacting composite fermion model leads to (1) the identification of a set of candidate values for the stable angular momenta of the bosons, and (2) associated many-body wavefunctions that have large overlap with the exact groundstate wavefunctions. The successes of the mapping to composite fermions indicate that the groundstates of rotating Bose-Einstein condensates, in the limit of large coherence length, are closely related to the correlated states appearing in fractional quantum Hall systems.
We would like to thank J.M.F. Gunn and R.A. Smith for many helpful discussions. This work was supported by the Royal Society and EPSRC GR/L28784.
|
no-problem/9907/gr-qc9907092.html
|
ar5iv
|
text
|
# Newman-Janis method and rotating dilaton-axion black hole
## Abstract
It’s shown that the rotating dilaton-axion black hole solution can be obtained from GGHS static charged dilaton black hole solution via Newman-Janis method.
PACS number(s): 02.30.Dk,04.20.Cv
The low energy limit of the heterotic string theory gives an interesting generalization of Einstein-Maxwell theory - the Einstein-Maxwell-dilaton - axion gravity.The field equations of the Einstein-Maxwell-dilaton-axion gravity in four dimensions can be obtained from the following action ,
$`𝒜={\displaystyle \frac{1}{16\pi }}{\displaystyle \int d^4x\sqrt{-g}\left(R-2\partial _\mu \phi \partial ^\mu \phi -\frac{1}{2}e^{4\phi }\partial _\mu \mathrm{\Theta }\partial ^\mu \mathrm{\Theta }+e^{2\phi }F_{\mu \nu }F^{\mu \nu }+\mathrm{\Theta }F_{\mu \nu }\stackrel{~}{F}^{\mu \nu }\right)}`$ (1)
Here $`R`$ is the Ricci scalar with respect to the space-time metric $`g_{\mu \nu }`$ (with a signature $`(+,-,-,-)`$), $`\phi `$ is the dilaton field, $`F_{\mu \nu }=(dA)_{\mu \nu }`$ and $`\stackrel{~}{F}_{\mu \nu }`$ are correspondingly the Maxwell tensor and its dual, the pseudo scalar $`\mathrm{\Theta }`$ is related to the Kalb-Ramond field $`H^{\mu \nu \sigma }`$ through the relation
$$H^{\mu \nu \sigma }=\frac{1}{2}e^{4\phi }\epsilon ^{\mu \nu \sigma \rho }\partial _\rho \mathrm{\Theta }.$$
In the last decade string black holes have attracted much attention. The static spherically symmetric charged dilaton black hole was obtained by Gibbons and independently by Garfinkle, Horowitz and Strominger . Using the string target space duality rotation, Sen found the rotating dilaton-axion black hole solution by generating it from the Kerr solution .
It is well known that the Kerr and Kerr-Newman solutions in Einstein theory can be generated, respectively, from the Schwarzschild and Reissner-Nordström solutions via the Newman-Janis method ,. It is natural to ask whether Sen’s rotating dilaton-axion solution can be obtained via the Newman-Janis method from the GGHS dilaton black hole solution.
The purpose of the present note is to show that the rotating dilaton-axion black hole solution can be ”derived” from static spherically symmetric dilaton black hole solution via Newman-Janis procedure.
Here we will not discuss the Newman-Janis algorithm in detail; we refer the reader to the recent papers ,. It should be noted, however, that in the Newman-Janis procedure there is a certain arbitrariness and an element of guesswork.
The GGHS dilaton black hole solution may be written in different coordinates, and there are no purely physical reasons why some of them are more appropriate for our purpose. It seems natural to expect that the desirable coordinates in which the GGHS solution should be written are those obtained by generating the GGHS solution directly from the Schwarzschild solution. The generation of the GGHS solution from the Schwarzschild one has already been done in . Here we give the final result
$`ds^2=\left({\displaystyle \frac{1-\frac{r_1}{r}}{1+\frac{r_2}{r}}}\right)dt^2-\left({\displaystyle \frac{1-\frac{r_1}{r}}{1+\frac{r_2}{r}}}\right)^{-1}dr^2-r^2\left(1+{\displaystyle \frac{r_2}{r}}\right)\left(d\theta ^2+\mathrm{sin}^2(\theta )d\varphi ^2\right)`$ (2)
$`e^{2\phi }={\displaystyle \frac{1}{1+\frac{r_2}{r}}}`$
$`\mathrm{\Phi }={\displaystyle \frac{\frac{Q}{r}}{1+\frac{r_2}{r}}}`$
where $`\phi `$ is the dilaton and $`\mathrm{\Phi }`$ is the electric potential. The parameters $`r_1`$ and $`r_2`$ are given by
$$r_1+r_2=2ℳ$$
and
$$r_2=\frac{Q^2}{ℳ}$$
where $`ℳ`$ and $`Q`$ are the mass and the charge of the dilaton black hole.
Following Newman and Janis (see also and ) the first step is to write the metric (2) in advanced Eddington-Finkelstein coordinates.Performing the coordinate transformation
$`dt=du+\left({\displaystyle \frac{1-\frac{r_1}{r}}{1+\frac{r_2}{r}}}\right)^{-1}dr`$ (3)
we obtain
$`ds^2=\left({\displaystyle \frac{1-\frac{r_1}{r}}{1+\frac{r_2}{r}}}\right)du^2+2dudr-r^2\left(1+{\displaystyle \frac{r_2}{r}}\right)d\mathrm{\Omega }^2`$ (4)
This metric may be presented in terms of its null tetrad vectors
$`g^{\mu \nu }=l^\mu n^\nu +l^\nu n^\mu -m^\mu \overline{m}^\nu -m^\nu \overline{m}^\mu `$ (5)
where
$`l^\mu =\delta _1^\mu `$ (6)
$`n^\mu =\delta _0^\mu -{\displaystyle \frac{1}{2}}\left({\displaystyle \frac{1-\frac{r_1}{r}}{1+\frac{r_2}{r}}}\right)\delta _1^\mu `$
$`m^\mu ={\displaystyle \frac{1}{\sqrt{2}r\sqrt{1+\frac{r_2}{r}}}}\left(\delta _2^\mu +{\displaystyle \frac{i}{\mathrm{sin}(\theta )}}\delta _3^\mu \right)`$
Let us now allow the radial coordinate $`r`$ to take complex values, keeping the null vectors $`l^\mu `$ and $`n^\mu `$ real and $`\overline{m}^\mu `$ complex conjugate to $`m^\mu .`$ Then the tetrad takes the form
$`l^\mu =\delta _1^\mu `$ (7)
$`n^\mu =\delta _0^\mu -{\displaystyle \frac{1}{2}}\left({\displaystyle \frac{1-\frac{r_1}{2}\left(\frac{1}{r}+\frac{1}{\overline{r}}\right)}{1+\frac{r_2}{2}\left(\frac{1}{r}+\frac{1}{\overline{r}}\right)}}\right)\delta _1^\mu `$
$`m^\mu ={\displaystyle \frac{1}{\sqrt{2}\overline{r}\sqrt{1+\frac{r_2}{2}\left(\frac{1}{r}+\frac{1}{\overline{r}}\right)}}}\left(\delta _2^\mu +{\displaystyle \frac{i}{\mathrm{sin}(\theta )}}\delta _3^\mu \right)`$
The next step is to perform formally the complex coordinate transformation
$`r^{}=r+ia\mathrm{cos}(\theta ),\quad \theta ^{}=\theta `$ (8)
$`u^{}=u-ia\mathrm{cos}(\theta ),\quad \varphi ^{}=\varphi `$
By keeping $`r^{}`$ and $`u^{}`$ real we obtain the following tetrad
$`l_{}^{}{}_{}{}^{\mu }=\delta _1^\mu `$ (9)
$`n_{}^{}{}_{}{}^{\mu }=\delta _0^\mu -{\displaystyle \frac{1}{2}}\left({\displaystyle \frac{1-\frac{r_1r^{}}{\mathrm{\Sigma }}}{1+\frac{r_2r^{}}{\mathrm{\Sigma }}}}\right)\delta _1^\mu `$
$`m_{}^{}{}_{}{}^{\mu }={\displaystyle \frac{1}{\sqrt{2}(r^{}+ia\mathrm{cos}(\theta ))}}{\displaystyle \frac{1}{\sqrt{1+\frac{r_2r^{}}{\mathrm{\Sigma }}}}}\left(ia\mathrm{sin}(\theta )(\delta _0^\mu -\delta _1^\mu )+\delta _2^\mu +{\displaystyle \frac{i}{\mathrm{sin}(\theta )}}\delta _3^\mu \right)`$
where $`\mathrm{\Sigma }=r_{}^{}{}_{}{}^{2}+a^2\mathrm{cos}^2(\theta ).`$
The metric formed by this tetrad is (dropping the primes)
$`g^{\mu \nu }=\left(\begin{array}{cccc}-\frac{a^2\mathrm{sin}^2(\theta )}{\stackrel{~}{\mathrm{\Sigma }}}& 1+\frac{a^2\mathrm{sin}^2(\theta )}{\stackrel{~}{\mathrm{\Sigma }}}& 0& -\frac{a}{\stackrel{~}{\mathrm{\Sigma }}}\\ & -e^{2U(r,\theta )}-\frac{a^2\mathrm{sin}^2(\theta )}{\stackrel{~}{\mathrm{\Sigma }}}& 0& \frac{a}{\stackrel{~}{\mathrm{\Sigma }}}\\ & & -\frac{1}{\stackrel{~}{\mathrm{\Sigma }}}& 0\\ & & & -\frac{1}{\stackrel{~}{\mathrm{\Sigma }}\mathrm{sin}^2(\theta )}\end{array}\right)`$ (10)
where we have put
$`e^{2U(r,\theta )}=\left({\displaystyle \frac{1-\frac{r_1r}{\mathrm{\Sigma }}}{1+\frac{r_2r}{\mathrm{\Sigma }}}}\right)`$ (11)
and
$`\stackrel{~}{\mathrm{\Sigma }}=\left(1+{\displaystyle \frac{r_2r}{\mathrm{\Sigma }}}\right)\mathrm{\Sigma }=r(r+r_2)+a^2\mathrm{cos}^2(\theta )`$ (12)
The corresponding covariant metric is
$`g_{\mu \nu }=\left(\begin{array}{cccc}e^{2U(r,\theta )}& 1& 0& a\mathrm{sin}^2(\theta )\left(1-e^{2U(r,\theta )}\right)\\ & 0& 0& -a\mathrm{sin}^2(\theta )\\ & & -\stackrel{~}{\mathrm{\Sigma }}& 0\\ & & & -\mathrm{sin}^2(\theta )\left(\stackrel{~}{\mathrm{\Sigma }}+a^2\mathrm{sin}^2(\theta )\left(2-e^{2U(r,\theta )}\right)\right)\end{array}\right)`$ (13)
A further simplification is made by the following coordinate transformation
$`du=dt^{}-{\displaystyle \frac{\mathrm{\Delta }_2}{\mathrm{\Delta }}}dr,\quad d\varphi =d\varphi ^{}-{\displaystyle \frac{a}{\mathrm{\Delta }}}dr`$ (14)
where $`\mathrm{\Delta }=r(r-r_1)+a^2`$ and $`\mathrm{\Delta }_2=r(r+r_2)+a^2.`$ This transformation leaves only one off-diagonal element and the metric takes the form (dropping the primes on $`t`$ and $`\varphi `$)
$`g_{\mu \nu }dx^\mu dx^\nu =e^{2U(r,\theta )}dt^2-{\displaystyle \frac{\stackrel{~}{\mathrm{\Sigma }}}{e^{2U(r,\theta )}\stackrel{~}{\mathrm{\Sigma }}+a^2\mathrm{sin}^2(\theta )}}dr^2-\stackrel{~}{\mathrm{\Sigma }}d\theta ^2+`$ (15)
$`2a\mathrm{sin}^2(\theta )\left(1-e^{2U(r,\theta )}\right)dtd\varphi -\mathrm{sin}^2(\theta )\left(\stackrel{~}{\mathrm{\Sigma }}+a^2\mathrm{sin}^2(\theta )\left(2-e^{2U(r,\theta )}\right)\right)d\varphi ^2`$
Taking into account that $`r_1+r_2=2ℳ`$ we obtain
$`ds^2=g_{\mu \nu }dx^\mu dx^\nu =\left(1-{\displaystyle \frac{2ℳr}{\stackrel{~}{\mathrm{\Sigma }}}}\right)dt^2-\stackrel{~}{\mathrm{\Sigma }}\left({\displaystyle \frac{dr^2}{\mathrm{\Delta }}}+d\theta ^2\right)+`$ (16)
$`{\displaystyle \frac{4ℳra\mathrm{sin}^2(\theta )}{\stackrel{~}{\mathrm{\Sigma }}}}dtd\varphi -\left(r(r+r_2)+a^2+{\displaystyle \frac{2ℳra^2\mathrm{sin}^2(\theta )}{\stackrel{~}{\mathrm{\Sigma }}}}\right)\mathrm{sin}^2(\theta )d\varphi ^2`$
where $`e^{2U(r,\theta )}\stackrel{~}{\mathrm{\Sigma }}+a^2\mathrm{sin}^2(\theta )=r(r-r_1)+a^2=\mathrm{\Delta }.`$
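A short symbolic cross-check of this identity, using only the definitions of $`\mathrm{\Sigma }`$, $`\stackrel{~}{\mathrm{\Sigma }}`$ and $`e^{2U(r,\theta )}`$ given above, is (Python/sympy):

```python
import sympy as sp

r, a, th, r1, r2 = sp.symbols('r a theta r1 r2', positive=True)
Sigma = r**2 + a**2 * sp.cos(th)**2
Sigma_t = (1 + r2 * r / Sigma) * Sigma          # tilde-Sigma
e2U = (1 - r1 * r / Sigma) / (1 + r2 * r / Sigma)
# exp(2U)*tilde-Sigma + a^2 sin^2(theta) should reduce to r(r - r1) + a^2 = Delta
print(sp.simplify(e2U * Sigma_t + a**2 * sp.sin(th)**2 - (r * (r - r1) + a**2)))  # 0
```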
This is the rotating dilaton-axion black hole metric .The other quantities are given by
$`A={\displaystyle \frac{Qr}{\stackrel{~}{\mathrm{\Sigma }}}}\left(dt-a\mathrm{sin}^2(\theta )d\varphi \right)`$ (17)
$`e^{2\phi }={\displaystyle \frac{1}{1+\frac{r_2r}{\mathrm{\Sigma }}}}={\displaystyle \frac{\mathrm{\Sigma }}{\stackrel{~}{\mathrm{\Sigma }}}}={\displaystyle \frac{r^2+a^2\mathrm{cos}^2(\theta )}{r(r+\frac{Q^2}{ℳ})+a^2\mathrm{cos}^2(\theta )}}`$
$`\mathrm{\Theta }={\displaystyle \frac{Q^2}{ℳ}}{\displaystyle \frac{a\mathrm{cos}(\theta )}{\mathrm{\Sigma }}}={\displaystyle \frac{Q^2}{ℳ}}{\displaystyle \frac{a\mathrm{cos}(\theta )}{r^2+a^2\mathrm{cos}^2(\theta )}}`$
It’s useful to present the metric (16) in the form
$`ds^2=e^{2U}(dt+\omega _idx^i)^2-e^{-2U}h_{ij}dx^idx^j`$ (18)
After a few algebra we find
$`ds^2=e^{2U(r,\theta )}\left(dt+{\displaystyle \frac{2ℳar\mathrm{sin}^2(\theta )}{\stackrel{~}{\mathrm{\Sigma }}_1}}d\varphi \right)^2`$ (19)
$`-e^{-2U(r,\theta )}\left(\stackrel{~}{\mathrm{\Sigma }}_1\left({\displaystyle \frac{dr^2}{\mathrm{\Delta }}}+d\theta ^2\right)+\mathrm{\Delta }\mathrm{sin}^2(\theta )d\varphi ^2\right)`$
where
$`\stackrel{~}{\mathrm{\Sigma }}_1=\stackrel{~}{\mathrm{\Sigma }}-2ℳr=r(r-r_1)+a^2\mathrm{cos}^2(\theta )`$ (20)
$`e^{2U(r,\theta )}=1-{\displaystyle \frac{2ℳr}{\stackrel{~}{\mathrm{\Sigma }}}}={\displaystyle \frac{1}{1+\frac{2ℳr}{\stackrel{~}{\mathrm{\Sigma }}_1}}}.`$
It should be expected that using the Newman-Janis method we will be able to generate stationary axisymmetric solutions starting with static spherically symmetric solutions different from the GGHS solution. For example, using as seed solutions the three classes of two-parametric families of solutions presented in , it should be expected that we will obtain the corresponding rotating naked singularities in Einstein-Maxwell-dilaton-axion gravity.
Some questions arise here. As we have seen, the Newman-Janis method generates the rotating solution of Einstein-Maxwell-dilaton-axion gravity starting with the GGHS solution in proper coordinates. The GGHS solution, however, is also a solution of the truncated theory without an axion field (i.e. Einstein-Maxwell-dilaton gravity). Why does the Newman-Janis method not generate the rotating solution of the truncated model instead of that of the full model? In our opinion the cause is probably that the full theory, in the presence of two commuting Killing vectors, possesses a larger nontrivial symmetry group than the truncated model.
Acknowledgments
The author wishes to express his thanks to P.Fiziev for his continuous encouragement and the stimulating conversations.
This work was partially supported by the Sofia University Foundation for Scientific Researches, Contract No. 245/99, and by the Bulgarian National Foundation for Scientific Researches, Contract F610/99.
|
no-problem/9907/hep-th9907192.html
|
ar5iv
|
text
|
# Casimir energy of a compact cylinder under the condition 𝜀𝜇=𝑐⁻²
## I Introduction
The calculation of the Casimir energy for boundary conditions given on the surface of an infinite cylinder has turned out to be the most complicated problem in this field . In Ref. an attempt was undertaken to predict the Casimir energy of a conducting cylindrical shell treating the cylinder as an intermediate configuration between a sphere and two parallel plates. Taking into account that the vacuum energies of a conducting sphere and conducting plates have the opposite signs, the authors hypothesized that the Casimir energy of a cylindrical perfectly conducting shell should be zero. However, a direct calculation showed that this energy is negative as in the case of parallel conducting plates. This calculation was repeated only in recent papers by making use of comprehensive methods, more simple but more formal at the same time.
Thus in spite of its half-century history the Casimir effect still remains a problem where physical intuition does not work, and in order to reveal even the sign of the Casimir energy (i.e. the direction of the Casimir forces) it is necessary to carry out a consistent detailed calculation.
Accounting for the dielectric and magnetic properties of the media in the case of a nonplanar interface proved to be a very complicated problem in the calculation of the Casimir energy . However, if the light velocity is constant when crossing the interface, then the calculation of the Casimir energy of a compact ball or cylinder is carried out in the same way as that for conducting spherical or cylindrical shells, respectively. In such calculations the expansion of the Casimir energy in terms of the parameter $`\xi ^2=(\epsilon _1-\epsilon _2)^2/(\epsilon _1+\epsilon _2)^2=(\mu _1-\mu _2)^2/(\mu _1+\mu _2)^2\le 1`$ is usually constructed, where $`\epsilon _1`$ and $`\mu _1`$ are, respectively, the permittivity and permeability of the material making up the ball or cylinder, and $`\epsilon _2`$, $`\mu _2`$ are those for the surrounding medium. The same velocity of light, $`c`$, in both the media implies that the condition $`\epsilon _1\mu _1=\epsilon _2\mu _2=c^2`$ is satisfied.
The Casimir energy of a compact ball with the same speed of light inside and outside and the Casimir energy of a pure dielectric ball turned out to be of the same sign: they are positive, and consequently the Casimir forces are repulsive.<sup>*</sup><sup>*</sup>*We use the terms “pure dielectric ball” and “pure dielectric cylinder” for the corresponding nonmagnetic configurations with $`\mu _1=\mu _2=1`$ and $`\epsilon _1\ne \epsilon _2`$. Moreover, the extrapolation of the result obtained under the condition $`\epsilon \mu =c^2`$ to a pure dielectric ball gives a fairly good prediction .
For a compact cylinder under the condition $`\epsilon \mu =c^2`$ it has been found that the linear term in the Casimir energy expansion in powers of $`\xi ^2`$ vanishes. Keeping in mind the situation with a compact ball possessing the same speed of light inside and outside and a pure dielectric ball, it is tempting to check whether the Casimir energy of a compact cylinder under the condition $`\epsilon \mu =c^2`$ is close to the Casimir energy of a pure dielectric cylinder. However, in the case of a dielectric cylinder a principal difficulty arises, namely, in the integral representation for the corresponding spectral $`\zeta `$-function (or, in other words, for the sum of eigenfrequencies) it is impossible to carry out the integration over the longitudinal momentum $`k_z`$. On the other hand, in Ref. the Casimir energy of a compact dielectric cylinder was evaluated by a direct summation of the van der Waals interaction between individual fragments (molecules) of the cylinder. By making use of the dimensional regularization, a vanishing value for this energy was obtained. It is worth noting that this procedure, having been applied to a pure dielectric ball , gives the same result as the quantum field theory approach . In view of all this, it is undoubtedly interesting to elucidate whether the vacuum energy of the electromagnetic field for a compact cylinder with the condition $`\epsilon \mu =c^2`$ vanishes exactly. Therefore, the main goal of the present paper is, namely, to extend the analysis made in up to the fourth order in $`\xi `$. To this accuracy the Casimir energy in question turns out to be nonvanishing. Our consideration is concerned with zero temperature theory only, and the main calculation ignores dispersion.
The layout of the paper is as follows. In Sec. II the first nonvanishing term proportional to $`\xi ^4`$ is calculated in the expansion of the Casimir energy of a compact cylinder in powers of $`\xi ^2`$ under the condition $`\epsilon \mu =c^2`$. This term proves to be negative, and the Casimir forces seek to contract the cylinder reducing its radius, unlike the repulsive forces acting on a compact ball under the same conditions. In Sec. III the Casimir energy in the problem at hand is calculated numerically for several fixed values of the parameter $`\xi ^2`$ without assuming the smallness of $`\xi ^2`$, and the corresponding plot is presented. In Sec. IV the implication of the obtained results in the flux tube model (hadronic string) describing the quark dynamics inside the hadrons is considered. In the Conclusion (Sec. V) some general properties of the Casimir effect are briefly discussed.
## II Expansion of the Casimir energy in powers of $`\xi ^2`$
We start with the formulas which allow us to construct the expansion of the Casimir energy of a compact infinite cylinder, possessing the same speed of light inside and outside, in powers of the parameter $`\xi ^2`$. The derivation of these formulas can be found in the papers cited below.
When using the mode-by-mode summation method or the zeta function technique the Casimir energy per unit length of a cylinder is represented as a sum of partial energies
$$E=\sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}E_n,$$
(1)
where
$$E_n=\frac{c}{4\pi a^2}\int _0^{\mathrm{\infty }}dy\,y\,\mathrm{ln}\left\{1-\xi ^2[y(I_n(y)K_n(y))^{\prime }]^2\right\}.$$
(2)
Here the condition
$$\epsilon _1\mu _1=\epsilon _2\mu _2=c^2$$
(3)
is assumed to hold, with $`c`$ being the velocity of light inside and outside the cylinder (in units of that velocity in vacuum). The parameter $`\xi ^2`$ in Eq. (2) is defined by the dielectric and magnetic characteristics of the material of a cylinder and a surrounding medium
$$\xi ^2=\frac{(\epsilon _1-\epsilon _2)^2}{(\epsilon _1+\epsilon _2)^2}=\frac{(\mu _1-\mu _2)^2}{(\mu _1+\mu _2)^2}.$$
(4)
The representation (1), (2) for the Casimir energy is formal because the integral in Eq. (2) diverges logarithmically at the upper limit, and the sum over $`n`$ in Eq. (1) is also divergent. These difficulties are removed by the following transformation of the sum (1):
$`E`$ $`=`$ $`{\displaystyle \sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}}\left(E_n-E_{\mathrm{\infty }}+E_{\mathrm{\infty }}\right)={\displaystyle \sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}}\left(E_n-E_{\mathrm{\infty }}\right)+{\displaystyle \sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}}E_{\mathrm{\infty }}`$ (5)
$`=`$ $`{\displaystyle \sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}}\overline{E}_n+E_{\mathrm{\infty }}{\displaystyle \sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}}n^0,`$ (6)
where
$`\overline{E}_n`$ $`=`$ $`E_n-E_{\mathrm{\infty }},n=0,\pm 1,\pm 2,\mathrm{\dots },`$ (7)
$`E_{\mathrm{\infty }}`$ $`=`$ $`E_n|_{n\to \mathrm{\infty }}=-{\displaystyle \frac{c\xi ^2}{16\pi a^2}}{\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{z^5dz}{(1+z^2)^3}}.`$ (8)
A consistent treatment of the product of two infinities $`E_{\mathrm{\infty }}\sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}n^0`$ leads to a finite result (see and, especially, )
$$E_{\mathrm{\infty }}\sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}n^0=\frac{c\xi ^2}{16\pi a^2}\mathrm{ln}(2\pi ).$$
(9)
Thus
$$E=\sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}\overline{E}_n+\frac{c\xi ^2}{16\pi a^2}\mathrm{ln}(2\pi ),$$
(10)
where
$`\overline{E}_n`$ $`=`$ $`\overline{E}_{-n}={\displaystyle \frac{c}{4\pi a^2}}{\displaystyle \int _0^{\mathrm{\infty }}}dy\,y\left\{\mathrm{ln}\left[1-\xi ^2\sigma _n^2(y)\right]+{\displaystyle \frac{\xi ^2}{4}}{\displaystyle \frac{y^4}{(n^2+y^2)^3}}\right\},n=1,2,\mathrm{\dots },`$ (11)
$`\overline{E}_0`$ $`=`$ $`{\displaystyle \frac{c}{4\pi a^2}}{\displaystyle \int _0^{\mathrm{\infty }}}dy\,y\left\{\mathrm{ln}[1-\xi ^2\sigma _0^2(y)]+{\displaystyle \frac{\xi ^2}{4}}{\displaystyle \frac{y^4}{(1+y^2)^3}}\right\},\quad \sigma _n(y)=y(I_n(y)K_n(y))^{\prime }.`$ (12)
The Casimir energy (10) is defined correctly because the integrals in Eqs. (11) and (12) exist and the sum in Eq. (10) converges . It is this formula that should be expanded in powers of $`\xi ^2`$. We confine ourselves with the first two terms in this expansion
$$E\equiv E(\xi ^2)=E^{(2)}\xi ^2+E^{(4)}\xi ^4+O(\xi ^6).$$
(13)
In the same way we have for $`\overline{E}_n`$
$$\overline{E}_n\equiv \overline{E}_n(\xi ^2)=E_n^{(2)}\xi ^2+E_n^{(4)}\xi ^4+O(\xi ^6),n=0,1,2,\mathrm{\dots },$$
(14)
where
$`E_0^{(2)}`$ $`=`$ $`-{\displaystyle \frac{c}{4\pi a^2}}{\displaystyle \int _0^{\mathrm{\infty }}}dy\,y\left[\sigma _0^2(y)-{\displaystyle \frac{y^4}{4(1+y^2)^3}}\right]={\displaystyle \frac{c}{4\pi a^2}}(-0.490878),`$ (15)
$`E_n^{(2)}`$ $`=`$ $`-{\displaystyle \frac{c}{4\pi a^2}}{\displaystyle \int _0^{\mathrm{\infty }}}dy\,y\left[\sigma _n^2(y)-{\displaystyle \frac{y^4}{4(n^2+y^2)^3}}\right],n=1,2,\mathrm{\dots },`$ (16)
$`E_0^{(4)}`$ $`=`$ $`-{\displaystyle \frac{c}{8\pi a^2}}{\displaystyle \int _0^{\mathrm{\infty }}}dy\,y\,\sigma _0^4(y)={\displaystyle \frac{c}{4\pi a^2}}(-0.0860808),`$ (17)
$`E_n^{(4)}`$ $`=`$ $`-{\displaystyle \frac{c}{8\pi a^2}}{\displaystyle \int _0^{\mathrm{\infty }}}dy\,y\,\sigma _n^4(y),n=1,2,\mathrm{\dots }.`$ (18)
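The two $`n=0`$ integrals above are straightforward to evaluate numerically; the sketch below (Python/SciPy) does so, using exponentially scaled Bessel functions to avoid overflow, and should reproduce values close to the $`-0.490878`$ and $`-0.0860808`$ quoted in Eqs. (15) and (17).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e, i1e, k0e, k1e

def sigma0(y):
    # sigma_0(y) = y * d/dy[I_0(y)K_0(y)] = y*(I_1 K_0 - I_0 K_1); the exp(-y)
    # and exp(+y) factors of the scaled functions cancel in each product.
    return y * (i1e(y) * k0e(y) - i0e(y) * k1e(y))

E2, _ = quad(lambda y: -y * (sigma0(y)**2 - y**4 / (4.0 * (1 + y**2)**3)), 0, np.inf)
E4, _ = quad(lambda y: -0.5 * y * sigma0(y)**4, 0, np.inf)
print(E2, E4)   # coefficients in units of c/(4 pi a^2)
```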
The integrals in Eqs. (16) and (18) containing Bessel functions can be calculated numerically only for $`n<n_0`$ with a certain fixed value of $`n_0`$. For all the rest partial energies with $`n\ge n_0`$ one needs an analytic expression. We derive such a formula using the uniform asymptotic expansion (UAE) for the product of the modified Bessel functions . Taking into account all the terms up to the $`n^{-6}`$ order we can write
$`\mathrm{ln}\left\{1-\xi ^2\left[y{\displaystyle \frac{d}{dy}}(I_n(ny)K_n(ny))\right]^2\right\}=`$ (19)
$`=`$ $`-\xi ^2{\displaystyle \frac{y^4t^6}{4n^2}}\left[1+{\displaystyle \frac{t^2}{4n^2}}(3-30t^2+35t^4)+{\displaystyle \frac{t^4}{4n^4}}(9-256t^2+1290t^4-2037t^6+1015t^8)\right]`$ (21)
$`-\xi ^4{\displaystyle \frac{y^8t^{12}}{32n^4}}\left[1+{\displaystyle \frac{t^2}{2n^2}}(3-30t^2+35t^4)\right]-\xi ^6{\displaystyle \frac{y^{12}t^{18}}{192n^6}}+O\left({\displaystyle \frac{1}{n^8}}\right),`$
where $`t=1/\sqrt{1+y^2}`$.
Substituting this expression into Eq. (11) and integrating with the use of the formula
$`{\displaystyle \int _0^{\mathrm{\infty }}}dy\,y^\alpha t^\beta ={\displaystyle \frac{1}{\mathrm{\hspace{0.17em}2}}}{\displaystyle \frac{\mathrm{\Gamma }\left({\displaystyle \frac{\alpha +1}{2}}\right)\mathrm{\Gamma }\left({\displaystyle \frac{\beta -\alpha -1}{2}}\right)}{\mathrm{\Gamma }\left({\displaystyle \frac{\beta }{2}}\right)}},`$ (22)
$`\text{Re}\left(\alpha +1\right)>0,\quad \text{Re}\left({\displaystyle \frac{\alpha -\beta +3}{2}}\right)<1`$ (23)
one obtains
$`\overline{E}_n`$ $`=`$ $`\overline{E}_n^{asymp}+O\left({\displaystyle \frac{1}{n^6}}\right),`$ (24)
$`\overline{E}_n^{asymp}`$ $`=`$ $`{\displaystyle \frac{c\xi ^2}{4\pi a^2}}\left({\displaystyle \frac{10-3\xi ^2}{960n^2}}-{\displaystyle \frac{28224-7344\xi ^2+720\xi ^4}{15482880n^4}}\right).`$ (25)
From here we find the coefficients $`E_n^{(2)}`$ and $`E_n^{(4)}`$ entering Eq. (14)
$`E_n^{(2)asymp}={\displaystyle \frac{c}{4\pi a^2}}\left({\displaystyle \frac{1}{96n^2}}-{\displaystyle \frac{7}{3840n^4}}\right),`$ (26)
$`E_n^{(4)asymp}=-{\displaystyle \frac{c}{4\pi a^2}}\left({\displaystyle \frac{1}{320n^2}}-{\displaystyle \frac{17}{56064n^4}}\right).`$ (27)
Now by a direct numerical calculation it is necessary to estimate the value $`n=n_0`$ starting from which the exact formulas (16) and (18) can be substituted by the approximate ones (26) and (27). In Ref. it was shown that when calculating $`E^{(2)}`$ one can begin to use the approximate formula from $`n_0=6`$
$`E^{(2)}`$ $`=`$ $`E_0^{(2)}+2{\displaystyle \sum _{n=1}^{5}}E_n^{(2)}+2{\displaystyle \sum _{n=6}^{\mathrm{\infty }}}E_n^{(2)asymp}+{\displaystyle \frac{c}{16\pi a^2}}\mathrm{ln}(2\pi )`$ (28)
$`=`$ $`E_0^{(2)}+2{\displaystyle \sum _{n=1}^{5}}E_n^{(2)}+{\displaystyle \frac{c}{4\pi a^2}}\left({\displaystyle \frac{1}{48}}{\displaystyle \sum _{n=6}^{\mathrm{\infty }}}{\displaystyle \frac{1}{n^2}}-{\displaystyle \frac{7}{1920}}{\displaystyle \sum _{n=6}^{\mathrm{\infty }}}{\displaystyle \frac{1}{n^4}}\right)+{\displaystyle \frac{c}{16\pi a^2}}\mathrm{ln}(2\pi )`$ (29)
$`=`$ $`{\displaystyle \frac{c}{4\pi a^2}}(-0.490878+0.027638+0.003778-0.000007+0.459469)`$ (30)
$`=`$ $`{\displaystyle \frac{c}{4\pi a^2}}(0.000000).`$ (31)
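Two of the entries in (30) can be checked with one-line computations: the asymptotic tail $`(1/48)\sum _{n\ge 6}n^{-2}`$ and the constant $`\mathrm{ln}(2\pi )/4`$ coming from Eq. (9), both in units of $`c/(4\pi a^2)`$.

```python
import math

tail = (math.pi**2 / 6 - sum(1.0 / n**2 for n in range(1, 6))) / 48
print(round(tail, 6))                       # 0.003778
print(round(math.log(2 * math.pi) / 4, 6))  # 0.459469
```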
This result obtained in was interpreted there as the vanishing of the Casimir energy of a compact cylinder under the condition (3). However, as it will be shown below this is not the case.
Table I shows that when calculating the coefficient $`E^{(4)}`$ in Eq. (14), one can also take $`n_0=6`$. As a result we obtain for this coefficient
$`E^{(4)}`$ $`=`$ $`E_0^{(4)}+2{\displaystyle \sum _{n=1}^{5}}E_n^{(4)}+2{\displaystyle \sum _{n=6}^{\mathrm{\infty }}}E_n^{(4)asymp}`$ (32)
$`=`$ $`E_0^{(4)}+2{\displaystyle \sum _{n=1}^{5}}E_n^{(4)}-{\displaystyle \frac{c}{4\pi a^2}}\left({\displaystyle \frac{1}{160}}{\displaystyle \sum _{n=6}^{\mathrm{\infty }}}{\displaystyle \frac{1}{n^2}}-{\displaystyle \frac{17}{56032}}{\displaystyle \sum _{n=6}^{\mathrm{\infty }}}{\displaystyle \frac{1}{n^4}}\right)`$ (33)
$`=`$ $`{\displaystyle \frac{c}{4\pi a^2}}(-0.0860808-0.008315-0.0011334+0.0000018)`$ (34)
$`=`$ $`-{\displaystyle \frac{c}{4\pi a^2}}\mathrm{\hspace{0.17em}0.095528}.`$ (35)
Thus, the Casimir energy of a compact cylinder possessing the same speed of light inside and outside does not vanish and is defined up to the $`\xi ^4`$ term by the formula
$$E(\xi ^2)=-\frac{c\xi ^4}{4\pi a^2}\mathrm{\hspace{0.17em}0.0955275}=-0.007602\frac{c\xi ^4}{a^2}.$$
(36)
In contrast to the Casimir energy of a compact ball with the same properties
$$E_{ball}\simeq \frac{3}{64a}c\xi ^2=0.046875\frac{c\xi ^2}{a}$$
(37)
the Casimir energy of the cylinder under consideration turned out to be negative. Consequently, the Casimir forces strive to contract the cylinder. The numerical coefficient in Eq. (36) proved to be really small, for example, in comparison with the analogous coefficient in Eq. (37). Probably it is a manifestation of the vanishing of the Casimir energy of a pure dielectric cylinder noted in the Introduction.
## III Numerical calculation of the Casimir energy for arbitrary $`\xi ^2`$
Equations (10)–(12) obtained in the preceding section enable one to calculate the Casimir energy $`E(\xi ^2)`$ numerically, without making any assumptions concerning the smallness of the parameter $`\xi ^2`$. Comparing the results obtained by the exact formula (11) and by the approximate one (25) we again find the value $`n=n_0`$ starting from which $`\overline{E}_n^{asymp}`$ reproduces $`\overline{E}_n`$ precisely enough. In the general case there is its own $`n_0`$ for each value of $`\xi ^2`$. Obviously, one should expect a substantial deviation from Eq. (36) only for $`\xi ^2\sim 1`$. Moreover the main contribution to the Casimir energy determined by the sum (10) is given by the term $`\overline{E}_0`$, which is now evaluated exactly using Eq. (12) without expanding in powers of $`\xi ^2`$ as was done in the preceding Section.
The results of the calculations accomplished in this way for $`E(\xi ^2)`$ are presented in Fig. 1 (solid curve). Here the Casimir energy defined by Eq. (36) as a function of $`\xi ^2`$ is also plotted (dashed curve). When $`\xi ^2=1`$ we get the Casimir energy of a perfectly conducting cylindrical shell . If we used for its calculation the approximate formula (36), we should obtain for the dimensionless energy $`ℰ=(4\pi a^2/c)E`$ the value $`-0.0955`$ instead of $`-0.1704`$. Thereby, the approximate formula (36) at this point gives a considerable error of $`70\%`$. At the same time the analogous formula (37) for a compact ball at $`\xi ^2=1`$ gives the Casimir energy of a perfectly conducting spherical shell with a few percent error .
## IV Implication of the calculated Casimir energy in the flux tube model of confinement
The constancy condition for the velocity of gluonic field when crossing the interface between two media is used, for example, in a dielectric vacuum model (DVM) of quark confinement . This model has many elements in common with the bag models , but among the other differences, in DVM there is no explicit condition of the field vanishing outside the bag. It proves to be important for calculation of the Casimir energy contribution to the hadronic mass in DVM. The point is that in the case of boundaries with nonvanishing curvature there happens a considerable (not full, however) mutual cancellation of the divergences from the contributions of internal and external (with respect to the boundary) regions. If only the field confined inside the cavity is considered, as in the bag models , then there is no such a cancellation, and one has to remove some divergences by means of renormalization of the phenomenological parameter in the model defining the QCD vacuum energy density.
From a physical point of view the vanishing of the field or its normal derivative precisely on the boundary is an unsatisfactory condition, because due to quantum fluctuations it is impossible to measure the field as accurately as desired at a certain point of the space .
In the DVM there is also considered a cavity that appears in the QCD vacuum when the invariant $`F_{\mu \nu }F^{\mu \nu }\sim 𝐄^2-𝐁^2`$ exceeds a certain critical value ($`𝐄`$ and $`𝐁`$ are the color fields). Inside the cavity the gluonic field can be treated as an Abelian field in view of the asymptotic freedom in QCD. In this approach it is assumed that in the QCD vacuum (outside the cavity) the dielectric constant tends to zero, $`\epsilon _2\to 0`$, while the magnetic permeability tends to infinity, $`\mu _2\to \mathrm{\infty }`$, in such a way that the relativistic condition $`\epsilon _2\mu _2=1`$ holds. Inside the cavity $`\epsilon _1=\mu _1=1`$. As it was shown in the present paper for a compact cylinder and in Ref. for a compact ball, in calculation of the Casimir energy the condition $`\epsilon _1\mu _1=\epsilon _2\mu _2`$ proves to be essential, and it is possible to take the limit $`\epsilon _2\to 0,\mu _2\to \mathrm{\infty }`$ in the resulting formula putting $`\xi ^2=(\epsilon _1-\epsilon _2)^2/(\epsilon _1+\epsilon _2)^2=(\mu _1-\mu _2)^2/(\mu _1+\mu _2)^2=1`$.
Hence, in the DVM as a vacuum energy of gluonic field one should take the Casimir energy of a perfectly conducting infinitely thin shell having the shape either of a sphere, or expanded ellipsoid, or cylinder. In the last case we deal with the flux tube model of confinement in which a heavy quark and antiquark are considered to be coupled through a cylindrical cavity (flux tube) in the QCD vacuum. Obviously, in the flux tube model of confinement the Casimir energy of a compact cylinder calculated under the condition $`\epsilon _1\mu _1=\epsilon _2\mu _2=1`$ should be regarded as a quantum correction to the classical string tension. To estimate this correction, it is necessary to define the value of the radius $`a`$ of the flux tube. Without pretending at high accuracy we shall take $`a`$ of the same order as the critical radius $`R_c`$ in the hadronic string model.In principle the radius of the gluonic tube may be deduced by minimizing the linear density of a total energy in this model, the QCD vacuum energy being considered to be negative . However in this case $`a`$ is expressed through the phenomenological parameter, the flux of the gluonic field, that in its turn requires a definition. At the distances between the quarks smaller than $`R_c`$ the flux tube model has no sense. In the Nambu-Goto string model $`R_c`$ is determined by the string tension $`M_0^2`$
$$R_c^2=\frac{\pi }{6M_0^2}.$$
(38)
Hence, we obtain the following estimation for the Casimir energy contribution into the string tension in the flux tube model
$$\left|\frac{8E}{M_0^2}\right|=8\frac{7.6\cdot 10^{-3}}{a^2M_0^2}=8\frac{7.6\cdot 10^{-3}}{R_c^2M_0^2}\simeq 0.1.$$
(39)
The factor $`8`$ accounts for the contribution of the eight gluonic field components to the string tension. Thus, unlike the conclusion made in \[17, (1988)\], the quantum correction to the classical string tension, determined by the gluonic field confined in the flux tube, turned out to be essential ($`\sim 10\%`$). This fact should be taken into account in detailed examination of this model.
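Written out, the estimate (39) uses only Eq. (38) and the coefficient from Eq. (36); the $`M_0`$ dependence cancels.

```python
import math

# 8*|E|/M0^2 with a = R_c and R_c^2 = pi/(6*M0^2), as in Eqs. (36), (38), (39).
correction = 8 * 7.6e-3 * 6 / math.pi
print(round(correction, 2))   # 0.12, i.e. of order 0.1
```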
## V Conclusion
The Casimir energy of a compact cylinder under the condition $`\epsilon \mu =c^2`$ does not vanish, but it is negative with the absolute value increasing as $`\xi ^4`$ for small $`\xi ^2`$. The Casimir forces seek to contract the cylinder.
The calculation of the vacuum energy for the boundary conditions of different geometries both with the account for the properties of the materials and without such accounting enables one to make the following general conclusion. In a concrete problem the direction of the Casimir forces is determined only by the geometry of the boundaries. Dielectric and magnetic properties of the media cannot change the direction of these forces.
This conclusion is confirmed by the calculation of the Casimir effect for parallel conducting plates, for a sphere and cylinder, these boundaries being considered in the vacuum or dividing the materials with different dielectric and magnetic properties. Even a dilute dielectric cylinder mentioned above does not violate this pattern. Maybe the Casimir forces in this case vanish in fact, but there are no indications that they can become repulsive.
The account for the dispersion probably does not change this inference. The calculation of the Casimir energy carried out in for a compact ball with $`\epsilon `$ and $`\mu `$ dependent on the frequencies of electromagnetic oscillations $`\omega `$ confirms this. In Ref. the Casimir forces affecting a compact cylinder when $`\epsilon (\omega )\mu (\omega )=c^2`$ were investigated. To remove the divergences the authors introduced a double cutoff over the frequency $`\omega _0`$ and over the angular momentum $`n_0`$. The finite answer proved to be very involved and depended on the cutoff parameters, but the Casimir forces are attractive as in our consideration. However there are other points of view concerning the role of dispersion in the Casimir effect .
###### Acknowledgements.
This work was accomplished with financial support of Russian Foundation for Basic Research (Grant No. 97-01-00745).
|
no-problem/9907/cond-mat9907179.html
|
ar5iv
|
text
|
# Na-site substitution effects on the thermoelectric properties of NaCo2O4
## Abstract
The resistivity and thermopower of Na<sub>1+x</sub>Co<sub>2</sub>O<sub>4</sub> and Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> are measured and analyzed. In Na<sub>1+x</sub>Co<sub>2</sub>O<sub>4</sub>, whereas the resistivity increases with $`x`$, the thermopower is nearly independent of $`x`$. This suggests that the excess Na is unlikely to supply carriers, and decreases effective conduction paths in the sample. In Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub>, the resistivity and the thermopower increase with $`x`$, and the Ca<sup>2+</sup> substitution for Na<sup>+</sup> reduces the majority carriers in NaCo<sub>2</sub>O<sub>4</sub>. This means that they are holes, which is consistent with the positive sign of the thermopower. Strong correlation in this compound is evidenced by the peculiar temperature dependence of the resistivity.
There is growing interest in the hunt for new thermoelectric (TE) materials, reflecting the urgent need for a new energy-conversion system in harmony with our environment. A TE material generates electric power in the presence of a temperature gradient through the Seebeck effect, and pumps heat in the presence of an electric current through the Peltier effect. A serious drawback is the low conversion efficiency: it is characterized by the so-called “figure of merit” $`Z=S^2/\rho \kappa `$, where $`S`$, $`\rho `$ and $`\kappa `$ are the thermopower, resistivity and thermal conductivity of a TE material, respectively. In other words, a good TE material is one that shows large $`S`$, low $`\rho `$ and low $`\kappa `$. However, a high value of $`Z`$ is difficult to realize, because the three parameters cannot be changed independently. To overcome this difficulty, a number of new concepts and new materials have been examined.
Recently we have observed that the layered cobalt oxide NaCo<sub>2</sub>O<sub>4</sub> exhibits an unusually large $`S`$ (100 $`\mu `$V/K at 300 K) accompanied by a low $`\rho `$ (200 $`\mu \mathrm{\Omega }`$cm at 300 K) along the direction parallel to the CoO<sub>2</sub> plane. NaCo<sub>2</sub>O<sub>4</sub> belongs to the layered Na bronzes Na<sub>x</sub>CoO<sub>2</sub>, which were studied as a cathode material for sodium batteries. During that characterization, Molenda et al. first found a large $`S`$ in Na<sub>0.7</sub>CoO<sub>2</sub>. Although they noticed that $`S`$ was anomalously large, they did not mention the possibility of a TE material. Their samples were polycrystals, whose resistivity was 2-4 m$`\mathrm{\Omega }`$cm at 300 K, much higher than that of our crystals. Our finding is that the carrier density ($`n`$) is of the order of $`10^{21}`$–$`10^{22}`$ cm<sup>-3</sup>, two orders of magnitude larger than $`n`$ of conventional TE materials. This is difficult to understand in the framework of a conventional one-electron picture, and may indicate a route to good TE materials different from the conventional approach. We have proposed that strong electron-electron correlation plays an important role in the enhancement of the thermopower of NaCo<sub>2</sub>O<sub>4</sub>.
Even in a correlated system, we can expect that a conductor of low $`n`$ will have a large $`S`$, because the diffusive part of $`S`$ is the transport entropy per carrier, of the order of $`(k_B/e)(k_BT/E_F)`$, where $`E_F`$ is the Fermi energy. Thus it is tempting to try to improve the TE properties of NaCo<sub>2</sub>O<sub>4</sub> by decreasing $`n`$. Three ways to change $`n`$ in NaCo<sub>2</sub>O<sub>4</sub> readily come to mind: (i) doping of excess Na<sup>+</sup>, (ii) the substitution of Ca<sup>2+</sup> for Na<sup>+</sup>, and (iii) a change of the oxygen content. We discard (iii), because changing the oxygen content would seriously deteriorate the conduction paths consisting of Co and O. Here we report on the resistivity and thermopower of Na<sub>1+x</sub>Co<sub>2</sub>O<sub>4</sub> and Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> to study the doping effects.
We prepared polycrystalline samples of Na<sub>1+x</sub>Co<sub>2</sub>O<sub>4</sub> and Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> by solid-state reaction. Since Na is volatile, we added 10 % excess Na; that is, we expected a starting composition of Na<sub>1.1</sub>Co<sub>2</sub>O<sub>4</sub> to yield NaCo<sub>2</sub>O<sub>4</sub>. An appropriate mixture of Na<sub>2</sub>CO<sub>3</sub>, CaCO<sub>3</sub> and Co<sub>3</sub>O<sub>4</sub> was thoroughly ground and sintered at 860–920°C for 12 h in air. The sintered powder was then pressed into a pellet and sintered again at 800–920°C for 6 h in air.
The x-ray diffraction (XRD) was measured using a standard diffractometer with Fe K<sub>α</sub> radiation as the x-ray source in the $`\theta `$–2$`\theta `$ scan mode. Note that Cu K<sub>α</sub> radiation is not suitable for this compound, because it excites the fluorescent x rays of Co, which produce a high background in the XRD pattern. $`\rho `$ was measured by a four-probe method, in which the electric contacts, with a contact resistance of 1 $`\mathrm{\Omega }`$, were made with silver paint (Dupont 4922). $`S`$ was measured using a steady-state technique. A temperature gradient (of about 0.5 K/cm) was generated by a small resistive heater pasted on one edge of the sample, and was monitored by a differential thermocouple made of copper-constantan. The thermopower of the voltage leads was carefully subtracted. The temperature ($`T`$) was controlled from 4.2 to 300 K in a liquid-He cryostat, and was monitored with a CERNOX resistance thermometer.
Figure 1 shows typical XRD patterns of the prepared samples. Almost all the peaks are indexed as the P2 phase reported by Jansen and Hoppe, though a tiny trace of impurity phases is detected, as marked in Fig. 1. Note that all the XRD patterns are nearly the same, which means that XRD is not very powerful for characterizing these samples. Thus the best way to characterize the samples is to measure their thermoelectric properties directly. Usually an impurity phase containing Na would be Na<sub>2</sub>O, which exists as deliquesced NaOH (Na<sub>2</sub>O +H<sub>2</sub>O). We think, however, that Na<sub>2</sub>O is not a major impurity phase in the present case: the samples are stable enough to handle in air, and neither the contact resistance nor the surface deteriorates after several hours of exposure to air.
Figure 2(a) shows $`\rho `$ for Na<sub>1+x</sub>Co<sub>2</sub>O<sub>4</sub> plotted as a function of $`T`$. Both the magnitude and the $`T`$ dependence are consistent with previous studies. All the samples show metallic conduction down to 4.2 K without any upturn at low temperatures. This suggests that the conduction paths are not disturbed by the doped excess Na. The $`T`$ dependence of $`\rho `$ roughly resembles the in-plane resistivity of single-crystal NaCo<sub>2</sub>O<sub>4</sub>, implying that the conduction of the polycrystals is mainly determined by the in-plane conduction. Note that $`\rho `$ for $`x`$=0 is higher than $`\rho `$ for $`x`$=0.1, which suggests that a small amount of Na evaporates during the sintering process.
In contrast to the change of $`\rho `$ with $`x`$, $`S`$ for Na<sub>1+x</sub>Co<sub>2</sub>O<sub>4</sub> is nearly independent of $`x`$, as shown in Fig. 2(b). This indicates that $`n`$ is unaffected by the Na doping. At first sight it is puzzling that the doped monovalent Na<sup>+</sup> does not change $`n`$. We point out two possibilities: one is that the excess Na is excluded from the crystal and increases the resistance at the grain boundaries; the other is that it stays in the grains and forms an insulating phase nearby. Note that NaCoO<sub>2</sub> (corresponding to $`x`$=1) is an insulator. In both cases, the excess Na cations decrease the number of conduction paths and thus reduce the effective cross section for the current.
In remarkable contrast to Fig. 2(a), Fig. 3(a) shows a drastic change of $`\rho `$ for Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> with $`x`$. Above 50 K, $`\rho `$ for $`x`$=0 shows a positive curvature, while $`\rho `$ for $`x`$=0.35 shows a negative curvature and tends to saturate near 300 K. Unlike the case of the excess Na, the residual resistivity, though not well defined, tends to increase with $`x`$, which means that Ca acts as a scattering center. $`S`$ also increases with $`x`$, as shown in Fig. 3(b). Considering that both $`\rho `$ and $`S`$ increase with Ca, we conclude that the substitution of Ca<sup>2+</sup> for Na<sup>+</sup> decreases the carrier density. The majority carriers of NaCo<sub>2</sub>O<sub>4</sub> are therefore holes, which is consistent with the transport properties of Na<sub>0.7</sub>CoO<sub>2-δ</sub>. As expected, the TE properties are (slightly) improved by decreasing $`n`$, and $`S^2/\rho `$ is maximized at $`x`$=0.15.
One may notice that Na<sub>1.1</sub>Co<sub>2</sub>O<sub>4</sub> shows a different $`\rho `$ in Figs. 2 and 3. The magnitude of $`\rho `$ varied from batch to batch, possibly because the grain growth is difficult to control. (The thermopower is less affected by grain boundaries, and the measured $`S`$ was independent of batch within experimental errors.) To check the reproducibility we made Na<sub>1.1</sub>Co<sub>2</sub>O<sub>4</sub> as a reference in every preparation run. Figure 4 shows $`\rho `$ for Na<sub>1.1</sub>Co<sub>2</sub>O<sub>4</sub> prepared in different runs, where the magnitude of $`\rho `$ is scattered beyond the experimental errors (about 10%). We note that the relative change of $`\rho `$ within the same batch is reproducible, and the $`T`$ dependence is essentially identical from batch to batch: all the $`\rho (T)`$ data in Fig. 4, normalized at 295 K, fall onto a single curve, as shown in the inset of Fig. 4.
In Figure 5, $`\rho `$ of Na<sub>1.1</sub>Co<sub>2</sub>O<sub>4</sub> from Fig. 2(a) is plotted on a log-log scale. Since the plot is linear below 50 K and above 80 K, $`\rho `$ is proportional to $`T^p`$ in these two regions. By fitting $`\rho `$ with $`T^p`$, we estimated $`p`$ to be 0.67 below 50 K and 1.2 above 80 K (see the solid and dashed lines in Fig. 5). We make three remarks on the $`T`$ dependence of $`\rho `$. First, it is a piece of evidence for strong correlation that $`\rho `$ continues to decrease with decreasing $`T`$ down to 4.2 K, where phonons are hardly excited thermally. At the least we can say that the conduction in this system is not dominated by conventional electron-phonon scattering. Secondly, the $`T`$ dependence of $`\rho `$ of this system is not typical of strongly correlated systems. In usual strongly correlated systems the electron-electron scattering rate is proportional to $`(k_BT/E_F)^2`$, and most heavy fermions, organic conductors and transition-metal oxides accordingly show $`\rho \propto T^2`$. As shown in the inset of Fig. 5, $`\rho `$ for Na<sub>1.1</sub>Co<sub>2</sub>O<sub>4</sub> is not proportional to $`T^2`$ at any temperature. A prime exception is the $`T`$-linear resistivity in high-$`T_c`$ superconductors. Actually, $`\rho `$ and $`S`$ of NaCo<sub>2</sub>O<sub>4</sub> are qualitatively consistent with some theories for high-$`T_c`$ superconductors. In particular, $`\rho `$ of Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> can be explained by adjusting the parameters in Ref. . Thirdly, none of the samples shows any indication of localization. This means that the mean free path (MFP) of the carriers is much longer than the lattice parameters, and that the carriers do not feel the disorder in the Na layer. On the other hand, phonons will be affected by the disorder in the Na layer, since the disordered Na<sup>+</sup> ions form ionic bonds with the adjacent O<sup>2-</sup> ions. In fact, a preliminary measurement has revealed that $`\kappa `$ for Na<sub>1.1</sub>Co<sub>2</sub>O<sub>4</sub> is as low as 10 mW/cmK, suggesting that the MFP of the phonons is of the order of the lattice spacing. Thus the MFP of the carriers is much longer than that of the phonons in NaCo<sub>2</sub>O<sub>4</sub>. We therefore propose that this material belongs to a new class of “electron crystals and phonon glasses”.
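A minimal sketch (ours, not part of the paper) of how such exponents are extracted: a power law appears as a straight line on a log-log plot, so a linear least-squares fit of log $`\rho `$ versus log $`T`$ in each temperature window gives $`p`$ as the slope. The data below are synthetic placeholders, not the measured resistivity.

```python
import numpy as np

# Hedged sketch: extract p in rho ~ T^p from (T, rho) data by a straight-line
# fit of log(rho) vs log(T).  The synthetic data merely mimic the quoted
# exponents; they are not the measured Na1.1Co2O4 data set.
def power_law_exponent(T, rho, T_min, T_max):
    T, rho = np.asarray(T), np.asarray(rho)
    sel = (T >= T_min) & (T <= T_max)
    slope, _ = np.polyfit(np.log(T[sel]), np.log(rho[sel]), 1)
    return slope

T = np.linspace(5.0, 300.0, 300)
rho = np.where(T < 65.0, T**0.67, 65.0**(0.67 - 1.2) * T**1.2)  # crossover near 65 K
print(power_law_exponent(T, rho, 5, 50))    # ~0.67, as quoted below 50 K
print(power_law_exponent(T, rho, 80, 300))  # ~1.2,  as quoted above 80 K
```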
Finally let us comment on the strong correlation. Since the diffusive part of $`S`$ corresponds to the transport entropy, as mentioned above, a larger electronic specific heat can give a larger $`S`$. Thus $`S`$ would be enhanced if the carriers could couple to some additional entropy such as optical phonons, spin fluctuations, or orbital fluctuations. Recently a similar scenario was independently proposed by Palsson and Kotliar. Heavy fermions and valence-fluctuation systems are indeed such cases, and some of them show a large $`S`$. Very recently Ando et al. measured the specific heat of Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> at low temperatures and found a large electronic specific-heat coefficient of 48 mJ/mol K<sup>2</sup>, which is one order of magnitude larger than in conventional metals.
In summary, we have prepared polycrystals of Na<sub>1+x</sub>Co<sub>2</sub>O<sub>4</sub> and Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub>, and measured the resistivity and thermopower from 4.2 to 300 K. The excess Na and the substituted Ca affect the transport properties of NaCo<sub>2</sub>O<sub>4</sub> differently. The former seems to decrease the effective conducting region, and the latter decreases the carrier density. The temperature dependence of the resistivity is drastically changed by substituting Ca, which strongly suggests that the scattering mechanism depends on the carrier density. Combining this with the peculiar temperature dependence of the resistivity, we conclude that strong electron-electron correlation plays an important role in this compound.
The authors would like to thank Y. Ando, K. Segawa and N. Miyamoto for collaboration. They are also indebted to H. Yakabe, K. Fukuda, K. Kohn, S. Kurihara, S. Saito, and M. Takano for the fruitful discussion.
|
no-problem/9907/hep-ph9907292.html
|
ar5iv
|
text
|
# Condensation of a Strongly Interacting Parton Plasma into a Hadron Gas in High Energy Nuclear Collisions
## 1 Introduction
Heavy ion collision experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven are going to study the possible recreation of a deconfined parton plasma. This plasma consists of gluons, quarks and antiquarks which can roam freely in a region of sizable spatial extent. This is made possible by the high energies of the partons, through the asymptotic freedom property of QCD, and by the color screening property of a dense medium of color charges. The latter weakens the otherwise very strong color fields responsible for keeping the partons well hidden inside hadrons under normal circumstances. This same screening property has the additional advantage of removing the infrared divergence in the parton interactions, so there is no need to introduce an ad hoc and somewhat arbitrary momentum cutoff as in the usual studies within perturbative QCD in a vacuum setting. This signals important differences between a dense parton system and an extremely dilute one, as found in deep inelastic scattering or in $`e^+e^{-}`$ annihilation. Because the latter have been studied for well over a decade, it is tempting, and certainly easiest, to apply all the acquired knowledge and understanding to heavy ion collisions in exactly the same way as in those studies. In view of the above-mentioned differences in the environment, this cannot be entirely correct. It is clear that one has to bear these differences in mind when building numerical models.
The time evolution of a parton plasma has been studied in various models . What we have learned is that gluon equilibration tends to be fast, both thermally and chemically, but that this is not so for quarks and antiquarks. Regardless of the state of the system, however, that is, whether it is fully equilibrated or not, hadronization will take place once the conditions are right. Medium effects such as the Landau-Pomeranchuk-Migdal effect in gluon emission and the shielding of the infrared divergence have been considered in , and in only the former has been incorporated. All were done, however, within the deconfined phase. In this talk, we are concerned with the next stage in the time evolution of the parton system, that is, the transition into hadrons and the medium effects on it. In , a hadronization scheme was introduced at the end of the time evolution of the parton cascades, but the role of the medium in hadronization, which we consider here, was totally neglected.
In our previous investigation, the interaction strength in a parton plasma was found to increase in time, owing to the decreasing average energy of the system ; this has the consequence of a screening mass $`m_D`$ that increases with time, at least at RHIC for the duration that we have investigated. The screening length therefore behaves in the opposite manner. We obtained a screening length $`l_D\simeq 0.4`$ fm at the end of our time evolution, when the temperature estimates fell to $`\simeq 200`$ MeV. This is unfavorable for hadronization because this $`l_D`$ value is comparable to the typical size of common hadrons. This fact is further reinforced by the lattice calculation of $`m_D`$ up to $`𝒪(g^3)`$ in , which found it to be larger than the leading-order result, with $`m_D\simeq 3.3m_D^{\mathrm{LO}}`$. Using their results and choosing $`\overline{\mathrm{\Lambda }}_{\mathrm{QCD}}\simeq 234`$ MeV, we find at $`T\simeq 200`$ MeV
$$m_D^{\mathrm{LO}}\simeq 405\,\mathrm{MeV}\rightarrow l_D\simeq l_D^{\mathrm{LO}}/3.3\simeq 0.16\,\mathrm{fm}$$
and at $`T\simeq 150`$ MeV
$$m_D^{\mathrm{LO}}\simeq 332\,\mathrm{MeV}\rightarrow l_D\simeq l_D^{\mathrm{LO}}/3.3\simeq 0.20\,\mathrm{fm}.$$
These are small sizes compared to most hadrons. So it is clear that the color screening barrier to hadronization is not small in a parton plasma found at RHIC.
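The conversion behind the numbers above is simply $`l_D=\hbar c/m_D`$; a quick check (our illustration, not part of the original text) reproduces values close to the quoted ones.

```python
# Hedged check of the quoted screening lengths, using l_D = hbar*c / m_D.
hbar_c = 197.327   # MeV fm

for mD_LO, T in ((405.0, 200), (332.0, 150)):     # leading-order Debye masses (MeV)
    lD_LO = hbar_c / mD_LO                        # leading-order screening length
    lD = lD_LO / 3.3                              # with m_D ~ 3.3 m_D^LO from the lattice
    print(f"T ~ {T} MeV: l_D^LO = {lD_LO:.2f} fm, l_D = {lD:.2f} fm")
# Gives about 0.15 fm and 0.18 fm, close to the quoted 0.16 fm and 0.20 fm.
```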
## 2 Time Evolution Equations for the Parton-Hadron Conversion
In , we wrote down the equations for the time evolution of a parton plasma undergoing one-dimensional expansion. The main ingredient of our scheme is to combine the reduced Boltzmann equation, the relaxation time approximation for the parton collision terms $`C_i^\mathrm{p}`$ and their explicit perturbative construction. The resulting equations are
$$\frac{\text{d}f_i^\mathrm{p}}{\text{d}\tau }|_{p_z\tau =\mathrm{const}.}=-\frac{f_i^\mathrm{p}-f_{ieq}^\mathrm{p}}{\theta _i^\mathrm{p}}=C_i^\mathrm{p}.$$
(1)
With this combination, the distributions $`f_i^\mathrm{p}`$ which describe completely the time development of the system can be solved.
To extend the time evolution beyond the parton phase and to try to learn something about the medium effects on the parton-hadron conversion, it is not necessary to have the full three-dimensional expansion. There is also no need for a full set of hadrons. So we will continue with one-dimensional expansion and only consider pions and kaons. Moreover these hadrons or resonances will be assumed to consist of only a quark and an antiquark $`q\overline{q}^{}`$ and thus their formation will be from the clustering of $`q`$ and $`\overline{q}^{}`$. Since the parton plasma is gluon dominated, the gluons must be converted somehow into $`q`$ and $`\overline{q}`$. Perturbative conversion is highly inefficient so a non-perturbative mechanism must be introduced. In , exactly such a gluon splitting mechanism was introduced for this very purpose in the context of $`e^+e^{}`$ annihilations. Although our parton plasma is different from a parton shower, a term of similar nature $`C_{igq}^\mathrm{p}`$ will be introduced in the time evolution equations. Together with some new confining terms $`C_{iph}^\mathrm{p}`$ which describe the clustering of color singlet $`q\overline{q}^{}`$ pairs into resonances and the subsequent decay into hadrons, the time evolution equations become
$$\frac{\text{d}f_i^\mathrm{p}}{\text{d}\tau }|_{p_z\tau =\mathrm{const}.}=-\frac{f_i^\mathrm{p}-f_{ieq}^\mathrm{p}}{\theta _i^\mathrm{p}}=C_i^\mathrm{p}+C_{igq}^\mathrm{p}+C_{iph}^\mathrm{p}.$$
(2)
Because of the parton-hadron conversion, an equation for each hadron will also have to be introduced. Using the same method, we write
$$\frac{\text{d}f_i^\mathrm{h}}{\text{d}\tau }|_{p_z\tau =\mathrm{const}.}=-\frac{f_i^\mathrm{h}-f_{ieq}^\mathrm{h}}{\theta _i^\mathrm{h}}=C_{iph}^\mathrm{h}.$$
(3)
Because they are non-essential to our investigation, no hadron-hadron or parton-hadron interactions are included in Eqs. (2) and (3). These form our basic set of equations for the conversion of the parton plasma into a hadron gas. The explicit forms of the $`C_i^\mathrm{p}`$’s and $`C_i^\mathrm{h}`$’s can be found in . We stress that no medium effect on hadronization has yet been included.
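To make the structure of the relaxation-time form transparent, the toy sketch below (ours) integrates d$`f`$/d$`\tau =-(f-f_{eq})/\theta `$ along a single characteristic. The constant relaxation time and the assumed equilibrium curve are placeholders for illustration only, not the self-consistent $`\theta _i`$ and $`f_{ieq}`$ of the actual calculation.

```python
# Toy sketch of the relaxation-time equation df/dtau = -(f - f_eq)/theta along a
# single momentum characteristic (p_z*tau = const).  theta and f_eq(tau) are
# assumed placeholders, not the self-consistent quantities of the paper.
theta = 2.0                          # assumed relaxation time (fm)
f_eq = lambda tau: 0.5 / tau         # assumed equilibrium occupation along the characteristic

tau, f, dtau = 0.25, 0.05, 0.005
while tau < 5.0:
    f += -dtau * (f - f_eq(tau)) / theta     # explicit Euler step
    tau += dtau
print(f, f_eq(5.0))   # f has relaxed toward the slowly varying equilibrium value
```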
To understand how medium effects such as color screening would affect hadronization, we use the following physical picture. Each hadron must have a certain physical size which can be thought of as the internal separation $`b`$ of the $`q`$ and $`\overline{q}^{}`$ pair. Because of the internal motion, this separation is not fixed and smaller separations should be more favorable. So this likelihood can be parametrized by a distribution $`F(b)`$. Therefore for a hadron or resonance existing inside a color screening medium, there is a chance that it will dissolve depending on whether $`b<l_D`$ or $`b>l_D`$ which we represent by $`P_<`$ and $`P_>`$, respectively. They are related to the distribution by $`P_<=_0^{l_D}\text{d}bF(b)`$ and $`P_>=_{l_D}^{\mathrm{}}\text{d}bF(b)`$. They also dictate whether a hadron or resonance can be formed or not. With these probabilities, the parton and hadron time evolution equations with color screening can now be written in the following forms
$$\frac{\text{d}f_i^\mathrm{p}}{\text{d}\tau }|_{p_z\tau =\mathrm{const}.}=-\frac{f_i^\mathrm{p}-f_{ieq}^\mathrm{p}}{\theta _i^\mathrm{p}}=\left(C_i^\mathrm{p}-C_{iq_a\overline{q}_a^{\prime }}^\mathrm{p}\right)+C_{iq_a\overline{q}_a^{\prime }}^{\mathrm{p}\prime }+C_{igq}^{\mathrm{p}\prime }+C_{iph}^{\mathrm{p}\prime }+C_{ihp}^\mathrm{p},$$
(4)
$$\frac{\text{d}f_i^\mathrm{h}}{\text{d}\tau }|_{p_z\tau =\mathrm{const}.}=-\frac{f_i^\mathrm{h}-f_{ieq}^\mathrm{h}}{\theta _i^\mathrm{h}}=C_{iph}^{\mathrm{h}\prime }+C_{ihp}^\mathrm{h}.$$
(5)
$`C_{iq_a\overline{q}_a^{\prime }}^\mathrm{p}`$ is the color-singlet $`q\overline{q}^{\prime }`$ scattering term, and the primed $`C^{\prime }`$’s are almost the same terms as in Eqs. (2) and (3) above but are now weighted by either $`P_<`$ or $`P_>`$. The new terms $`C_{ihp}`$ describe the melting of hadrons as discussed above. Their explicit forms, further details of the above equations, and related discussions are given in .
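As an illustration of how $`P_<`$ and $`P_>`$ enter (our sketch; the actual form of $`F(b)`$ used in the calculation is not reproduced here), one can take an assumed exponential size distribution:

```python
import math

# Hedged sketch: P_< and P_> for an ASSUMED exponential separation distribution
# F(b) = exp(-b/b0)/b0.  The scale b0 and the screening lengths are illustrative.
def screening_probabilities(l_D, b0=0.5):
    p_less = 1.0 - math.exp(-l_D / b0)   # P_< = integral of F(b) from 0 to l_D
    return p_less, 1.0 - p_less          # (P_<, P_>)

for l_D in (0.16, 0.4, 1.0):             # screening lengths in fm
    p_lt, p_gt = screening_probabilities(l_D)
    print(f"l_D = {l_D:.2f} fm: P_< = {p_lt:.2f}, P_> = {p_gt:.2f}")
# A small l_D (strong screening) gives a small P_<, i.e. clustering into hadrons
# is strongly suppressed while melting is favored.
```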
## 3 Color Screening Effects on Hadronization in a Strongly Interacting Parton Plasma
Solving for the distributions from the color-screened and unscreened equations above, we can compare the effects of the medium on hadronization and vice versa . In Fig. 1, the $`\pi `$ and $`K`$ number densities $`n_h`$ are plotted against time in the left panel and the parton fugacities $`l_g,l_q`$ in the right panel. In the $`n_h`$ plot, three pairs of results, color screened (solid) and unscreened (dashed), are shown from top to bottom for the $`\pi ^\pm `$, $`\pi ^0`$ and $`K^{\pm ,0}`$, respectively. Clearly there is a slow start and a delay in forming hadrons in a properly color-screened plasma, because of the struggle between confinement and color screening. Consequently, the maximum densities are lower because of the expansion. The right panel shows one effect of hadronization on the medium. The top (bottom) three curves are the $`l_g`$ ($`l_q`$) results. The long-dashed, dotted and solid curves are for the case with no hadronization, with hadronization but no screening, and with both included, respectively. Parton equilibration is thus seen to be seriously disrupted by confinement. Unless equilibration is extremely fast and finishes well before the phase transition, which is unlikely, a fully equilibrated quark-gluon plasma should not be expected. In the two cases with hadronization, the disruption to the partons is again delayed when screening is included, so any model without screening will not get the hadronization time scale correct. Other effects, such as the lower hadron densities caused by the delay, will affect estimates of the background contributions from the hadron gas to the proposed signatures of the quark-gluon plasma. In view of the imminent operation of RHIC, a proper incorporation of medium effects in numerical models of the parton-hadron transition is urgently needed.
|
no-problem/9907/hep-ph9907424.html
|
ar5iv
|
text
|
# 𝐵_𝑐 Production at RHIC as a Signal for Deconfinement
## 1 Introduction
This work investigates the possibility that the production of $`B_c`$ mesons at RHIC may serve as a signal for the presence (or absence) of a deconfined state of matter . The study of the b-c sector has the advantage of a long history of potential model analysis in the $`b\overline{b}`$ and $`c\overline{c}`$ sectors. These studies have provided robust predictions for the mass and lifetime of the $`B_c`$ states, and the recent measurements by CDF are consistent with those calculations.
Let us first estimate the production rates of the various heavy quarks and mesons at RHIC that one would expect if production resulted simply from a superposition of the initial nucleon-nucleon collisions. For heavy quark production, pQCD calculations for p-p interactions fit present accelerator data and bracket the RHIC energy range. Hard Probes Collaboration estimates indicate about 10 $`c\overline{c}`$ pairs and 0.05 $`b\overline{b}`$ pairs per central collision at RHIC. $`J/\psi `$ and $`\mathrm{{\rm Y}}`$ production involves the use of some model, such as the Hard Probes color singlet fits, which would predict bound-state fractions somewhat below the one percent level.
A similar analysis for $`B_c`$ production reaches significantly different results<sup>1</sup><sup>1</sup>1In the following we include in the term $`B_c`$ also the vector 1S state $`B_c^{*}`$, since its mass splitting should only allow an electromagnetic decay into the pseudoscalar ground state, so that both contribute identically to the experimental signatures. Since the $`b`$ and $`\overline{c}`$ must be produced in the same nucleon-nucleon interaction, parton subprocesses of order $`\alpha _s^4`$ are the leading-order contributions. This leads to a substantial reduction of the bound-state fraction
$$R_b\equiv \frac{B_c+B_c^{*}}{b\overline{b}}$$
relative to the few-percent levels for the corresponding $`\mathrm{{\rm Y}}`$ state fractions. At RHIC energies, typical values are $`R_b=3`$–$`10\times 10^{-5}`$, with the uncertainty coming from the scale choice in the pQCD calculations.
To convert these numbers into $`B_c`$ production predictions for RHIC, we have looked at two scenarios for the luminosity. a) The “first year” case assumes a luminosity of 20 inverse microbarns with no trigger. b) The “design” luminosity assumes 65 Hz event rate with a 10% centrality trigger in Phenix, and uses $`10^7`$ sec/year. The predictions we obtain are listed in Table 1. Included in the estimates are both the weak branching fraction of the $`B_c`$ plus the dimuon decay fraction for $`J/\psi `$. Similar numbers are shown for the $`J/\psi `$ and $`\mathrm{{\rm Y}}`$ production and detection via $`\mu ^+\mu ^{}`$, and also the underlying heavy quark production which may be useful to make contact with other estimates. One sees easily that in this scenario there is no hope of seeing $`B_c`$’s at RHIC.
## 2 Deconfinement Scenario
We come now to the principal reason for our interest: could deconfinement change the $`B_c`$ production rate at RHIC? We have investigated the following scenario. In those events in which a $`b\overline{b}`$ pair is produced, one could avoid the small $`B_c`$ formation fraction if the $`b`$-quarks are allowed to form bound states by combining with $`c`$-quarks from among the 10 $`c\overline{c}`$ pairs already produced by independent nucleon-nucleon collisions in the same event. This can occur if and only if there is a region of deconfinement which allows a spatial overlap of the b and c quarks. In addition, one would expect some $`c\overline{c}`$ production in the deconfined phase during its lifetime, as a result of the approach toward chemical equilibration. The large binding energy of the $`B_c`$ (840 MeV) favors its early “freeze-out”, and these states will tend to survive as the temperature drops to the phase-transition value. The same effect for the B mesons, and indeed for the $`B_s`$, will not be so competitive, since these states are not bound at the initial high temperatures (or, equivalently, they are ionized at a relatively high rate by thermal gluons).
To do a quantitative estimate of these effects, we calculate the dissociation rate of bound states due to collisions with gluons, utilizing a quarkonium break-up cross section based on the operator product expansion :
$$\sigma _B(k)=\frac{2\pi }{3}\left(\frac{32}{3}\right)^2\left(\frac{2\mu }{ϵ_o}\right)^{1/2}\frac{1}{4\mu ^2}\frac{(k/ϵ_o-1)^{3/2}}{(k/ϵ_o)^5},$$
(1)
where $`k`$ is the gluon momentum, $`ϵ_o`$ the binding energy, and $`\mu `$ the reduced mass of the quarkonium system. This form assumes the quarkonium system has a spatial size small compared with the inverse of $`\mathrm{\Lambda }_{QCD}`$, and its bound state spectrum is close to that in a nonrelativistic Coulomb potential. The magnitude of the cross section is controlled just by the geometric factor $`\frac{1}{4\mu ^2}`$, and its rate of increase in the region just above threshold is due to phase space and the p-wave color dipole interaction.
For the breakup rate $`\lambda _B`$ of $`B_c`$ states in deconfined matter, we calculate the thermal average:
$$\lambda _B=\langle v_gn_g\sigma _B\rangle =\frac{8}{\pi ^2}\int _{ϵ_o}^{\infty }k^2dk\,e^{-k/T}\sigma _B(k),$$
(2)
where $`v_g`$ = 1 and all modes of massless color-octet gluons have been included. Numerical results for these rates are shown in Fig. 1. For comparison, breakup rates are also shown for the $`J/\psi `$ and $`\mathrm{{\rm Y}}`$ (and even the $`B_s`$, but the approximations made for this cross section probably have only a marginal validity for such a large state). One sees that in the range of temperatures expected at RHIC, these breakup rates for $`B_c`$ lead to time scales of order $`1`$–$`10\mathrm{fm}`$.
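The thermal average of Eq. (2) is straightforward to evaluate numerically. The sketch below (ours) uses illustrative inputs: the $`B_c`$ binding energy of 840 MeV quoted above and an assumed reduced mass built from $`m_b=4.8`$ GeV and $`m_c=1.5`$ GeV; it is not a reproduction of the inputs behind Fig. 1.

```python
import numpy as np

# Hedged sketch of Eqs. (1)-(2): gluon-dissociation cross section and its thermal
# average.  mu (reduced mass) uses ASSUMED quark masses; eps0 = 0.84 GeV is the
# B_c binding energy quoted in the text.  Natural units (GeV), converted at the end.
hbar_c = 0.1973                 # GeV fm
eps0 = 0.84                     # GeV
mu = 4.8 * 1.5 / (4.8 + 1.5)    # assumed m_b = 4.8 GeV, m_c = 1.5 GeV

def sigma_B(k):
    """Break-up cross section of Eq. (1), in GeV^-2; zero below threshold."""
    x = k / eps0
    return np.where(x > 1.0,
                    (2*np.pi/3) * (32/3)**2 * np.sqrt(2*mu/eps0) / (4*mu**2)
                    * np.clip(x - 1.0, 0, None)**1.5 / x**5,
                    0.0)

def lambda_B(T):
    """Thermally averaged breakup rate of Eq. (2), returned in fm^-1."""
    k = np.linspace(eps0, eps0 + 25*T, 4000)
    rate = (8/np.pi**2) * np.trapz(k**2 * np.exp(-k/T) * sigma_B(k), k)  # GeV
    return rate / hbar_c

for T in (0.25, 0.35, 0.45):    # temperatures in GeV
    lb = lambda_B(T)
    print(f"T = {T*1000:.0f} MeV: lambda_B ~ {lb:.3f} fm^-1, 1/lambda_B ~ {1/lb:.1f} fm")
# With these assumed inputs the breakup times come out at the few-fm scale for the
# higher temperatures, in line with the 1-10 fm range quoted above.
```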
For an estimate of the corresponding cross section for the formation reaction $`\sigma _F(b+\overline{c}\rightarrow B_c+g)`$ we utilized detailed-balance relations. This leads to a finite value of $`\sigma _F`$ at threshold, since it is an exothermic reaction. In the approximation that the massive $`b`$-quarks are stationary, which is expected to be reasonable owing to their energy loss in the hot plasma , the formation rate is then calculated for a thermal distribution of charm quarks:
$$\lambda _F=\langle v_cn_c\sigma _F\rangle =\frac{3}{\pi ^2}\int _0^{\infty }\left(\frac{p}{E_p}\right)p^2dp\,e^{-E_p/T}\sigma _F(p)$$
(3)
where $`E_p=\sqrt{p^2+m_c^2}`$. These formation rates are shown in Fig. 2. They have been calculated for three different values of charm quark mass. It is apparent that the results are quite sensitive to this choice, due to the strong dependence of total charm quark population. The same values of $`m_c`$ have very little effect on the breakup rates, since they only change the overall scale in the geometric factor of the breakup cross section.
Also shown in Fig. 2 are the ratios $`\lambda _B/\lambda _F`$, which in our normalization are related to the bound-state fraction in the equilibrium limit:<sup>2</sup><sup>2</sup>2 This bound-state fraction is reached if the system has enough time in its dynamical evolution to relax to the steady-state solution at each temperature. We have verified that this is roughly the case down to $`T=300`$ MeV, at which point the $`B_c`$ abundance begins to freeze out.
$$R_b\equiv \frac{B_c+B_c^{*}}{b\overline{b}}=\frac{\frac{3}{2}\frac{\lambda _F}{\lambda _B}}{1+\frac{3}{4}\frac{\lambda _F}{\lambda _B}}.$$
(4)
Note that this ratio approaches its upper limit of 2 when the formation rate dominates over the breakup rate. This corresponds to the situation in which every b-quark produced in the initial collisions emerges as a $`B_c`$ bound state.
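The limiting behaviour of Eq. (4) is easy to verify explicitly (our illustration):

```python
# Hedged check of Eq. (4): bound-state fraction versus x = lambda_F/lambda_B.
def R_b(x):
    return 1.5 * x / (1.0 + 0.75 * x)

for x in (0.01, 0.1, 1.0, 10.0, 1e6):
    print(f"lambda_F/lambda_B = {x:g}:  R_b = {R_b(x):.3f}")
# Small x gives R_b ~ (3/2)x; large x saturates at the upper limit R_b = 2,
# where every initially produced b quark ends up in a B_c bound state.
```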
We choose a transition temperature $`T_f=160`$ MeV at which to evaluate the final bound state populations. Here the equilibrium bound state fraction $`R_b`$ drops to as low as several percent, but it is at least a factor of 100 above what one may expect in the no-deconfinement scenario. We have chosen to use the equilibrium ratios although at this final temperature the rates are not sufficient for them to be approached. This provides an even more conservative estimate for the final bound state populations. The corresponding entries in the Table for numbers of $`B_c`$ mesons (labeled QGP + $`c\overline{c}`$ in Chemical Equil.) uses this conservative lower limit estimate.
Implicitly, this analysis uses the full chemical-equilibrium density of c-quarks. To get a more realistic limit we repeated the calculation, using only the initially-produced $`c`$-quarks in the formation rate. From the initial population of 10 $`c\overline{c}`$-pairs produced via nucleon-nucleon collisions in a central Au-Au collision at RHIC, and an initial volume $`V_o=\pi (R_{Au})^2\tau _o`$ with $`\tau _o`$ = 1.0 fm, one concludes that only for initial temperatures $`T_o<300`$ MeV is the initial charm quark density comparable to that for full chemical equilibrium. For an initial temperature $`T_o`$ = 500 MeV, for example, the chemical-equilibrium charm quark density would be about a factor of 40 higher than that actually provided by the initially-produced charm quarks. As the temperature decreases below $`T_o`$, the isentropic expansion $`VT^3=`$ const. leads to a decrease in the $`c`$-quark density proportional to $`T^3`$, rather than the $`e^{-m_c/T}`$ behavior of chemical equilibrium. We have verified that the rates of both charm annihilation and production in a deconfined state for $`T<300`$ MeV then lead to charm quark occupancies which exceed those for chemical equilibrium as one approaches the transition point . Fig. 3 displays a comparison of the chemical-equilibrium charm quark densities and those resulting from a constant number of initially-produced charm quarks with isentropic expansion.
These more realistic charm quark densities are used to recalculate the formation rates, and the resulting ratios $`\lambda _F/\lambda _B`$ are shown in Fig. 4 for several values of initial temperature $`T_o`$. The last few rows in the Table show the corresponding $`B_c`$ numbers at RHIC in this scenario, where we have used the equilibrium bound state fractions again at a final temperature $`T_f`$ = 160 MeV. They depend quite strongly on the initial temperature, which determines the final charm density through the assumed isentropic expansion.
We are in the process of refining these preliminary results . Initial numerical solutions of the kinetic equations using time-dependent formation and breakup rates indicate the final bound state populations saturate at values appropriate to those for equilibrium temperatures somewhat above the transition values. This would be expected, since the rates at low temperatures are not sufficient to reach the equilibrium solutions before the volume expansion reduces the temperature to even lower values. Also, production and annihilation of additional charm quark pairs is most effective at higher temperatures, which enhances the effective formation rates. Both of these effects will enhance the bound state production fractions for the higher initial temperatures, and reduce it somewhat for lower initial temperatures. However, it appears that the sensitivity to the parameters of the deconfined state will remain, making the $`B_c`$ signal a sensitive probe of QGP.
While the numerical considerations presented here will see considerable refinement in the near future , the firm conclusion we can draw today is that, should a QGP be formed at RHIC, there would be a very significant and observable enhancement of $`B_c`$ formation. The primary mechanism responsible for this enhancement is the interaction of initially-produced bottom and charm quarks, which will not operate in a confining phase. The observation of any $`B_c`$’s at RHIC is thus both a “smoking gun” signal of deconfinement and a probe of the initial temperature of the system and the initial density of deconfined charm.
Acknowledgment: This work was supported by a grant from the U.S. Department of Energy, DE-FG03-95ER40937.
|
no-problem/9907/quant-ph9907061.html
|
ar5iv
|
text
|
# Untitled Document
Comment on “A local hidden variable model of quantum correlations exploiting the detection loophole”.
Recently, N. and B. Gisin presented a local hidden variable (l.h.v.) model exploiting the detection loophole which reproduces exactly the quantum correlations of the singlet state whenever the detector efficiency is less than or equal to 2/3 . The first aim of the present comment is to show that, modulo slight modifications, this model also allows us to simulate the quantum correlations that would be observed in the static realisation of a Franson-type experiment with fully efficient detectors. Such a result was already presented in . The second aim of this comment is to compare the models developed in the references and . We shall show that, despite the strong similarities between the two approaches, only the first one makes it possible to simulate the situation in which the correlations of the singlet state are tested with non-coplanar settings of the polarisers.
First of all, we invite the interested reader to consult at least the references and before reading our comment, because we shall systematically refer to them for technical details, in order not to overload the presentation of this comment.
Remarkably, the procedures followed in these models present strong similarities. In both cases, the starting point is the “linear” model, a well-known l.h.v. model which saturates Bell’s inequalities. In this sense, this model can be considered to furnish one of the “best” simulations of non-local correlations in terms of local ones. It predicts that the correlation function between the values of dichotomic observables measured in two different locations exhibits a linear dependence on the difference of two arbitrary local phases. These phases correspond to the directions of the local polarisers in a Bell-like situation and to phase shifts between the two arms of local interferometers in Franson-like situations . At this level, both models proceed in the following way: 50 % of the values of the hidden variables that contributed to this linear dependence are erased in such a way that the correlation function now exhibits a cosinusoidal dependence on the phase difference, which is proportional to the quantum correlation function of the singlet state. Of course, the price to pay is that 50 % of the coincident detections that appeared in the linear model have now disappeared. In the model of , they are replaced by (conveniently symmetrised between both detectors) single detections. This trick allows the authors, by exploiting the detection loophole, to simulate a situation for which the detector efficiencies are equal to 2/3, in which case one checks directly that the ratio between the rates of single and coincident detections is equal to one, as it must be ($`\frac{2\cdot (1/3)\cdot (2/3)}{(2/3)\cdot (2/3)}=\frac{50\%}{50\%}`$).
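The counting behind the quoted ratio can be checked explicitly (our illustration): with a detector efficiency $`\eta `$ on each side, coincidences occur with probability $`\eta ^2`$ and exactly one click with probability $`2\eta (1-\eta )`$, so the single-to-coincidence ratio $`2(1-\eta )/\eta `$ equals one precisely at $`\eta =2/3`$.

```python
# Hedged check of the single-to-coincidence counting for detector efficiency eta.
def single_to_coincidence_ratio(eta):
    p_coincident = eta * eta             # both detectors click
    p_single = 2 * eta * (1 - eta)       # exactly one detector clicks
    return p_single / p_coincident

print(single_to_coincidence_ratio(2 / 3))   # 1.0, as required by the model
```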
In the case of a Franson-like experiment with perfect detectors, the situation is slightly different: coincident detections occur in 50 % of the cases, and non-coincident detections occur otherwise. The trick here is to follow the same procedure as for the Bell situation with inefficient detectors but, instead of associating the values of the hidden parameters that are erased from the linear model with no-clicks in one detector, it is sufficient to let them correspond to delayed, non-coincident, detections in this detector. The delay time is equal to the difference between the times of flight along the long arm and the short arm of the interferometers present in Franson’s device (see for details). By doing so, the total number of clicks is conserved, which corresponds to the assumption of ideal detection (detector efficiency equal to one), and one gets 50 % of non-coincident detections and 50 % of coincident ones, in accordance with the standard predictions in this case. This is the essence of the model presented in . Note that a symmetrisation procedure is now necessary in order to avoid introducing an artificial temporal asymmetry between the times of appearance of single detections and those of coincident detections. Incidentally, this symmetrisation procedure re-establishes the symmetry between both detectors. Note also that this procedure is valid only if the delayed statistics is the same as the advanced one, which means that the phase settings inside the interferometers used during the experiment are not changed too quickly. This property can be shown to be a particular case of a very general feature: no l.h.v. model exists that could reproduce the statistics of Franson-like experiments whenever the phase settings are changed on a time scale shorter than the time delay between the two arms of the interferometers. This point is discussed with great accuracy in .
At first sight one could think that the Franson-like situation corresponds to an effective detector efficiency equal to $`\sqrt{0.5}`$ (close to 70 %), which is higher than 2/3, but the effective efficiency that can be reached by this procedure (when one erases 50 % of the statistics) is necessarily equal to 2/3, as we already showed. What differentiates the two situations is the appearance of coincident null counts, which have no counterpart in the Franson-like situation, in which the detectors are assumed to be perfect. In that situation, 50 % of the events are good, in the sense that their statistics violates Bell’s inequalities, and 50 % are bad, but bad events ALWAYS come in pairs, whereas in the situation with inefficient detectors, non-detection events occur either in a single detection station OR in both.
This was the first point of our comment: strong analogies exist between l.h.v. models aimed at simulating the violation of Bell’s inequalities in the case of inefficient detectors and l.h.v. models that simulate the statistics of Franson-like experiments. In summary, deleted clicks in the former correspond to delayed or advanced, non-coincident, clicks in the latter.
In fact, the general method sketched here, which consists of starting from the linear model and “erasing” some values of the hidden parameter in order to simulate the (biased) quantum correlations, was already developed earlier. For instance, in the references and , the authors even showed, thanks to an ingenious symmetrisation procedure, how to simulate the statistics obtained with detectors of efficiency equal to 77.80 %, which lies very close to the theoretical upper bound on efficiencies<sup>1</sup><sup>1</sup>1See and for a detailed discussion in the (realistic) case of a non-ideal visibility of the correlation function. (82.83 %). At first sight, the approach followed in and is the more efficient one, in comparison with the treatment presented in (2/3 is less than 4/5). Nevertheless, the possibility of non-coplanar settings of the polarisers in a Bell-like experiment allows us to distinguish the two approaches more finely. In the first model , the hidden variable implemented in the linear model, essentially a direction, is isotropically distributed on the unit sphere, a 2-dimensional surface. In the other approaches , the corresponding hidden variable, essentially an angle, is distributed on the unit circle, a 1-dimensional manifold. In the latter case, it is impossible to apply the model to non-coplanar settings, as we shall now show. The correlation function of the singlet state is cosinusoidal in the relative angle between the directions of the settings of the polarisers. By virtue of the constraints imposed by the requirement of locality, the hidden variable in , which is an angle, must be evaluated relative to an arbitrary reference direction that must be fixed before the particles leave the source. Now, in the case of non-coplanar settings, the triangle inequality on the sphere implies that the relative angle between two directions is, in general, not equal to the difference of the angles between these directions and an a priori fixed reference direction, so that the model does not work in general. As for the model developed in , which does not violate the rotational invariance of the singlet state, the extension to the case of non-coplanar settings is realised without problem.
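The geometric point can be made concrete with a small numerical check (our illustration): for non-coplanar unit vectors, the angle between two settings generally differs from the difference of their angles with respect to a fixed reference direction.

```python
import numpy as np

# Hedged illustration of the triangle-inequality argument: for three non-coplanar
# directions, angle(a, b) is not |angle(a, ref) - angle(b, ref)| in general.
def angle_deg(u, v):
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

ref = np.array([0.0, 0.0, 1.0])                 # fixed reference direction
a = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)    # first polariser setting
b = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)    # second setting, non-coplanar with a and ref

print(angle_deg(a, b))                              # 60 degrees
print(abs(angle_deg(a, ref) - angle_deg(b, ref)))   # 0 degrees: clearly different
```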
It is shown in that one can build analytical l.h.v. models that approach very closely the statistics of the singlet state whenever the detector efficiency is less than or equal to the efficiency threshold (82.83 %), provided one considers coplanar settings of the polarisers only. It is natural to ask whether this is still true when one also considers non-coplanar settings, in which case the number of constraints to be fulfilled increases considerably. It could be that l.h.v. models in which the hidden variables are triplets of orthogonal vectors on the unit sphere (or Bloch sphere) provide good candidates for performing this job. Such a model supplies the best presently available candidate for classical teleportation, in the sense that it minimizes the amount of classical information necessary to simulate quantum teleportation (see and references therein).
Dr. Thomas Durt, post-doctoral fellow of the Fund for Scientific Research (FWO), Flanders;
FUND, V.U.B., Pleinlaan 2, 1050, Brussels, Belgium. e-mail: [email protected]
N. Gisin and B. Gisin, quant-ph/9905018, A local hidden variable model of quantum correlation exploiting the detection loophole. S. Aerts, P. Kwiat, J-A. Larsson and M. Zukowski, quant-ph/9812053, Two-photon Franson-type interference experiments are no tests of local realism. J-A. Larsson, Phys. Lett. A 256, 245-252 (1999), Modeling the singlet state with local variables. E. Santos, Phys. Lett. A 212, 10-14 (1996), Unreliability of performed tests of Bell's inequalities using parametric down-converted photons. N-J. Cerf, N. Gisin, and S. Massar, quant-ph/99061, Classical Teleportation of a Quantum Bit.
Acknowledgements: The author would like to thank M. Zukowski and D. Kaszlikowski for helpful and stimulating discussions during his visit in Gdansk in June 1999, and for financial support of the Flemish-Polish Scientific Collaboration Program No. 007.
|
no-problem/9907/cond-mat9907496.html
|
ar5iv
|
text
|
# The problem of phase breaking in the electronic conduction in mesoscopic systems: a linear-response theory approach
## I INTRODUCTION
The scattering approach to quantum electronic transport in mesoscopic systems was devised by Landauer and later extended by a number of other authors (see, for instance, as representative articles, Refs. and the references contained therein). In an independent-electron picture, it aims at understanding the electric conductance of a sample in terms of its scattering properties. The problem of electric transport is thus converted into that of solving the quantum-mechanical scattering problem of an electron that impinges on the sample through leads that, ideally, extend to infinity, once the experimental environment the sample is attached to in the laboratory is disconnected. An approach to this problem using the methods of linear-response theory (LRT) has been given, for instance, by the authors of Refs. (see also other publications referred to there).
In the original conception of the scattering approach to electronic transport, inelastic electron scattering or other phase-breaking mechanisms are not allowed inside the sample. As a result, the phase of the wave function is completely coherent in that region. Yet, in various circumstances the effects of the electron-electron interaction or the interaction with the phonon field may not be negligible. In a further development of the theory , the single-electron picture is maintained and phase-breaking events in a given region are modelled by attaching the system to a “fake wire”, which in turn is connected to a phase-randomizing reservoir, so that there is no phase coherence in the wave function for an electron entering and exiting the reservoir. The chemical potential of the reservoir is chosen so that the net current along the fake wire vanishes. This model provides sensible answers and has been used, for instance, in the study of electric transport through quantum dots .
A number of authors have attempted to generalize the scattering approach to include inelastic scattering explicitly, instead of modelling it as described above. In Ref. the problem of quantum transport in the presence of phase-breaking scattering is formulated using an exactly soluble model for the electron-phonon interaction and, using linear-response theory, a generalized conduction formula is found. Ref. uses Landauer’s approach and analyzes the effect of a single impurity scatterer, at which both elastic and inelastic processes can occur. The form of the electron-impurity interaction can be quite general; in the weak-scattering limit, in which the Born approximation is invoked, the authors arrive at a generalized conduction formula. The possibility of energy exchange with the scatterer makes Pauli-blocking effects important, and an extension beyond the Born approximation seems difficult.
The authors of Ref. analyze in great generality the problem of quantum-mechanical phase breaking: they consider the interference between the terms of an electron wave function arising from two different electron paths and study the effect on that interference of an “environment” the electron can interact with. In an experiment in which the electron and not the environment is measured, the coordinate of the latter is integrated upon: as a result, the interference is lost if, in the two interfering partial waves, the states of the environment are orthogonal to each other. It is emphasized that, for this to occur, energy exchange between the electron and the environment need not be invoked. That this loss of interference is irretrievable is a quantum-mechanical effect, common to a number of situations, like the two-slit experiment.
In the present paper we plan to incorporate the mechanism of Ref. –briefly described in the previous paragraph– into the LRT approach to transport provided by Refs. , in order to study the problem of phase breaking in the electronic conduction in mesoscopic systems. The emphasis on the mechanism of Ref. for phase breaking is the main difference between the present and previous work on the problem. Another characteristic of the present paper is that the applications are discussed in terms of the scattering matrices of the various impurities (static or non-static), a procedure that has been found advantageous in a number of previous publications (see Refs. and the references contained therein).
Quite generally, we can pose the problem by noting that the electrons, besides suffering elastic collisions with static scatterers, interact with a number of scatterers, or phase breakers (PB), that possess internal degrees of freedom and can live, say, in $`m`$ possible quantum-mechanical states altogether. Even in the absence of the electron-electron Coulomb interaction, the problem is now no longer a single-electron one, but a full many-body problem: one electron is incident on the PB, the latter being in some state $`\mu `$; after the interaction there is a certain probability of finding the PB in state $`\nu `$, and this is what the next incoming electron will see. This memory effect, or, equivalently, the electron-electron interaction induced by the PB, gives rise to a situation similar to the one found in the Kondo problem, and we are bound to find similar complications.
The paper is organized as follows. In order to become acquainted with the physical phenomena produced by a PB we first study, in the next section, the problem of a single electron scattered by two static impurities and a PB. We show how the interference terms that occur with the static impurities alone are affected by the presence of the PB. The discussion parallels that given in Ref. and is done in terms of the scattering matrices of the various impurities.
In Sec. III we pose the conduction problem (a many-electron problem) of an electronic system with static impurities and a PB \[but not subject to a magnetic field, so that we have time-reversal invariance (TRI)\], from the standpoint of LRT. We show that we can make a number of quite general statements. However, we reach a point where we are unable to calculate the conductivity tensor in full generality: we thus resort to a simplified, soluble model that we introduce in Sec. IV. The conductivity tensor is calculated within that model and is found to be expressible entirely in terms of a single-electron picture, i.e. in terms of single-electron Green’s functions. It is then shown that the resulting zero-temperature dc conductance can be expressed in terms of a total transmission coefficient at the Fermi energy $`ϵ_F`$, but now containing a trace over the $`m`$ states of the PB, Eq. (94). Within the restrictions of the model, the result does not depend on the strength of the electron-PB interaction, in contrast with other analyses. We present (see the discussion at the end of Sec. III around Eq. (57)) a speculation as to the validity of our main result beyond the assumptions of the soluble model, in the strict linear-transport regime and above the Kondo temperature (which is taken to be extremely low) associated with the $`m`$-level PB.
In Sec. V we set up a random-matrix description of the electron-PB system, with possible applications to chaotic cavities, in order to calculate the effect of the PB on the average conductance and its fluctuations. The limitations of the model become apparent here.
Our conclusions are discussed in Sec. VI.
## II QUANTUM INTERFERENCE IN A ONE-ELECTRON SCATTERING PROBLEM: THE EFFECT OF A PHASE BREAKER.
We analyze in this section the scattering problem of a single electron interacting with a combination of three scatterers in series: two static ones and a PB in the middle. We show how the interference terms that would occur with the static scatterers alone are affected by the presence of the PB.
For simplicity, the problem is treated as a 1D one. It is described by the Hamiltonian
$$H=\frac{p^2}{2m}+V_1(x)+\underset{\mu ,\nu }{\sum }|\mu \rangle V_{\mu \nu }(x)\langle \nu |+V_3(x).$$
(1)
In this equation, $`V_1(x)`$ and $`V_3(x)`$ are the potentials arising from the two static scatterers, and the third term represents the interaction of the electron with the PB; the $`m`$ states of the latter, denoted by $`\mu `$, $`\nu `$, are degenerate in energy. Below, we shall find it convenient to write our states as $`m`$-component “spinors”, so that $`H`$ acquires a matrix form, where rows and columns are associated with the $`m`$ states of the PB.
Instead of modelling the potentials for the various scatterers, we have found it advantageous to model their scattering properties through the corresponding scattering matrices. In this language, the three scatterers are described by the $`S`$ matrices $`S_1`$, $`S_2`$ and $`S_3`$, respectively. We write the $`S`$ matrix $`S_2`$ for the PB as
$$S_2=\left[\begin{array}{cc}r_2& t_2^{\prime }\\ t_2& r_2^{\prime }\end{array}\right],$$
(2)
where the reflection and transmission matrices (for incidence from the left or from the right, respectively) $`r_2`$, $`r_2^{}`$, $`t_2`$, $`t_2^{}`$ are $`m`$-dimensional, with matrix elements $`r_2^{\mu \nu }`$, etc. The $`S`$ matrices for the elastic scatterers on the left and on the right of the PB are written, respectively, as
$$S_1=\left[\begin{array}{cc}r_1I_m& t_1^{\prime }I_m\\ t_1I_m& r_1^{\prime }I_m\end{array}\right],$$
(3)
$$S_3=\left[\begin{array}{cc}r_3I_m& t_3^{\prime }I_m\\ t_3I_m& r_3^{\prime }I_m\end{array}\right],$$
(4)
where $`r_1`$, … , $`r_3`$, … , are just complex numbers (we are in 1D) and $`I_m`$ is the $`m`$-dimensional unit matrix in the space of the PB states: recall that scatterers 1 and 3 do not change the state of the PB.
The total transmission matrix $`t`$ for the chain of three scatterers in series is given by
$$t=\left(t_3I_m\right)\frac{1}{I_m-r_{12}^{\prime }r_3}t_{12}.$$
(5)
In this equation, $`t_{12}`$ is the transmission matrix for the system formed by $`S_1`$ and $`S_2`$, given by
$$t_{12}=t_2\frac{1}{I_m-\left(r_1^{\prime }\right)r_2}\left(t_1I_m\right).$$
(6)
Similarly, $`r_{12}^{}`$, the reflection matrix for the combination $`S_1`$, $`S_2`$, is given by
$$r_{12}^{\prime }=r_2^{\prime }+t_2\frac{1}{I_m-\left(r_1^{\prime }\right)r_2}\left(r_1^{\prime }I_m\right)t_2^{\prime }.$$
(7)
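A compact numerical sketch of the combination rules (5)-(7) may be helpful (our illustration; the scalar amplitudes and the PB matrices below are arbitrary example choices, not values used later in the paper):

```python
import numpy as np

# Hedged sketch of Eqs. (5)-(7): two static scatterers (scalars times I_m) combined
# in series with an m-state phase breaker.  All amplitudes are illustrative.
m = 2
I = np.eye(m, dtype=complex)

r1p, t1 = 0.6j, 0.8          # r_1', t_1  with |r_1'|^2 + |t_1|^2 = 1
r3, t3 = 0.6, 0.8            # r_3,  t_3

# Phase breaker with r_2 = r_2' = 0 and unitary t_2, t_2' (the choice made below).
t2 = np.array([[0, 1], [1, 0]], dtype=complex)     # sigma_x
t2p = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z

t12 = t2 @ (t1 * I)                                # Eq. (6) with r_2 = 0
r12p = t2 @ (r1p * I) @ t2p                        # Eq. (7) with r_2' = 0
t_tot = (t3 * I) @ np.linalg.inv(I - r12p * r3) @ t12   # Eq. (5)

T_avg = np.trace(t_tot.conj().T @ t_tot).real / m  # transmission averaged over the PB states
T_coh = abs(t3 * t1 / (1 - r1p * r3))**2           # coherent two-scatterer value for comparison
print(T_avg, T_coh)
```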
To be more specific, we now choose $`r_2=r_2^{\prime }=0`$ in the matrix $`S_2`$ of Eq. (2) that defines the PB. Thus the PB will not change the electron momentum; however, we shall see that it reduces, in the electronic current, the interference among the multiply reflected paths occurring between the two elastic scatterers. As a consequence of this choice, the matrices $`t_2`$ and $`t_2^{\prime }`$ are $`m`$-dimensional unitary matrices
$$t_2t_2^{\dagger }=I_m,\qquad t_2^{\prime }t_2^{\prime \dagger }=I_m.$$
(8)
Also, $`t_{12}`$ reduces to
$$t_{12}=t_2t_1$$
(9)
and $`r_{12}^{}`$ to
$$r_{12}^{\prime }=t_2r_1^{\prime }t_2^{\prime }$$
(10)
We thus have, for $`t`$
$`t=\left(t_3t_1\right){\displaystyle \frac{1}{I_m-\left(r_1^{\prime }r_3\right)t_2t_2^{\prime }}}t_2`$
$$=\left(t_3t_1\right)t_2\frac{1}{I_m-\left(r_1^{\prime }r_3\right)t_2^{\prime }t_2}=\left(t_3t_1\right)t_2\frac{1}{I_m-au},$$
(11)
where we have defined the complex number
$$a=r_1^{\prime }r_3=\left|a\right|\mathrm{exp}(i\rho )$$
(12)
and the unitary matrix
$$u=t_2^{\prime }t_2.$$
(13)
The total transmission matrix $`t`$ of Eq. (11) is $`m`$-dimensional. Its element $`t^{\mu \nu }`$ gives the probability amplitude for the process: {the electron comes from the left, the PB being in state $`\nu `$} $`\rightarrow `$ {the electron is transmitted to the right, the PB being shifted to the state $`\mu `$}. The corresponding probability is $`T^{\mu \nu }=|t^{\mu \nu }|^2`$. Now, $`T^\nu =\underset{\mu }{\sum }T^{\mu \nu }`$ is the transmission probability when the PB is initially in state $`\nu `$ and the PB is not observed; $`T^\nu `$ can be written as
$`T^\nu =(t^{\dagger }t)^{\nu \nu }`$
$$=T_3T_1\left[\frac{1}{I_m-a^{\ast }u^{\dagger }}\frac{1}{I_m-au}\right]^{\nu \nu },$$
(14)
where Eq. (8) was used. Let us emphasize that the superscript $`\nu `$ specifies the initial pure state of the PB. If, however, the PB is initially in a mixed state (a situation of particular interest for the conductance problem, Sec. IV 1), then an additional sum over $`\nu `$ is needed. Assuming that the PB can be found, with equal probability, in each of its $`m`$ states, one obtains for the transmission coefficient $`T`$:
$$T=\frac{1}{m}\underset{\nu }{\sum }T^\nu =\frac{1}{m}tr(t^{}t)$$
(15)
$$=T_1T_3\frac{1}{m}tr\left[\frac{1}{I_m-a^{}u^{}}\frac{1}{I_m-au}\right].$$
(16)
It is instructive to expand $`t`$ of Eq. (11) as the power series
$$t=\left(t_3t_1\right)t_2\left[I_m+au+a^2u^2+\mathrm{\cdots }\right].$$
(17)
This series can be easily interpreted in terms of the multiple scattering processes that occur between the two elastic scatterers, influenced by the PB in the middle. The transmitted wave function is a linear combination of all these terms, arising from internal multiple reflections.
We now study a number of particular cases.
### A The Case m=1
We set
$$t_2=t_2^{}=1.$$
(18)
From Eq. (11) we have
$$t=\frac{t_3t_1}{1-r_1^{}r_3}=\frac{t_3t_1}{1-a}.$$
(19)
In this case, scatterer 2 is not a PB, but a static scatterer: it is thus like having just the two elastic scatterers \[more generally, we could choose $`t_2=e^{i\alpha },t_2^{}=e^{i\beta }`$; then the effect of scatterer 2 would simply be the addition of the relative phase $`(\alpha +\beta )`$ between the original scatterers 1 and 3; that extra phase could equally well be obtained, for instance, by setting the two elastic scatterers a distance $`d`$ farther apart, where $`kd=(\alpha +\beta )`$\]. The transmission probability discussed above is
$`T^1=T={\displaystyle \frac{T_3T_1}{\left|1-a\right|^2}}=T_3T_1\left|{\displaystyle \frac{1+a}{1-a^2}}\right|^2`$
$$=T_3T_1\frac{1+\left|a\right|^2+2\mathrm{Re}a}{\left|1-a^2\right|^2}\equiv T_{coh}.$$
(20)
This result will be termed the fully coherent response $`T_{coh}`$.
### B The Case m=2
Now the PB has two (orthogonal) states. We follow the various multiply scattered terms occurring between the two elastic scatterers –as given by Eq. (17)– in order to understand more closely what the PB does to them. We consider two examples.
1. Let
$$t_2=t_2^{}=\sigma _x=\left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right],$$
(21)
so that, from Eq. (13)
$$u=t_2^{}t_2=I_2=\left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right].$$
(22)
We examine the multiple-scattering series (17). In each passage through the PB, the state of the latter is shifted to the orthogonal state. But, after the pair of reflections described by the product $`a=r_1^{}r_3`$, the PB is visited twice and is then back to the original state. In other words, in each passage, the PB exactly undoes what it did in the previous one. This is the significance of $`u=t_2^{}t_2=I_2`$ in Eq. (22).
We thus find for $`t`$, Eq. (11)
$$t=\left(t_3t_1\right)t_2\frac{1}{1-r_1^{}r_3},$$
(23)
which leads to Eq. (20), exactly as for the case with no PB.
2. Let
$$u=t_2^{}t_2=\left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right]=\sigma _x.$$
(24)
This could be obtained, for instance, with the choices
$$t_2^{}=-i\sigma _y,\qquad t_2=\sigma _z,$$
(25)
or
$$t_2=t_2^{}=\frac{1}{2}\left[\begin{array}{cc}1+i& 1-i\\ 1-i& 1+i\end{array}\right].$$
(26)
Now $`u`$ shifts the two PB states: i.e., state $`|1\rangle `$ is shifted to $`|2\rangle `$ and vice versa. This fact has important consequences. The multiple-scattering series (17) for $`t`$ now gives
$`t=\left(t_3t_1\right)t_2`$
$$\left[I_2+a\sigma _x+a^2I_2+a^3\sigma _x+\mathrm{\cdots }\right],$$
(27)
which divides naturally into an even-order and an odd-order contribution, which can be summed up to give
$$t=\left(t_3t_1\right)t_2\left[\frac{1}{1-a^2}I_2+\frac{a}{1-a^2}\sigma _x\right].$$
(28)
Suppose that in the incident state the electron comes from the left and the PB is in the pure state
$$|0\rangle =\left[\begin{array}{c}\alpha \\ \beta \end{array}\right],$$
(29)
say. The transmitted wave function on the right is thus
$`|\mathrm{\Psi }_k\rangle _{trans}^0={\displaystyle \frac{t_3t_1}{2(1-a^2)}}e^{ikx}`$
$$\times \left[\begin{array}{c}(1+a)(\alpha +\beta )+(1-a)(\alpha -\beta )i\\ (1+a)(\alpha +\beta )-(1-a)(\alpha -\beta )i\end{array}\right].$$
(30)
The transmission probability, that we call $`T^0`$, is given by
$$T^0=T_3T_1\frac{1+\left|a\right|^2+2\mathrm{R}\mathrm{e}(\alpha \beta ^{})\cdot 2\mathrm{R}\mathrm{e}(a)}{\left|1-a^2\right|^2}.$$
(31)
In the absence of the PB, on the other hand, we have the fully coherent response $`T_{coh}`$ of Eq. (20). The interference term $`2Re(a)`$ in Eq. (20) has been reduced, in Eq. (31), by a factor whose magnitude $`\left|2\mathrm{R}\mathrm{e}(\alpha \beta ^{})\right|\le 1`$. The effect of the PB, and of having measured the electron but not the PB, has thus been to decrease the magnitude of the interference term. We stress again that this effect is there even if the incident state is the pure state (29). In particular, for either pure state $`\alpha =1`$, $`\beta =0`$, or $`\alpha =0`$, $`\beta =1`$, we obtain, in the notation introduced right after Eq. (13)
$$T^1=T^2=T_3T_1\frac{1+\left|a\right|^2}{\left|1-a^2\right|^2},$$
(32)
where the interference term in the numerator is absent. This last result could be obtained directly from Eq. (14). For a mixture of these two states we thus obtain the same answer, i.e. $`T=T^1=T^2`$.
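As a quick numerical illustration of Eqs. (20), (31) and (32) (our own sketch; the values of $`a`$ and of $`T_1=T_3`$ are arbitrary choices), one can see directly how the prepared PB state $`(\alpha ,\beta )`$ controls how much of the interference term survives:

```python
import numpy as np

T1 = T3 = 0.8
a = 0.6 * np.exp(0.3j)                    # a = r1' r3, an arbitrary illustrative value

T_coh = T1 * T3 * (1 + abs(a)**2 + 2 * a.real) / abs(1 - a**2)**2     # Eq. (20)

def T0(alpha, beta):
    """Transmission with the PB prepared in the pure state (alpha, beta), Eq. (31)."""
    interference = 2 * (alpha * np.conj(beta)).real * 2 * a.real
    return T1 * T3 * (1 + abs(a)**2 + interference) / abs(1 - a**2)**2

print(T_coh)                               # full interference
print(T0(1 / np.sqrt(2), 1 / np.sqrt(2)))  # equals T_coh: this PB state preserves the interference
print(T0(1.0, 0.0))                        # Eq. (32): interference term absent
```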
### C The Case $`m\rightarrow \mathrm{\infty }`$
Let us inquire as to what kind of a PB would lead to the “classical” composition rule for the two elastic scatterers, i.e. to the equation
$$T=\frac{T_1T_3}{1-R_1R_3}.$$
(33)
We shall see that for this to occur we need a PB with many states, i.e. $`m\rightarrow \mathrm{\infty }`$. We analyze two possibilities.
1. Choose
$$(t_2)_{\mu \nu }=(t_2^{})_{\mu \nu }=\delta _{\nu ,\mu -1}.$$
(34)
For finite $`m`$ we use periodic boundary conditions, i.e. $`m+1\equiv 1`$. But below we are interested in $`m\rightarrow \mathrm{\infty }`$. Assume that the electron is impinging from the left and the PB is initially, say, in its first state $`|1\rangle `$. Each passage of the electron through the PB will flip the latter to its next (orthogonal) state. Therefore, from the expansion (17), the transmitted wavefunction is:
$$|\mathrm{\Psi }_k\rangle _{trans}^1=e^{ikx}t_3t_1[|2\rangle +a|4\rangle +a^2|6\rangle +\mathrm{\cdots }]$$
(35)
It is now clear that the transmission coefficient $`T^1`$ for the electron (without observing the PB) is given by Eq. (33).
2. We now discuss a model that allows varying the degree of dephasing in a continuous fashion, thus permitting the description of the crossover between the fully coherent response (20) and the fully incoherent, or classical, one (33).
We choose $`t_2=t_2^{}=exp(iH/2)`$ where $`H`$ is an $`m\times m`$ Hermitean matrix. Its eigenvalues $`\theta `$ are chosen so that their density $`mg(\theta )`$ has width $`\lambda `$ (a trivial way to get that is to take a diagonal matrix and put the required spectrum by hand, then make an arbitrary unitary transformation). Obviously, the eigenvalues of $`t_2^2`$ will be $`exp(i\theta )`$, where $`\theta `$ is an eigenvalue of $`H`$. Assuming that the PB is in a mixed state and, thus, using Eq. (15), we have
$$T=T_1T_3\frac{1}{m}\underset{r=1}{\overset{m}{\sum }}\frac{1}{|1-ae^{i\theta _r}|^2},$$
(36)
which in the $`m\rightarrow \mathrm{\infty }`$ limit can be written as
$$T=T_1T_3\int _0^{2\pi }\frac{g(\theta )d\theta }{|1-ae^{i\theta }|^2}.$$
(37)
Rather than attempt to do the last integral exactly for a given $`g(\theta )`$, it is very instructive to expand, as in Eq. (17), each of the two factors in the denominator as a sum of powers such as $`a^k\mathrm{exp}(ik\theta )`$, and the complex conjugate, $`\left(a^{}\right)^{k^{}}\mathrm{exp}(-ik^{}\theta )`$. All the “diagonal” terms give the classical result, Eq. (33), as above. The mixed terms with $`k\ne k^{}`$ give the interference. With vanishing $`\lambda `$, we get the full interference terms. Otherwise the interference terms are killed once $`|k-k^{}|\lambda >O(1)`$. Thus $`1/\lambda `$ plays the role of the dephasing length, in units of the distance between the static scatterers.
It is possible, using the above, to analyze a number of interesting physical situations. Consider $`a`$ real and close to unity, i.e. $`a=1-\delta `$, with $`0<\delta \ll 1`$ and $`g(\theta )`$ peaked at $`\theta =0`$. Take, for example, the peak of the resonant transmission for the fully coherent case, $`\lambda =0`$. The height of the resonance is $`1/\delta ^2`$. The relevant number of bounces (the number of significant terms in the series obtained upon expansion of $`1/(1-ae^{i\theta })`$ needed to form the resonance) is $`1/\delta `$. Now introduce a finite $`\lambda `$. If $`\lambda \ll \delta `$, it will affect only distant (i.e. further than $`1/\delta `$) terms in the series and thus will have no effect on the height of the resonance. Dephasing starts to “hurt” the peak once $`\lambda \gtrsim \delta `$.
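The crossover just described can be illustrated numerically. The sketch below (ours; the Gaussian form assumed for $`g(\theta )`$ and the parameter values are illustrative choices) evaluates Eq. (37) and interpolates between the fully coherent result (20) at $`\lambda =0`$ and the classical composition rule (33) at large $`\lambda `$:

```python
import numpy as np

R = 0.9                    # common reflectivity of the two elastic scatterers
T1 = T3 = 1.0 - R
a = R                      # |a| = sqrt(R1*R3) = R, phases chosen so that a is real

theta = np.linspace(-np.pi, np.pi, 20001)
dtheta = theta[1] - theta[0]

def T_of_lambda(lam):
    """Eq. (37) with an assumed Gaussian g(theta) of width lam, normalized on [-pi, pi]."""
    if lam == 0.0:
        return T1 * T3 / (1.0 - a) ** 2            # Eq. (20) on resonance (a real)
    g = np.exp(-0.5 * (theta / lam) ** 2)
    g /= g.sum() * dtheta
    return T1 * T3 * np.sum(g / np.abs(1.0 - a * np.exp(1j * theta)) ** 2) * dtheta

T_classical = T1 * T3 / (1.0 - R * R)              # Eq. (33)
for lam in (0.0, 0.01, 0.1, 1.0, 10.0):
    print(lam, T_of_lambda(lam))
print("classical:", T_classical)
```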
## III THE CONDUCTION PROBLEM IN THE PRESENCE OF A PHASE BREAKER IN LINEAR-RESPONSE.
In this section we discuss the conduction problem –a many-electron problem– in the presence of a PB from the standpoint of linear-response theory (LRT) , following the random phase approximation (RPA) scheme developed in Refs. and . The system to be studied has the geometry shown in Fig. 1: the constriction represents the sample, where we allow for the presence of a PB. As usual, the expanding horns represent the external leads that, in a laboratory setup, are attached to macroscopic bodies.
When no external voltage is applied, the whole system is in equilibrium and is described by the unperturbed Hamiltonian
$`H_0={\displaystyle \underset{i}{\sum }}\left[{\displaystyle \frac{p_i^2}{2m}}-e{\displaystyle \underset{I}{\sum }}{\displaystyle \frac{e_I}{\left|𝐫_i-𝐑_I\right|}}-e{\displaystyle \underset{j}{\sum }}{\displaystyle \frac{e_j}{\left|𝐫_i-𝐑_j\right|}}\right]`$ (38)
$`+{\displaystyle \underset{i<j}{\sum }}{\displaystyle \frac{e^2}{\left|𝐫_i-𝐫_j\right|}}+{\displaystyle \underset{\mu =1}{\overset{m}{\sum }}}|\mu \rangle E_\mu \langle \mu |+{\displaystyle \underset{\mu ,\nu =1}{\overset{m}{\sum }}}|\mu \rangle V_{\mu \nu }(𝐫_i)\langle \nu |`$ (39)
(40)
Here, $`e`$ is the electronic charge, $`𝐫_i`$ the position variable of the $`i`$-th conduction electron, $`𝐑_I`$ the (static) position of the $`I`$-th ion with positive charge $`e_I`$ (screened by the bound electrons), and $`e_j`$ and $`𝐑_j`$ the charge and position of the $`j`$-th impurity, considered to be static. Thus $`H^0`$ contains the kinetic energy, the interaction of the electrons with the ions and the static impurities, as well as the electron-electron interaction, for which no approximation is assumed for the time being. It also contains, in the last two terms of Eq. (40), the intrinsic Hamiltonian of the PB and its interaction with the electrons, of the type introduced in the previous section for one electron. We do not consider a static magnetic field to be present, so that the problem is time-reversal invariant.
We denote by $`\rho _0(𝐫)`$ and $`\varphi _0(𝐫)`$ the equilibrium charge density and potential, respectively, that satisfy Poisson’s equation
$$\nabla ^2\varphi _0(𝐫)=-4\pi \rho _0(𝐫).$$
(41)
Application of an external voltage with frequency $`\omega `$ will cause a current density $`J_\alpha ^\omega (𝐫)`$ in the system. It will also lead to a change in the charge density and in the potential. We denote these changes by $`\delta \rho ^\omega (𝐫)`$ and $`\delta \varphi ^\omega (𝐫)`$ and emphasize that these are full changes, with respect to the equilibrium values $`\rho _0(𝐫)`$ and $`\varphi _0(𝐫)`$. In the RPA approximation, which is employed in the present paper, there is no need to separate the full changes into external and induced parts. The essence of the RPA is to omit the electron-electron interactions from the Hamiltonian $`H_0`$ but to use, instead, the full potential $`\delta \varphi ^\omega (𝐫)`$ (rather than the external one) in the formulation of LRT. The full potential is then determined self-consistently from the Poisson equation. We thus have the following three equations:
$$J_\alpha ^\omega (𝐫)=\int d^3𝐫^{}\mathrm{\Gamma }_\alpha ^\omega (𝐫,𝐫^{})\delta \varphi ^\omega (𝐫^{}),$$
(42)
$$\delta \rho ^\omega (𝐫)=\int d^3𝐫^{}\mathrm{\Pi }^\omega (𝐫,𝐫^{})\delta \varphi ^\omega (𝐫^{}),$$
(43)
$$\nabla ^2\delta \varphi ^\omega (𝐫)=-4\pi \delta \rho ^\omega (𝐫)$$
(44)
with the kernels
$$\mathrm{\Gamma }_\alpha ^\omega (𝐫,𝐫^{})=\frac{i}{\mathrm{\hbar }}\int _0^{\mathrm{\infty }}𝑑\tau e^{i\omega \tau -\gamma \tau }<[\stackrel{~}{j}_\alpha (𝐫,\tau ),\stackrel{~}{\rho }(𝐫^{},0)]>_{00}$$
(45)
$$\mathrm{\Pi }^\omega (𝐫,𝐫^{})=\frac{i}{\mathrm{\hbar }}\int _0^{\mathrm{\infty }}𝑑\tau e^{i\omega \tau -\gamma \tau }<[\stackrel{~}{\rho }_{el}(𝐫,\tau ),\stackrel{~}{\rho }_{el}(𝐫^{},0)]>_{00},$$
(46)
where the limit $`\gamma \rightarrow 0`$ is implied. The expectation values of the interaction-picture operators are taken with respect to the Hamiltonian
$$H_{00}=H_0-\underset{i<j}{\sum }\frac{e^2}{\left|𝐫_i-𝐫_j\right|},$$
(47)
with the electron-electron interaction switched off.
The Poisson equation (44) should be supplemented by boundary conditions. Sufficiently far inside the horns the local current density $`J_\alpha ^\omega (𝐫)`$ becomes vanishingly small, so that these distant regions remain practically in equilibrium. This means that well inside the horns $`\delta \varphi ^\omega (𝐫)`$ approaches constant values, $`\delta \varphi ^\omega (-\mathrm{\infty })`$ and $`\delta \varphi ^\omega (+\mathrm{\infty })`$. The difference
$$\delta \varphi ^\omega (-\mathrm{\infty })-\delta \varphi ^\omega (+\mathrm{\infty })=V^\omega $$
(48)
is the total potential drop on the system sample+horns ($`V^\omega `$ generally does not coincide with the external EMF, since some voltage drop can occur in other parts of the circuit, e.g., near the points where the external EMF source is connected to the horns). At the internal boundary of the system one should require zero current density normal to the boundary
$$J_n(𝐫_s)=0,$$
(49)
$`𝐫_s`$ being a point on the internal boundary.
Solving (42), (43), (44) with the boundary conditions (48), (49) would enable one to determine the charge, current density and field distribution within the system. One could then compute the total current $`I^\omega `$ and, thus, the conductance $`G^\omega =I^\omega /V^\omega `$. This is a formidable problem. A great simplification, however, occurs in the dc limit, thanks to a result obtained in Ref. that states that, in that limit and for fixed $`𝐫`$, $`𝐫^{}`$, the conductivity tensor is divergenceless; i.e.
$$\nabla _\beta ^{}\sigma _{\alpha \beta }^\omega (𝐫,𝐫^{})=0,$$
(50)
The conductivity tensor relates current density to the electric field \[rather than to the potential, as in equation (42)\]; i.e.
$$J_\alpha ^\omega (𝐫)=-\int d^3𝐫^{}\sigma _{\alpha \beta }^\omega (𝐫,𝐫^{})\nabla _\beta ^{}\delta \varphi ^\omega (𝐫^{}),$$
(51)
and is given in terms of the current-current correlation function as
$`\sigma _{\alpha \beta }^\omega (𝐫,𝐫^{})`$ $`={\displaystyle \frac{1}{\mathrm{\hbar }\omega }}{\displaystyle \int _0^{\mathrm{\infty }}}𝑑\tau e^{i\omega \tau -\gamma \tau }<[\stackrel{~}{j}_\alpha (𝐫,\tau ),\stackrel{~}{j}_\beta (𝐫^{},0)]>_{00}`$ (53)
$`-{\displaystyle \frac{e^2n_0(𝐫)}{im\omega }}\delta (𝐫-𝐫^{})\delta _{\alpha \beta },`$
$`n_0(𝐫)`$ being the electron density in equilibrium. Integrating Eq. (51) by parts shows that, in the dc limit, the current density is insensitive to the full potential profile within the sample and depends only on the total potential drop between the two distant surfaces well inside the horns (Fig. 1):
$$J_\alpha ^{\omega \rightarrow 0}(𝐫)=-\left[\delta \varphi ^{\omega \rightarrow 0}(+\mathrm{\infty })\mathrm{\Gamma }_\alpha ^+(𝐫)+\delta \varphi ^{\omega \rightarrow 0}(-\mathrm{\infty })\mathrm{\Gamma }_\alpha ^{-}(𝐫)\right],$$
(54)
where
$$\mathrm{\Gamma }_\alpha ^\pm (𝐫)=\int _{S_\pm }𝑑S_\beta ^{}\sigma _{\alpha \beta }^{\omega \rightarrow 0}(𝐫,𝐫^{}).$$
(55)
This observation paves the way for a derivation of a Landauer formula from the LRT, and demonstrates that interactions, within the RPA, do not affect the conductance. This conclusion, known for purely elastic scatterers , remains valid also in the presence of the PB.
For zero temperature, and in the absence of PB, the answer is the well known one
$$G=\frac{e^2}{h}\underset{ab}{\sum }\left|t_{ab}\right|^2,$$
(56)
where $`t`$ is the single-particle transmission matrix at the Fermi energy, from well inside the left horn to deep inside the right one.
When the system is described by the unperturbed Hamiltonian $`H_{00}`$ of Eq. (47), the electrons can change the state of the PB through the e-PB interaction represented by the last term in Eq. (40); the coupling of the electrons with the PB has the consequence that the former are no longer independent. This results in a complicated structure for the $`N`$-electron eigenstates of $`H_{00}`$, and the calculation of $`\sigma _{\alpha \beta }^0(𝐫,𝐫^{})`$ is, in principle, no longer feasible along the lines followed in the absence of the PB. In the next section we discuss a model for the e-PB interaction that does lead to a solution along similar lines. One might speculate on physical grounds that our final result, equation (94), should be valid under the following assumptions. One should first take the temperature to be very low, but much larger than the Kondo temperature, $`T_K`$, due to the interaction of the PB with the electron gas. Furthermore, one should stay in the strict linear response regime, where for a finite distance $`L`$ between the two reservoirs, the current satisfies:
$$e/I\gg L/v_F.$$
(57)
This assures that the separation in time between consecutive electrons participating in the transport is so large that the first electron reaches the downstream reservoir and thermalizes there before the next electron starts its journey. This can be expected to eliminate the electron-electron interaction mediated by the PB, which is a coherent second-order process.
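As an order-of-magnitude illustration of condition (57) (ours; the values of $`L`$ and $`v_F`$ are assumed, not taken from the text):

```python
e   = 1.602176634e-19   # C
v_F = 1.0e6             # m/s, an assumed Fermi velocity
L   = 1.0e-6            # m, an assumed distance between the reservoirs

I_max = e * v_F / L     # current scale below which e/I >> L/v_F holds
print(f"L/v_F = {L / v_F:.1e} s ; linear response requires I << {I_max * 1e6:.2f} microamp")
```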
## IV A SOLUBLE MODEL
We assume that $`H_{00}`$ of Eq. (47) has such a structure that, in a suitable PB basis, labelled by $`\sigma `$, $`\sigma ^{}`$ below, it acquires the diagonal form
$$\overline{H}_{00}^{\sigma \sigma ^{}}=\left\{\underset{i}{}\left[\frac{p_i^2}{2m}+U^\sigma (𝐫_i)\right]+\mathrm{\Delta }^\sigma \right\}\delta _{\sigma \sigma ^{}}.$$
(58)
Just as we did in Sec. II, we write the Hamiltonian in matrix form, where rows and columns, i.e. $`\sigma `$, $`\sigma ^{}`$, label the $`m`$ states of the PB. The bar in $`\overline{H}_{00}^{\sigma \sigma ^{}}`$indicates that the Hamiltonian is expressed in the new PB basis, that we shall call, for short, the D-basis (D for diagonal), as opposed to the original, or ND-basis.
In the ND basis, $`H_{00}`$ is obtained from $`\overline{H}_{00}`$ by means of a constant, real (in order to preserve reality of the Hamiltonian, and hence time-reversal invariance) orthogonal transformation $`O`$ (an $`m`$-dimensional matrix), i.e.
$$H_{00}=O\overline{H}_{00}O^T,$$
(59)
or, in terms of its matrix elements
$$H_{00}^{\mu \nu }=\underset{i}{}\left[\frac{p_i^2}{2m}\delta _{\mu \nu }+\underset{\sigma =1}{\overset{m}{}}O^{\mu \sigma }U^\sigma (𝐫_i)O^{\nu \sigma }\right]+\underset{\sigma =1}{\overset{m}{}}O^{\mu \sigma }\mathrm{\Delta }^\sigma O^{\nu \sigma }.$$
(60)
We choose $`\mathrm{\Delta }^\sigma `$ constant, i.e. independent of $`\sigma `$ (and hence we set it equal to zero), so as not to have constant terms in the off-diagonal matrix elements $`\mu \ne \nu `$. In the language of Eq. (40), the energies $`E_\mu `$ of the PB states are degenerate and set equal to zero.
The Schrödinger equation in the D and ND-basis is
$`\overline{H}_{00}|\overline{\mathrm{\Psi }}`$ $`=`$ $`E|\overline{\mathrm{\Psi }},`$ (61)
$`H_{00}|\mathrm{\Psi }`$ $`=`$ $`E|\mathrm{\Psi },`$ (62)
respectively, with
$$|\mathrm{\Psi }=O|\overline{\mathrm{\Psi }}.$$
(63)
In the D-basis, the Hamiltonian $`\overline{H}_{00}`$ can be written as
$$\overline{H}_{00}^{\sigma \sigma ^{}}=\underset{i}{}\overline{H}_{00}^{\sigma \sigma ^{}}(i)=\overline{H}_{00}^\sigma \delta _{\sigma \sigma ^{}},$$
(64)
with
$$\overline{H}_{00}^{\sigma \sigma ^{}}(i)=\overline{H}_{00}^\sigma (i)\delta _{\sigma \sigma ^{}}$$
(65)
and
$$\overline{H}_{00}^\sigma (i)=\frac{p_i^2}{2m}+U^\sigma (𝐫_i).$$
(66)
In the ND basis,
$$H_{00}^{\mu \nu }=\underset{i}{}H_{00}^{\mu \nu }(i),$$
(67)
with
$$H_{00}^{\mu \nu }(i)=\frac{p_i^2}{2m}\delta _{\mu \nu }+U^{\mu \nu }(𝐫_i)$$
(68)
and
$$U^{\mu \nu }(𝐫_i)=\underset{\sigma =1}{\overset{m}{}}O^{\mu \sigma }U^\sigma (𝐫_i)O^{\nu \sigma }.$$
(69)
If the matrix $`M^{\mu \sigma }=\left[O^{\mu \sigma }\right]^2`$ has nonzero determinant, we can find the $`U^\sigma (𝐫_i)`$’s ($`\sigma =`$1,$`\mathrm{}`$, $`m`$) that reproduce any given set of diagonal potentials $`U^{\mu \mu }(𝐫_i)`$’s ($`\mu =`$1,$`\mathrm{}`$, $`m`$); but then we are left with no freedom to select the off-diagonal interactions $`U^{\mu \nu }(𝐫_i)`$ ($`\mu \nu `$), which become uniquely determined by the $`U^\sigma (𝐫_i)`$’s. So, it is clear that the matrix elements $`U^{\mu \nu }(𝐫_i)`$ of Eq. (69) show strong correlations among themselves. These correlations make it possible to find a D-basis in which the Hamiltonian takes the form of Eqs. (64)-(66) and the problem breaks up into $`m`$ independent single-particle ones: this is the feature that makes the problem soluble. Solving the Schrödinger equation (61) thus reduces to solving the $`m`$ single-particle Schrödinger equations
$$\left[\frac{p_i^2}{2m}+U^\sigma (𝐫_i)\right]\psi _k^\sigma (𝐫_i)=ϵ_k^\sigma \psi _k^\sigma (𝐫_i),\sigma =1,\mathrm{},m.$$
(70)
We shall assume that none of the $`U^\sigma (𝐫_i)`$’s admits bound states.
In the D-basis, the states
$$|\overline{\mathrm{\Psi }}_k^\sigma =\left[\begin{array}{c}0\\ \mathrm{}\\ 0\\ \psi _k^\sigma (𝐫_i)\\ 0\\ \mathrm{}\\ 0\end{array}\right],\sigma =1,\mathrm{},m,$$
(71)
with a nonzero value in the $`\sigma `$-th entry, form a complete set of orthonormalized (in a $`\delta `$-function sense) single-particle states, eigenfunctions of the single-particle Hamiltonian matrix (65).
In the D-basis, the $`S`$ matrix associated with a scattering solution has the form
$$\overline{S}^{\sigma \sigma ^{}}=\overline{S}^\sigma \delta _{\sigma \sigma ^{}}.$$
(72)
If the problem admits $`N`$ open spatial channels, each $`S^\sigma `$ is $`2N`$-dimensional. In the ND-basis $`S`$ takes the form
$$S^{\mu \nu }=\underset{\sigma =1}{\overset{m}{}}O^{\mu \sigma }\overline{S}^\sigma O^{\nu \sigma }$$
(73)
and is $`2mN`$-dimensional. The number of independent parameters associated with each unitary symmetric matrix $`\overline{S}^\sigma `$ is $`N(2N+1)`$ and is thus $`mN(2N+1)`$ for the total $`\overline{S}`$ and hence for $`S`$. This makes it clear that the $`S`$ matrices allowed by our soluble model do not have the “generic” structure used in some of the considerations of Sec. II, but have a rather restricted one: in fact, a generic $`mN`$-dimensional $`S`$-matrix has a larger number, i.e. $`mN(2mN+1)`$, of independent parameters.
For two particles, say, we have the states (not antisymmetrized yet), with $`\sigma =\pm 1`$
$$|\overline{\mathrm{\Psi }}_{k_1k_2}^\sigma =\left[\begin{array}{c}0\\ \mathrm{}\\ 0\\ \psi _{k_1}^\sigma (𝐫_i)\psi _{k_2}^\sigma (𝐫_i)\\ 0\\ \mathrm{}\\ 0\end{array}\right].$$
(74)
To antisymmetrize we use a second-quantization language, so that for $`N`$ electrons we have the states
$$|\overline{\mathrm{\Psi }}_{k_1\mathrm{}k_N}^\sigma =\left[\begin{array}{c}0\\ \mathrm{}\\ 0\\ (c_{k_1}^\sigma )^{}\mathrm{}(c_{k_N}^\sigma )^{}\\ 0\\ \mathrm{}\\ 0\end{array}\right]|0,$$
(75)
with $`(c_k^\sigma )^{}`$ creating one electron in state $`\psi _k^\sigma (𝐫)`$ and $`|0`$ being the electron vacuum.
#### 1 The conductivity tensor and the conductance
The conductivity tensor $`\sigma _{\alpha \beta }^\omega (𝐫,𝐫^{})`$ is given in Eq. (53). The expectation value occurring in that equation has to be understood as
$`[\stackrel{~}{j}_\alpha (𝐫,\tau ),\stackrel{~}{j}_\beta (𝐫^{},0)]_{00}`$
$$=\underset{𝒩M\sigma }{}P(𝒩M;\sigma )𝒩M;\sigma \left|[\stackrel{~}{j}_\alpha (𝐫,\tau ),\stackrel{~}{j}_\beta (𝐫^{},0)]\right|𝒩M;\sigma .$$
(76)
Here, $`|𝒩;\sigma `$ are the states of Eq. (75) in the D PB basis, $`𝒩`$ being the number of electrons and $`M`$ an abbreviation for the configuration $`k_1`$, $`k_2`$, $`\mathrm{}`$, $`k_𝒩`$. The states $`|𝒩M;\sigma `$ are eigenstates of the Hamiltonian $`\overline{H}_{00}`$ of Eq. (64), with the energy $`E_{NM;\sigma }=ϵ_{k_1}^\sigma +\mathrm{}+ϵ_{k_N}^\sigma `$.
The current operator $`\stackrel{~}{j}_\alpha (𝐫,\tau )`$ in the interaction representation is given by
$$\stackrel{~}{j}_\alpha (𝐫,\tau )=e^{\frac{i}{\mathrm{\hbar }}\overline{H}_{00}\tau }j_\alpha (𝐫)e^{-\frac{i}{\mathrm{\hbar }}\overline{H}_{00}\tau }.$$
(77)
In the D PB basis the Hamiltonian $`\overline{H}_{00}`$ has the diagonal form given by Eq. (64)-(66); since the current operator $`j_\alpha (𝐫)`$ does not depend on the PB explicitly, $`\stackrel{~}{j}_\alpha (𝐫,\tau )`$ takes the diagonal form
$$\stackrel{~}{j}_\alpha ^{\sigma \sigma ^{}}(𝐫,\tau )=\stackrel{~}{j}_\alpha ^\sigma (𝐫,\tau )\delta _{\sigma \sigma ^{}},$$
(78)
where
$$\stackrel{~}{j}_\alpha ^\sigma (𝐫,\tau )=e^{\frac{i}{\mathrm{\hbar }}\overline{H}_{00}^\sigma \tau }j_\alpha (𝐫)e^{-\frac{i}{\mathrm{\hbar }}\overline{H}_{00}^\sigma \tau },$$
(79)
$`\overline{H}_{00}^\sigma `$ being given by Eq. (64).
Definition (76) implies, as usual, that the density matrix is diagonal in the representation in which the Hamiltonian is diagonal, with diagonal elements $`P(𝒩M;\sigma )`$. Explicitly, $`P(𝒩M;\sigma )`$ is given by the grand-canonical ensemble (understanding now the labels $`𝒩`$, $`M`$ as the set of occupation numbers $`n_1`$, $`n_2`$, $`\mathrm{}`$ ) as
$$P(n_1,n_2,\mathrm{\dots };\sigma )=\frac{e^{-\beta \sum _{i=1}^{\mathrm{\infty }}n_i(ϵ_i^\sigma -\mu )}}{𝒵(\beta ,\mu )},$$
(80)
where the grand partition function is
$$𝒵(\beta ,\mu )=\underset{\sigma =1}{\overset{m}{}}𝒵(\beta ,\mu ;\sigma ),$$
(81)
with
$$𝒵(\beta ,\mu ;\sigma )=\underset{n_i=0,1}{\sum }e^{-\beta \sum _{i=1}^{\mathrm{\infty }}n_i(ϵ_i^\sigma -\mu )}.$$
(82)
We define the conditional occupation probability
$$P(n_1,n_2,\mathrm{\dots }|\sigma )=\frac{e^{-\beta \sum _{i=1}^{\mathrm{\infty }}n_i(ϵ_i^\sigma -\mu )}}{𝒵(\beta ,\mu ;\sigma )},$$
(83)
the condition being that the PB be precisely in the state $`\sigma `$ of the D basis; its trace is 1. Then
$$P(n_1,n_2,\mathrm{};\sigma )=p(\sigma )P(n_1,n_2,\mathrm{}|\sigma ),$$
(84)
where
$$p(\sigma )=\frac{𝒵(\beta ,\mu ;\sigma )}{𝒵(\beta ,\mu )}$$
(85)
is the probability of finding the PB in state $`\sigma `$, the temperature and chemical potential, not indicated explicitly, being $`\beta ,\mu `$. Of course we have
$$\underset{\sigma =1}{\overset{m}{}}p(\sigma )=1,$$
(86)
and so the trace of (84) is 1.
We can thus write the expectation value (76) as
$`[\stackrel{~}{j}_\alpha (𝐫,\tau ),\stackrel{~}{j}_\beta (𝐫^{},0)]_{00}`$ (87)
$`={\displaystyle \underset{\sigma =1}{\overset{m}{}}}p(\sigma ){\displaystyle \underset{n_1,n_2,\mathrm{}}{}}P(n_1,n_2,\mathrm{}|\sigma )`$ (88)
$`\times n_1,n_2,\mathrm{};\sigma \left|[\stackrel{~}{j}_\alpha ^\sigma (𝐫,\tau ),\stackrel{~}{j}_\beta ^\sigma (𝐫^{},0)]\right|n_1,n_2,\mathrm{};\sigma .`$ (89)
(90)
Thus in the D PB basis the problem breaks up into m independent problems, for the Hamiltonians $`\overline{H}_{00}^\sigma `$ , $`\sigma =1,\mathrm{},m`$. The conductivity tensor $`\sigma _{\alpha \beta }^\omega (𝐫,𝐫^{})`$ of Eq. (53) can thus be written as
$$\sigma _{\alpha \beta }^\omega (𝐫,𝐫^{})=\underset{\sigma =1}{\overset{m}{}}p(\sigma )\sigma _{\alpha \beta }^{\omega ;\sigma }(𝐫,𝐫^{}),$$
(91)
where $`\sigma _{\alpha \beta }^{\omega ;\sigma }(𝐫,𝐫^{})`$ is a conductivity tensor that can be expressed entirely in terms of single-electron Green’s functions for the Hamiltonian $`\overline{H}_{00}^\sigma (i)`$.
We write the conductance $`G`$ in terms of the “dimensionless conductance” $`g`$ as
$$G=\frac{e^2}{h}g.$$
(92)
As $`\omega \rightarrow 0`$ and then the temperature $`\rightarrow 0`$, we find, for “spinless electrons”
$$g=T=\underset{\sigma =1}{\overset{m}{}}p(\sigma )tr\left[(\overline{t}^\sigma )^{}\overline{t}^\sigma \right],$$
(93)
$`\overline{t}^\sigma `$ being the transmission matrix (an $`N\times N`$ block of the matrix $`\overline{S}^\sigma `$ of Eq. (72)) arising from the potential $`U^\sigma (𝐫)`$; the trace in the above equation is over spatial channels, as usual. Should $`U^\sigma (𝐫)=U(𝐫)`$, i.e. independent of $`\sigma `$ (and hence $`U^{\mu \nu }(𝐫)=U(𝐫)\delta _{\mu \nu }`$ in any basis), the above formula would go over into the standard one. In the ND PB basis we finally find ($`a`$, $`b`$ being spatial-channel indices)
$`T={\displaystyle \underset{\mu \nu \nu ^{}}{}}{\displaystyle \underset{ab}{}}\rho _{PB}^{\nu \mu }\left[t_{ab}^{\nu ^{}\mu }\right]^{}t_{ab}^{\nu ^{}\nu }`$
$$=Tr\left(\rho _{PB}t^{}t\right),$$
(94)
the trace being now over the spatial channels and the PB states. We have defined
$$\rho _{PB}^{\nu \mu }=\underset{\sigma }{}O^{\nu \sigma }p(\sigma )O^{\mu \sigma }.$$
(95)
Eq. (94) has the structure of the standard Landauer formula, with an extra average over the PB states. This is also the structure of Eq. (15), that was obtained in the study of a single electron interacting with a PB with equal weights assigned to every PB state.
Should there be circumstances where the various $`p(\sigma )`$ discussed above, at zero temperature, were all equal to $`1/m`$, we could write
$$T=\frac{1}{m}\underset{\mu ,\nu =1}{\overset{m}{}}\underset{ab}{}\left|t_{ab}^{\mu \nu }\right|^2.$$
(96)
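The equivalence between the D-basis average (93) and the ND-basis expression (94)-(95) is a simple linear-algebra identity and can be checked numerically. The sketch below (ours; the per-$`\sigma `$ transmission blocks, weights and orthogonal matrix are randomly generated and purely illustrative) performs that check:

```python
import numpy as np

m, N = 3, 2
rng = np.random.default_rng(0)

# D-basis ingredients: per-sigma transmission blocks, weights p(sigma), orthogonal O
t_bar = [rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)) for _ in range(m)]
p = rng.random(m); p /= p.sum()
O = np.linalg.qr(rng.normal(size=(m, m)))[0]          # real orthogonal matrix

# Eq. (93): D-basis form
T_D = sum(p[s] * np.trace(t_bar[s].conj().T @ t_bar[s]).real for s in range(m))

# Eqs. (73), (94), (95): ND-basis form
t = np.zeros((m, m, N, N), dtype=complex)             # t[mu, nu] is an N x N channel block
for s in range(m):
    t += np.einsum('i,j,ab->ijab', O[:, s], O[:, s], t_bar[s])
rho = np.einsum('is,s,js->ij', O, p, O)               # Eq. (95)
T_ND = np.einsum('nm,kmab,knab->', rho, t.conj(), t).real   # Tr(rho_PB t^dagger t)

print(np.allclose(T_D, T_ND))                          # True
```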
We recall that it is for an e-PB interaction of the type described in Eq. (69) that
1) the many-electron Hamiltonian (with the e-e interaction switched off) breaks up, in the D basis, into $`m`$ independent single-electron Hamiltonians, and hence
2) the conduction problem breaks up in a similar manner and is soluble in terms of single-electron quantities.
We also remind the reader that within our model the single-e-PB $`S`$ matrix, and hence the transmission amplitudes appearing in Eq. (94), are not “generic”, but have a rather restricted structure.
The final answer, though, i.e. Eq. (94), is a very appealing one and is likely to be valid beyond the situation envisaged by the present model, i.e. to cases where the e-PB $`S`$ matrix has a more general structure. However, in such a generic case it is not known to us under what approximations the conduction problem –even with the e-e Coulomb interaction switched off– could still be reduced to independent single-electron problems and thus expressed in terms of single-electron quantities. The best we can do at present is conjecture the validity of Eq. (94) for a generic e-PB $`S`$ matrix, under suitable approximations, as explained at the end of section III around equation (57).
## V A RANDOM-MATRIX MODEL FOR THE SCATTERING MATRIX
In the past, quantum electronic transport in mesoscopic systems has been described successfully in terms of ensembles of single-electron $`S`$ matrices (see, for instance, Refs. ). In particular, it was shown that, in the absence of direct processes, quantum transport through classically chaotic cavities can be studied in terms of the invariant measure for the $`S`$ matrix, which is a precise mathematical formulation of the intuitive notion of “equal-a-priori-probabilities” in $`S`$-matrix space. For TRI systems, like the ones we are studying here, the invariant measure is also known as the Circular Orthogonal Ensemble (COE) . The COE for $`S`$ matrices of dimensionality $`2N`$, $`N`$ being the number of open channels supported by the two leads attached to the cavity, gives, for the ensemble averaged (indicated by brackets $`\mathrm{}`$) spinless conductance and its variance
$$\langle T\rangle =\frac{N^2}{2N+1},$$
(97)
$$var(T)=\frac{N(N+1)^2}{\left(2N+1\right)^2\left(2N+3\right)},$$
(98)
respectively. The $`1`$ in the denominator of Eq. (97) is the weak-localization correction (WLC).
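Equation (97) can also be reproduced by direct sampling. The following Monte Carlo sketch (ours; the construction $`S=U^TU`$ with Haar-distributed $`U`$ is a standard way to generate COE matrices) estimates the average and variance of the spinless conductance:

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed unitary via QR of a complex Ginibre matrix."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))       # fix the phases of R's diagonal

def coe_conductance(N, samples=20000, seed=1):
    rng = np.random.default_rng(seed)
    T = np.empty(samples)
    for i in range(samples):
        U = haar_unitary(2 * N, rng)
        S = U.T @ U                                     # COE: unitary and symmetric
        t = S[N:, :N]                                   # transmission block
        T[i] = np.trace(t.conj().T @ t).real
    return T.mean(), T.var()

print(coe_conductance(2))      # compare with N^2/(2N+1) = 0.8 and the variance of Eq. (98)
```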
We construct here an ensemble of $`S`$ matrices for the system consisting of a single electron and the PB, and, using the conductance formula obtained in the previous section, Eq. (94), we analyze the effect of the PB on the average and variance of the conductance.
Within the model we have been discussing for the e-PB system, we postulate $`m`$ independent COE’s for the $`S`$ matrices $`\overline{S}^\sigma `$ of Eq. (72). Ensemble averaging Eq. (93) we obtain
$`\langle T\rangle _m`$ $`=`$ $`{\displaystyle \underset{\sigma =1}{\overset{m}{\sum }}}p(\sigma )\left\langle tr\left[(\overline{t}^\sigma )^{}\overline{t}^\sigma \right]\right\rangle `$ (99)
$`=`$ $`{\displaystyle \underset{\sigma =1}{\overset{m}{\sum }}}p(\sigma ){\displaystyle \frac{N^2}{2N+1}}`$ (100)
$`=`$ $`{\displaystyle \frac{N^2}{2N+1}},`$ (101)
the same result as in Eq. (97), in the absence of the PB. Thus the model is not generic enough to decrease the WLC. For the variance we obtain
$`\left[var(T)\right]_m`$ $`=`$ $`{\displaystyle \underset{\sigma =1}{\overset{m}{}}}\left[p(\sigma )\right]^2var\left\{tr\left[(\overline{t}^\sigma )^{}\overline{t}^\sigma \right]\right\}`$ (102)
$`=`$ $`{\displaystyle \frac{N(N+1)^2}{\left(2N+1\right)^2\left(2N+3\right)}}{\displaystyle \underset{\sigma =1}{\overset{m}{}}}\left[p(\sigma )\right]^2.`$ (103)
If one PB state $`\sigma _0`$ has probability 1 and all the others $`0`$, $`_{\sigma =1}^m\left[p(\sigma )\right]^2=1`$. In the other extreme case of equiprobable PB states, $`_{\sigma =1}^m\left[p(\sigma )\right]^2=1/m`$. The conductance thus fluctuates less than in the absence of the PB, as expected .
Should the conjecture we made at the end of last section be true, and the result (94) be valid more generally, i.e. for a generic e-PB $`S`$ matrix, we could postulate a COE for the full $`2mN`$-dimensional $`S`$ matrix in the ND basis. We would then obtain, ensemble averaging Eq. (94)
$$\langle T\rangle _m=\underset{\mu \nu \nu ^{}}{\sum }\underset{ab}{\sum }\rho _{PB}^{\nu \mu }\left\langle \left[t_{ab}^{\nu ^{}\mu }\right]^{}t_{ab}^{\nu ^{}\nu }\right\rangle $$
(104)
From Ref. we find
$`\left\langle \left[t_{ab}^{\nu ^{}\mu }\right]^{}t_{ab}^{\nu ^{}\nu }\right\rangle ={\displaystyle \frac{\delta _{\mu \nu }}{2mN+1}}`$
and, since $`\sum _\nu \rho _{PB}^{\nu \nu }=1`$, we finally obtain
$$\langle T\rangle _m=\frac{N^2}{2N+\frac{1}{m}}.$$
(105)
Now we observe that, as $`m\rightarrow \mathrm{\infty }`$, the effect of the PB is to kill the WLC, as expected.
We find the variance of $`T`$ only in the simplified situation described by Eq. (96)
$$\left[var(T)\right]_m=\frac{1}{m^2}var\left[\underset{\mu ,\nu =1}{\overset{m}{}}\underset{ab}{}\left|t_{ab}^{\mu \nu }\right|^2\right].$$
(106)
The variance appearing on the right-hand side of this last equation can be read from Eq. (98), with the replacement $`N\rightarrow mN`$, to find
$$\left[var(T)\right]_m=\frac{1}{m^2}\frac{N(N+\frac{1}{m})^2}{\left(2N+\frac{1}{m}\right)^2\left(2N+\frac{3}{m}\right)}.$$
(107)
For large $`m`$, conductance fluctuations are killed as well, as expected.
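A quick tabulation of Eqs. (105) and (107) (our own, with an arbitrary choice of $`N`$) makes the suppression of the weak-localization correction and of the conductance fluctuations with growing $`m`$ explicit:

```python
def avg_T(N, m):
    return N**2 / (2*N + 1.0/m)                                    # Eq. (105)

def var_T(N, m):
    return (1.0/m**2) * N*(N + 1.0/m)**2 / ((2*N + 1.0/m)**2 * (2*N + 3.0/m))   # Eq. (107)

N = 2
for m in (1, 2, 5, 50):
    print(m, avg_T(N, m), var_T(N, m))
# m = 1 reproduces Eqs. (97)-(98); both the WLC and the fluctuations vanish as m grows
```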
## VI CONCLUSIONS
We have discussed the quantum electronic conduction problem in a mesoscopic system, allowing for the presence of a phase breaker (PB) the system can interact with. The PB can exist in $`m`$ quantum-mechanical states. It could represent an impurity with internal degrees of freedom, so that its state may change via the interaction with the electrons: for instance, it could be a magnetic impurity interacting with the electron spin. The PB might also represent the environment –for instance, the phonon field– whose state is allowed to change.
We first studied the problem of a single electron interacting with the PB, in order to investigate the effect of the latter on the interference terms that are there in the absence of the PB. We described the static, as well as the dynamical scatterers (the PB), in terms of their scattering or $`S`$ matrices: this makes the discussion very intuitive, quite general, and amenable to the application of random-matrix models that have been developed in the past. We found an $`S`$-matrix formulation that is capable of describing the crossover from a purely coherent response to a classical, or incoherent, one.
We then set out to study the conduction problem –a many-electron problem. The e-e interaction is treated in an RPA approximation. We could make the very general statement (that so far, to our knowledge, was known only in the absence of a PB) that in the dc limit one can give an explicit expression for the current in terms of the potential difference applied between the two reservoirs, the full potential profile not being needed. That expression involves the conductivity tensor $`\sigma _{\alpha \beta }^0(𝐫,𝐫^{})`$ which, in the absence of the PB, can be calculated in terms of single-particle Green’s function and, eventually, single-particle transmission coefficients. In the presence of the PB, the e-e interaction induced by the PB has not allowed us to follow a similar path. That induced interaction is there even in the absence of the e-e Coulomb interaction, which, within RPA, was disposed of and replaced, in turn, by the solution of a self-consistent problem.
We proposed a model for the e-PB interaction that can be diagonalized and transformed into $`m`$ single-particle problems. The conduction problem splits into $`m`$ single-particle problems as well, and the final result for the conductance, given in Eq. (94), is like the standard one without a PB, except that now an extra trace over the $`m`$ PB states appears. We should stress that, within the model, result (94) is not perturbative in the e-PB interaction strength. However, the model implies a single-electron-PB $`S`$ matrix of a rather restricted form. For a “generic” single-electron-PB $`S`$ matrix we do not even know under what approximations (that would imply disregarding the induced e-e interaction) the conduction problem can be reduced to a single-electron one. The result expressed in Eq. (94) is so intuitive, though, that we conjecture its validity under a more general e-PB $`S`$ matrix, within some suitable approximation, as discussed at the end of section III around equation (57).
It would be instructive to solve this same conduction problem within the spirit of Landauer’s approach , just as in Ref. , and verify that one arrives at the result (94) under the special model used here and not in general. A suitable approximation for treating the more general problem might suggest itself in that approach. But this we have not yet succeeded in doing.
Finally, a random-matrix model was set up for the description of the e-PB system, with possible applications to chaotic cavities: the effect of the PB on the conductance average and its fluctuations was analyzed. The limitations of the model became apparent in that study; in contrast, a generic e-PB $`S`$ matrix was shown to give much more freedom in the description of the weak-localization correction and the conductance fluctuations.
## acknowledgements
One of the authors (PAM) acknowledges partial financial support from CONACyT, Mexico, through Contract No. 2645P-E, as well as the hospitality and partial financial support of the I.T.P. of the Technion, Haifa, and of the Weizmann Institute, Rehovot, where parts of this work were discussed. He also acknowledges fruitful discussions with H. Suárez and with M. Büttiker, the latter during a stay at the Université de Genève, whose hospitality is greatly acknowledged. We acknowledge instructive discussions with the late R. Landauer and with D. E. Khmelnitskii. The research was partially supported by the DIP project on “Quantum Electronics in Low-Dimensional Systems” and by the GIF project on “Electron Interactions and Disorder in Finite Conductors”.
# Spin Down of Pulsations in the Cooling Tail of an X-ray Burst from 4U 1636-53
## 1 Introduction
Millisecond oscillations in the X-ray brightness during thermonuclear bursts, “burst oscillations”, have been observed from six low mass X-ray binaries (LMXB) with the Rossi X-ray Timing Explorer (RXTE) (see Strohmayer, Swank & Zhang et al. 1998 for a recent review). Considerable evidence points to rotational modulation as the source of these pulsations (see for example, Strohmayer, Zhang & Swank 1997; Strohmayer & Markwardt 1999). Anisotropic X-ray emission caused by either localized or inhomogeneous nuclear burning produces either one or a pair of hot spots on the surface which are then modulated by rotation of the neutron star. A remarkable property of these oscillations is the frequency evolution which occurs in the cooling tail of some bursts. Recently, Strohmayer & Markwardt (1999) have shown that the frequency in the cooling tail of bursts from 4U 1728-34 and 4U 1702-429 is well described by an exponential chirp model whose frequency increases asymptotically toward a limiting value. Strohmayer et. al (1997) have argued this evolution results from angular momentum conservation of the thermonuclear shell, which cools, shrinks and spins up as the surface radiates away the thermonuclear energy. To date, only frequency increases have been reported in the cooling tails of bursts, consistent with settling of the shell as its energy is radiated away.
In this Letter we report observations of a decreasing burst oscillation frequency in the tail of an X-ray burst. We find that an episode of spin down in the cooling tail of a burst observed on December 31, 1996 at 17:36:52 UTC (hereafter, burst A, or the “spin down” burst) from 4U 1636-53 is correlated with the presence of an extended tail of thermal emission. In §2 we present an analysis of the frequency evolution in this burst, with emphasis on the spin down episode. In §3 we present time resolved energy spectra of the spin down burst, and we investigate the energetics of the extended tail. Throughout, we compare the temporal and spectral behavior of the spin down burst with a different burst observed on December 29, 1996 at 23:26:46 UTC (hereafter, burst B) which shows neither a spin down episode nor an extended tail of emission, but which is similar to the spin down burst in most other respects. We conclude in §4 with a summary and discussion of the spin down episode and extended emission in the context of an additional, delayed thermonuclear energy release which might re-expand the thermonuclear shell and perhaps account for both the spin down and the extended tail of thermal emission.
## 2 Evidence for Spin Down
Oscillations at 580 Hz were discovered in thermonuclear bursts from 4U 1636-53 by Zhang et al. (1996). More recently, Miller (1999a) has reported evidence during the rising phase of bursts of a significant modulation at half the 580 Hz frequency suggesting that 580 Hz is twice the neutron star spin frequency and that a pair of antipodal spots produce the oscillations. Here we focus on a burst from 4U 1636-53 which shows a unique decrease in the $`580`$ Hz oscillation frequency. To study the evolution in frequency of burst oscillations we employ the $`Z_n^2`$ statistic (Buccheri et al. 1983). We have described this method previously, and details can be found in Strohmayer & Markwardt (1999). We first constructed for both bursts A and B a dynamic “variability” spectrum by computing $`Z_1^2`$ as a function of time on a grid of frequency values in the vicinity of 580 Hz. We used 2 second intervals to compute $`Z_1^2`$ and started a new interval every 0.25 seconds. This variability spectrum is very similar to a standard dynamic power spectrum, however, the $`Z_1^2`$ statistic allows for a more densely sampled frequency grid than a standard Fast Fourier Transform power spectrum. The results are shown in Figure 1 (bursts A and B are in the top and bottom panels, respectively) as contour maps of constant $`Z_1^2`$ through each burst. The contour map for the spin down burst (top panel) suggests that the oscillation began with a frequency near 579.6 Hz at burst onset, reappeared later in the burst after “touchdown” of the photosphere at an increased frequency, $`580.7`$ Hz, but then beginning near 11 seconds dropped to $`579.6`$ Hz over several seconds. For comparison, we also show in Figure 1 (bottom panel) a similar variability spectrum for burst B which also shows strong oscillations near 580 Hz, but shows no evidence of a similar spin down episode.
To investigate the evolution of the oscillation frequency more quantitatively we fit a model for the temporal evolution of the frequency, $`\nu (t)`$, to the 4.5 second interval during which the oscillation is evident in the dynamic variability spectrum (Figure 1, top panel). Our model is composed of two linear segments, each with its own slope, joined continuously at a break time $`t_b`$. This is similar to the model employed by Miller (1999b), and has four free parameters, the initial frequency, $`\nu _0`$, the two slopes, $`d_\nu ^1`$ and $`d_\nu ^2`$, and the break time, $`t_b`$. We used this frequency model to compute phases $`\varphi _{t_j}`$ for each X-ray event, viz. $`\varphi _{t_j}=\int _0^{t_j}\nu (t^{})𝑑t^{}`$, where $`t_j`$ are the photon arrival times, and then varied the model parameters to maximize the $`Z_1^2`$ statistic. We used a downhill simplex method for the maximization (see Press et al. 1989). Figure 2 compares $`Z_1^2`$ vs. parameter $`\nu _0`$ for the best fitting two segment model (solid histogram) and a simple constant frequency model ($`\nu (t)=\nu _0`$, dashed histogram). The two segment model produces a significant increase in the maximum $`Z_1^2`$ of about 40 compared with no frequency evolution, and it also yields a single, narrower peak in the $`Z_1^2`$ distribution. The increase of 40 in $`Z_1^2`$, which for a purely random process is distributed as $`\chi ^2`$ with 2 degrees of freedom, argues convincingly that the frequency drop is significant. We note that Miller (1999b) has also identified the same spin down episode during this burst using a different, but related method. The best fitting two segment model is shown graphically as the solid curve in Figure 1 (top panel).
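For readers who want to reproduce this kind of analysis, the following sketch (ours, not the code used by the authors) evaluates $`Z_1^2`$ for a two-segment frequency model given a list of photon arrival times; whether the quoted slopes are fractional rates (s$`^{-1}`$) or absolute rates (Hz s$`^{-1}`$) is not spelled out here, and the sketch assumes the former:

```python
import numpy as np

def z1_squared(times, nu0, d1, d2, tb):
    """Z_1^2 for a two-segment frequency model: nu(t) = nu0*(1 + d1*t) for t < tb,
    continuing with fractional slope d2 (nu(t) = nu(tb) + nu0*d2*(t - tb)) afterwards."""
    t = np.asarray(times, dtype=float)
    phase_early = nu0 * (t + 0.5 * d1 * t**2)                  # integral of nu(t') up to t
    nu_tb = nu0 * (1.0 + d1 * tb)
    phase_tb = nu0 * (tb + 0.5 * d1 * tb**2)
    dt = t - tb
    phase_late = phase_tb + nu_tb * dt + 0.5 * nu0 * d2 * dt**2
    phi = 2.0 * np.pi * np.where(t < tb, phase_early, phase_late)
    n = t.size
    return (2.0 / n) * (np.cos(phi).sum()**2 + np.sin(phi).sum()**2)

# toy usage with simulated, unmodulated arrival times (so Z_1^2 should be of order a few)
rng = np.random.default_rng(0)
toy_times = np.sort(rng.uniform(8.0, 12.5, size=5000))
print(z1_squared(toy_times, nu0=580.7, d1=9.0e-5, d2=-1.0e-3, tb=10.86))
```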
## 3 Time history, spectral evolution and burst energetics
A comparison of the 2 - 20 keV time history of the spin down burst with other bursts from the same observations reveals that this burst is also unique in having an extended tail of thermal emission. This is well illustrated in Figure 3, which compares the 2 - 20 keV time histories of the spin down burst and burst B. To further investigate the energetics of the thermal burst emission we performed a spectral evolution analysis. We accumulated energy spectra for varying time intervals through both bursts. Using XSPEC we fit blackbody spectra to each interval by first subtracting a pre-burst interval as background, and then investigated the temporal evolution of the blackbody temperature, $`kT`$, inferred radius, $`R_{BB}`$ and bolometric flux, $`F`$. In most intervals we obtained acceptable fits with the blackbody model. The results for both bursts are summarized in Figure 4. We have aligned the burst profiles in time for direct comparison. Both bursts show evidence for radius expansion shortly after onset in that $`kT`$ drops initially and then recovers. Their peak fluxes are also similar, consistent with being Eddington limited. Out to about 7 seconds post-onset both bursts show the same qualitative behavior; after this, however, the spin down burst (solid curve) shows a much more gradual decrease in both the blackbody temperature $`kT`$ and the bolometric flux $`F`$ than is evident in burst B. We integrated the flux versus time profile for each burst in order to estimate fluences and establish the energy budget in the extended tail. We find fluences of $`1.4\times 10^{-6}`$ and $`5.1\times 10^{-7}`$ ergs cm$`^{-2}`$ for bursts A and B respectively. That is, the spin down burst has about 2.75 times more energy than burst B. Put another way, most of the energy in the spin down burst is in the extended tail. In Figure 4 we also indicate with a vertical dotted line the time $`t_b`$ associated with the beginning of the spin down episode based on our modelling of the 580 Hz oscillations. The spectral evolution analysis indicates that at about the same time the spin down episode began there was also a change in its spectral evolution as compared with that of burst B. This behavior is evident in Figure 5 which shows the evolution of $`kT`$ (dashed curve) and the inferred blackbody radius $`R_{BB}`$ (solid curve) for the spin down burst. Notice the secondary increase in $`R_{BB}`$ and an associated dip in $`kT`$ near time $`t_b`$ (vertical dotted line). This behavior is similar to the signature of radius expansion seen earlier in both bursts, but at a weaker level; it suggests that at this time there may have been an additional thermonuclear energy input in the accreted layers, perhaps at greater depth, which then diffused out on a longer timescale, producing the extended tail. That this spectral signature occurred near the same time as the onset of the spin down episode suggests that the two events may be causally related.
## 4 Discussion and Summary
The observation of thermonuclear bursts with extended tails is not a new phenomenon. Czerny, Czerny, & Grindlay (1987) reported on a burst from the soft X-ray transient Aql X-1 which showed a long, relatively flat X-ray tail. Bursts following this one were found to have much shorter, weaker tails. Fushiki et al. (1992) argued that such long tails were caused by an extended phase of hydrogen burning due to electron captures at high density ($`\rho \gtrsim 10^7`$ g cm$`^{-3}`$) in the accreted envelope. Such behavior is made possible because of the long time required to accumulate an unstable pile of hydrogen-rich thermonuclear fuel when the neutron star is relatively cool ($`\sim 10^7`$ K) prior to the onset of accretion. This, they argued, could occur in transients such as Aql X-1, which have long quiescent periods during which the neutron star envelope can cool, and thus the first thermonuclear burst after the onset of an accretion driven outburst should show the longest extended tail. Other researchers have shown that the thermal state of the neutron star envelope, the abundance of CNO materials in the accreted matter, and variations in the mass accretion rate all can have profound effects on the character of bursts produced during an accretion driven outburst (see Taam et al. 1993; Woosley & Weaver 1985; and Ayasli & Joss 1982, and references therein). For example, Taam et al. (1993) showed that for low CNO abundances and cool neutron star envelopes the subsequent bursting behavior can be extremely erratic, with burst recurrence times varying by as much as two orders of magnitude. They also showed that such conditions produce dwarf bursts, with short recurrence times and peak fluxes less than a tenth Eddington, and that many bursts do not burn all the fuel accumulated since the last X-ray burst. Thus residual fuel, in particular hydrogen, can survive and provide energy for subsequent bursts. These effects lead to a great diversity in the properties of X-ray bursts observed from a single accreting neutron star. Some of these effects were likely at work during the December, 1996 observations of 4U 1636-53 discussed here, as both a burst with a long extended tail and a dwarf burst were observed (see Miller 1999b).
The spin up of burst oscillations in the cooling tails of thermonuclear bursts from 4U 1728-34 and 4U 1702-429 has been discussed in terms of angular momentum conservation of the thermonuclear shell (see Strohmayer et al. 1997; Strohmayer & Markwardt 1999). Expanded at burst onset by the initial thermonuclear energy release, the shell spins down due to its larger rotational moment of inertia compared to its pre-burst value. As the accreted layer subsequently cools its scale height decreases and it comes back into co-rotation with the neutron star over $`\sim `$ 10 seconds. To date the putative initial spin down at burst onset has not been observationally confirmed, perhaps due to the radiative diffusion delay, on the order of a second, which can hide the oscillations until after the shell has expanded and spun down (see, for example, Bildsten 1998). We continue to search for such a signature, however. Although the initial spin down at burst onset has not been seen, the observation of a spin down episode in the tail of a burst begs the question: can it be understood in a similar context, that is, by invoking a second, thermal expansion of the burning layers? The supporting evidence is the presence of spin down associated with the spectral evidence for an additional energy source, the extended tail, as well as the spectral variation observed at the time the spin down commenced (see Figure 5). Based on these observations we suggest that the spin down began with a second episode of thermonuclear energy release, perhaps in a hydrogen-rich layer underlying that responsible for the initial instability, and built up over several preceding bursts. Such a scenario is not so unlikely based on previous theoretical work (see Taam et al. 1993, Fushiki et al. 1992). The observed rate of spin down, $`d_\nu ^2=-1.01\times 10^{-3}`$ s$`^{-1}`$, interpreted as an increase in the height of the angular momentum conserving shell gives $`dr/dt\approx (\mathrm{\Delta }\nu /2\mathrm{\Delta }T\nu _0)R\approx 5.25`$ m s$`^{-1}`$, for a neutron star radius of $`R=10`$ km. Calculations predict increases in the scale height of the bursting layer on the order of 20-30 m during thermonuclear flashes (see Joss 1977; and Bildsten 1995). Based on this and the energy evident in the extended tail, the additional expansion of about 12 m does not appear overly excessive. If correct this scenario would require that the oscillation frequency eventually increase again later in the tail. Unfortunately the oscillation dies away before another increase is seen.
It is interesting to note that bursts from 4U 1636-53 do not appear to show the same systematic evolution of the oscillation frequency as is evident in bursts from 4U 1728-34 and 4U 1702-429 (see for example, Miller 1999b; and Strohmayer & Markwardt 1999). In particular, there is no strong evidence for an exponential-like recovery that is often seen in 4U 1728-34 and 4U 1702-429. Rather, in 4U 1636-53, when the burst oscillation frequency reappears after photospheric touchdown in many bursts it appears almost immediately at the higher frequency. In the context of a spinning shell this might suggest that the shell recouples to the underlying star more quickly than in 4U 1728-34 or 4U 1702-429. Interestingly, 4U 1636-53 is also the only source to show significant pulsations at the sub-harmonic of the strongest oscillation frequency, and this has been interpreted as revealing the presence of a pair of antipodal hot spots (see Miller 1999a). These properties may be related and could, for example, indicate the presence of a stronger magnetic field in 4U 1636-53 than the other sources.
Another physical process which could alter the observed frequency is related to general relativistic (GR) time stretching. If the burst evolution modulates the location in radius of the photosphere, then the rotation at that radius, as seen by a distant observer, is affected by a redshift such that, $`\mathrm{\Delta }r/R=(R/r_g)(1-r_g/R)(1-1/(\nu _h/\nu _l)^2)`$, where $`\mathrm{\Delta }r`$ is the change in height of the photosphere, $`r_g=2GM/c^2`$ is the Schwarzschild radius, and $`\nu _h/\nu _l`$ is the ratio of the highest and the lowest observed frequencies. If this were the sole cause of the frequency changes, then it would imply a height change for the photosphere of $`\sim 120`$ m, which is much larger than the increases predicted theoretically for bursting shells. Note that this effect works counter to angular momentum conservation of the shell, since increasing the height makes the frequency higher compared to deeper layers. Since the thicknesses of pre- and post-burst shells are on the order of 20 - 50 m, we estimate from the above that the GR correction amounts to about 10 - 20% of the observed frequency change, and, if the angular momentum conservation effect is at work, requires a modest increase in the height of the shell over that estimated non-relativistically.
We have reported in detail on the first observations of a spin down in the frequency of X-ray brightness oscillations in an X-ray burst. We have shown that this event is coincident with the occurrence of an extended tail as well as a spectral signature which both suggest a secondary release of thermonuclear energy in the accreted layer. It is always difficult to draw conclusions based on a single event; however, if the association of spin down episodes with an extended X-ray tail can be confirmed in additional bursts this will provide strong evidence in support of the hypothesis that angular momentum conservation of the thermonuclear shell is responsible for the observed frequency variations during bursts. The combination of spectral and timing studies during bursts with oscillations can then give us a unique new probe of the physics of thermonuclear burning on neutron stars.
We thank Craig Markwardt and Jean Swank for many helpful discussions.
## 5 Figure Captions
Figure 1: Dynamic variability spectra for bursts A (top) and B (bottom) from 4U 1636-53. Shown are contours of constant power $`Z_1^2`$ computed from 2 s intervals with a new interval every 0.25 s. The count rate vs. time in the PCA is also shown. For burst A, the two segment frequency evolution model is shown as the solid curve. The best fitting model parameters were: $`\nu _0=580.70`$ Hz, $`d_\nu ^1=9.0\times 10^{-5}`$ s$`^{-1}`$, $`d_\nu ^2=-1.0\times 10^{-3}`$ s$`^{-1}`$, and $`t_b=10.86`$ s.
Figure 2: A plot of $`Z_1^2`$ vs. frequency parameter $`\nu _0`$ for the spin down burst (burst A). The solid curve shows the result for the best fitting two-segment frequency evolution model. The dashed curve was produced assuming no frequency evolution.
Figure 3: 2 - 20 keV light curves for bursts A (solid) and B (dashed) from 4U 1636-53. Notice the long, extended tail in burst A. The pre-burst and peak countrates were virtually the same for both bursts. The bursts were aligned in time to facilitate direct comparison.
Figure 4: Results of spectral evolution analysis for bursts A and B. The top panel shows evolution of the bolometric flux deduced from the best fitting black body parameters for bursts A (solid) and B (dashed). Note the long tail on burst A, and that the peak fluxes are consistent. The bottom panel shows the evolution of the black body temperature $`kT`$ for bursts A (solid) and B (dashed). The initial drop in $`kT`$ followed by an increase is a signature of radiation driven photospheric expansion. The dotted vertical line marks $`t_b`$, the break time which marks the onset of spin down (see discussion in text).
Figure 5: Evolution of the black body temperature $`kT`$ (dashed) and inferred radius $`R_{BB}`$ (solid) for the spin down burst (burst A). The dotted vertical line marks $`t_b`$, the break time which marks the onset of spin down (see discussion in text). The burst begins with an episode of photospheric radius expansion, marked by the simultaneous decrease in temperature and increase in radius. Notice the secondary signature of a weak radius expansion event near time $`t_b`$ (dotted vertical line).
no-problem/9907/cond-mat9907295.html
# Maximum thickness of a two-dimensional trapped Bose system
## Abstract
The trapped Bose system can be regarded as two-dimensional if the thermal fluctuation energy is less than the lowest energy in the perpendicular direction. Under this assumption, we derive an expression for the maximum thickness of an effective two-dimensional trapped Bose system.
PACS numbers: 05.30.Jp, 03.75.F, 68.65.+g
It has long been known that Bose-Einstein condensation (BEC) cannot occur in either a two-dimensional (2D) or a one-dimensional (1D) uniform Bose gas at finite temperature because thermal fluctuations destabilize the condensate . However, when a spatially varying potential which breaks the translational invariance exists, the BEC may occur in low dimensional inhomogeneous systems. In the presence of harmonic trapping, the effect of thermal fluctuations is strongly quenched due to the different behavior exhibited by the density of states.
In three-dimensional (3D) traps, the experimental results for the BEC have been obtained assuming that the thermal fluctuation energy $`k_BT`$ is much larger than all the oscillator energies ($`\hbar \omega _x,\hbar \omega _y,\hbar \omega _z`$). In order to achieve a 2D BEC in the trap, it is necessary to choose the frequency $`\omega _z`$ large enough to satisfy the condition $`\hbar \omega \ll k_BT_{2D}\ll \hbar \omega _z`$, where $`\omega =\sqrt{\omega _x\omega _y}`$ and $`T_{2D}`$ is the 2D transition temperature. This is a rather difficult condition to satisfy in the trap design, and it leads one to realize a 2D system in another way.
Recently, Safonov et al. have reported an observation of a quasi-2D BEC in liquid hydrogen layers. They successfully confined hydrogen atoms on a liquid <sup>4</sup>He surface which corresponds to a potential well of 20 $`\mu `$m width. Also, Gauck et al. claimed that they achieved another quasi-2D system of argon atoms confined in a planar matter waveguide with a thickness close to a $`\mu `$m. However, the question of up to what thickness we can regard the system as 2D remains open. The mathematical concept of 2D, which neglects one (z-direction) degree of freedom and allows particles to move only in a surface (the x-y plane), is not physically acceptable due to the uncertainty principle. In this communication, we suggest a criterion for the BEC to exhibit 2D behavior in 3D space, and obtain a maximum thickness of the 2D trapped Bose system.
In the 2D experimental setup, the z-directional thickness corresponds to ideal rigid walls. For the infinite potential well of $`-d/2\le z\le d/2`$, the lowest energy is given by $`E_g=\hbar ^2\pi ^2/2md^2`$. The system can be regarded as 2D as long as the thermal fluctuation energy is less than the z-directional lowest energy. That is,
$$k_BT_{2D}<\frac{\hbar ^2\pi ^2}{2md^2}.$$
(1)
The transition temperature of the BEC is not precisely known except for the ideal Bose gas in the harmonic trap. Although the systems used in the experiment were not ideal, the measured transition temperatures in 3D were found to be very close to the ideal gas value. A similar situation is expected in 2D. The transition temperature of the 2D BEC for the ideal Bose gas system in the harmonic trap is given by
$$k_BT_{2D}=\hbar \omega \left(\frac{N}{\zeta (2)}\right)^{1/2},$$
(2)
where $`N`$ is the number of atoms in the trap and $`\zeta (x)`$ is the Riemann zeta function.
Substituting Eq. (2) into Eq. (1), we obtain an effective 2D thickness of
$$\frac{d}{a_{ho}}<\frac{\pi }{\sqrt{2}}\left(\frac{\zeta (2)}{N}\right)^{1/4},$$
(3)
or
$$(\mathrm{Thickness\ of\ 2D})<\frac{2.516}{N^{1/4}}a_{ho},$$
(4)
where $`a_{ho}`$ is the harmonic oscillator length given as $`a_{ho}=\sqrt{\hbar /m\omega }`$. We note that a typical value for $`a_{ho}`$ is of the order of $`\mu `$m for alkali Bose atoms and much larger for hydrogen atoms.
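As a numerical illustration of Eqs. (2)-(4), the short script below evaluates $`T_{2D}`$ and the maximum thickness for an assumed sample of $`10^5`$ <sup>87</sup>Rb atoms in a 100 Hz trap; these input numbers are illustrative choices and are not taken from the text.

```python
import numpy as np

hbar = 1.0546e-34    # J s
kB = 1.3807e-23      # J / K
zeta2 = np.pi**2 / 6.0

# Illustrative (assumed) parameters: 1e5 Rb-87 atoms, 100 Hz in-plane trap frequency.
N = 1.0e5
m = 87 * 1.6605e-27              # kg
omega = 2 * np.pi * 100.0        # geometric mean in-plane frequency (rad/s)

a_ho = np.sqrt(hbar / (m * omega))                         # oscillator length
T2D = hbar * omega / kB * np.sqrt(N / zeta2)               # Eq. (2)
d_max = (np.pi / np.sqrt(2)) * (zeta2 / N) ** 0.25 * a_ho  # Eqs. (3)-(4)

print(f"a_ho = {a_ho * 1e6:.2f} um, T_2D = {T2D * 1e6:.2f} uK, d_max = {d_max * 1e6:.3f} um")
```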
In conclusion, we have obtained a maximum value of the effective thickness of a 2D trapped Bose system in which to observe the BEC. The maximum radius of a 1D trapped Bose system could be obtained in a similar way.
We thank Professors C.K. Kim, K. Nahm, and M. Chung for useful discussions.
no-problem/9907/nucl-th9907066.html
# Reconstruction of the Proton Source in Relativistic Heavy Ion Collisions
## Abstract
We describe a direct method to reconstruct the transverse proton source formed in a relativistic heavy ion collision, making use of experimentally measured proton and deuteron spectra and assuming that deuterons are formed via two-nucleon coalescence. We show that an ambiguity with respect to the source temperature still persists and we indicate a possible solution to the problem.
It has been recently shown that a simple description of the proton phase space distribution can provide a good qualitative understanding of deuteron spectra . On the other hand, it appeared clear that a more solid method to extract the properties of the source had to be established. We have therefore reconstructed the phase space distribution of protons directly from the observed proton and deuteron spectra, exploiting the coalescence prescription together with the notion of collective flow. Here we will outline the procedure while all details can be found in . We make use of a relativistic description of collective flow based on the boost-invariant picture for the longitudinal expansion, together with a longitudinally-independent transverse velocity, and therefore write the proton phase space distribution as
$$f_p(x,p)=(2\pi )^3\mathrm{exp}(-p_\mu u^\mu (x)/T_0)B_pn_p(r_{\perp }),$$
(1)
where $`p_\mu u^\mu (x)=\gamma (r)\left(m_{\perp }\mathrm{cosh}(y-\eta )-\vec{p}_{\perp }\cdot \vec{v}(\vec{r})\right)`$ is the energy in the global frame and $`B_p`$ is the normalisation coefficient of the Boltzmann distribution in the local frame. The local density $`n_p(r)`$ is assumed to be independent of the longitudinal rapidity.
The deuteron phase space distribution is calculated using the coalescence model. Its evaluation is simplified when considering large and hot systems, neglecting the smearing effect of the deuteron Wigner density in comparison to the characteristic scales of the system in position and momentum space. One therefore obtains the deuteron phase space distribution
$$f_d(x,p)\approx \frac{3}{8}R_{np}\left[f_p(x,p/2)\right]^2.$$
(2)
The neutron to proton ratio in the source was taken to be $`R_{np}=1.2`$. The deuteron phase space distribution has the same structure as the proton one in eq. (1), now with $`n_p(r)`$ replaced by $`n_d(r)=\lambda _dn_p^2(r)`$, where $`\lambda _d=3/8R_{np}(2\pi )^3B_p^2/B_d`$.
One can now calculate proton and deuteron invariant momentum spectra using the Cooper-Frye formula, on an approximate freeze-out hypersurface of constant longitudinal proper time $`\tau _0`$. The integrations over the space-time rapidity and the azimuthal angle can be easily performed, yielding an expression in terms of the two functions $`v(r)`$ and $`n(r)`$. The ambiguity in the description of the single particle spectrum is explicit, since the two functions cannot be mapped out uniquely from a single function such as the transverse momentum spectrum. To partially remove the ambiguity, we first change the integration variable and introduce the auxiliary function $`\stackrel{~}{n}`$ through the relation $`v\,dv\,\stackrel{~}{n}(v)=r\,dr\,\tau _0n(r)`$, obtaining the new expression for the momentum spectrum
$$S(p_{\perp })=4\pi Bm_{\perp }\int _0^1dv\,v\,K_1\left(\frac{\gamma m_{\perp }}{T_0}\right)I_0\left(\frac{v\gamma p_{\perp }}{T_0}\right)\stackrel{~}{n}(v).$$
(3)
The one-to-one correspondence between $`\stackrel{~}{n}(v)`$ and $`S(p_{\perp })`$ is now evident. We then make use of the coalescence model and the definition of $`\stackrel{~}{n}`$, both for protons and for deuterons, to obtain a first order differential equation that can be directly integrated and gives the closed solution
$$r^2=2\frac{\lambda _d}{\tau _0}\int _0^vdu\,u\,\frac{\stackrel{~}{n}_p^2(u)}{\stackrel{~}{n}_d(u)}.$$
(4)
Therefore, by independently extracting the functions $`\stackrel{~}{n}_p`$ and $`\stackrel{~}{n}_d`$ from the observed momentum spectra $`S_p(p_{\perp })`$ and $`S_d(p_{\perp })`$, we can find the function $`r(v)`$ by a simple numerical integration. Inverting the obtained function as $`r(v)\to v(r)`$, we obtain the collective velocity profile. We also obtain the local proton density as
$$n_p(r)=\frac{v(r)}{r}\frac{dv(r)}{dr}\frac{\stackrel{~}{n}_p(v(r))}{\tau _0}.$$
(5)
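A minimal numerical sketch of this inversion is given below. The profile functions standing in for $`\stackrel{~}{n}_p`$ and $`\stackrel{~}{n}_d`$ are arbitrary placeholders rather than the fitted forms, and the value of $`\lambda _d`$ is likewise assumed; only the structure of Eqs. (4) and (5) is meant to be illustrated.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

tau0 = 10.0    # freeze-out proper time in fm/c (value used in the text)
lam_d = 5.0    # placeholder for lambda_d (depends on B_p, B_d and R_np)

# Placeholder profiles standing in for the fitted n~_p(v) and n~_d(v).
def n_p_tilde(v):
    return 1.0e3 * np.exp(-8.0 * v**2)

def n_d_tilde(v):
    return 1.0e2 * np.exp(-12.0 * v**2)

v = np.linspace(1e-4, 0.95, 500)

# Eq. (4): r^2(v) = 2 (lambda_d / tau0) * integral_0^v du u n~_p^2(u) / n~_d(u)
integrand = v * n_p_tilde(v)**2 / n_d_tilde(v)
r = np.sqrt(2.0 * lam_d / tau0 * cumulative_trapezoid(integrand, v, initial=0.0))

# Drop the r = 0 endpoint, then invert r(v) -> v(r) and apply Eq. (5).
v, r = v[1:], r[1:]
dv_dr = np.gradient(v, r)
n_p_of_r = (v / r) * dv_dr * n_p_tilde(v) / tau0   # local proton density on the grid r(v)
```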
The described procedure was applied to the transverse momentum spectra resulting from Pb+Pb collisions and measured by the NA44 collaboration at the CERN-SPS . We fitted $`\stackrel{~}{n}_p`$ and $`\stackrel{~}{n}_d`$ to these data, using eq. (3) for protons and deuterons and assuming different values for $`T_0`$, from $`50`$ MeV to $`150`$ MeV. We chose a form of the profile functions characterized by three parameters, and we extracted their values from the experimental data with a Monte Carlo search minimising $`\chi ^2`$. The fitted spectra are shown, for the extreme values of temperature considered, in the top part of Fig. 2. Although evaluated at different temperatures, they are indistinguishable from one another. On the other hand, the profiles $`\stackrel{~}{n}`$ turn out to be very different for different temperatures.
In all calculations the freeze-out time was fixed at $`\tau _0=10`$ fm/c . After numerical integration of eq. (4), we obtained the function $`v(r)`$ shown in Fig. 2 for the two extreme temperatures. It shows a linear rise at small $`r`$ and saturates for large $`r`$. The velocity profiles clearly depend on the temperature chosen. The local proton density is also plotted in Fig. 2. At high temperature the density shows a shell-like structure which disappears as the temperature is lowered. Similar shell-like structures have been recently found in analytic solutions of non-relativistic hydrodynamics . From the plots one can observe the different transverse sizes corresponding to different temperatures. It is therefore necessary to know $`T_0`$ precisely in order to determine the system size. This information cannot be extracted solely from proton and deuteron spectra. Contrary to what is commonly done, source radii cannot be extracted from the $`d/p^2`$ ratio ($`B_2`$ emitting volume) unless $`T_0`$ is known.
It is clear that to resolve the remaining ambiguity in the source temperature one needs some additional experimental information. Recently, $`\pi \pi `$ correlation data were used as a constraint . Since pions may freeze-out in a different way than protons, it would be even better to consider $`pp`$ correlations, although they are more sensitive to final state interactions than pions. On the other hand, heavier clusters can provide additional constraints. We now address this issue following and we describe the fusion process of $`A`$-nucleons into a bound state within the density matrix formalism. Making use of the same approximation leading to eq. (2),
one obtains the cluster phase space distribution
$$f_c(x,p)\approx g_A\frac{(R_{np})^N}{Z!N!}[f_p(x,p/A)]^A.$$
(6)
For $`N=Z=1`$, $`A=2`$ and $`g_2=3/8`$ it reduces to eq. (2). The statistical prefactor $`g_A`$ is of crucial importance. In the conventional approach we have $`g_A=(2S_A+1)Z!N!/2^AA!`$, which results in $`g_d=3/8`$, $`g_t=g_{{}_{}{}^{3}He}=1/12`$, and $`g_{{}_{}{}^{4}He}=1/96`$. Although straightforward, this approach has been successful only for the description of deuteron production. In fact, it significantly underestimates the yields of heavier clusters. A possible improvement can be achieved by allowing for additional formation processes. Besides the already considered direct process, it is possible that deuteron-like correlations contribute to cluster formation. For the triton, as an example, we should therefore account for the possibility that a proton and a neutron are already in a bound state with deuteron quantum numbers and coalesce with another neutron, with a different statistical prefactor. More precisely we can write $`g_t=g_{pnn\to t}+2g_{pn\to d}g_{dn\to t}`$, where the factor 2 counts the different ways to associate the proton with the two neutrons in forming a deuteron. The spin-isospin counting is straightforward and gives the modified statistical prefactor $`g_t=1/3`$, therefore enhancing the triton yield by a factor 4. The same arguments apply to $`{}_{}{}^{3}He`$, so that $`g_{{}_{}{}^{3}He}=1/3`$. The case of $`{}_{}{}^{4}He`$ is more involved. Counting all the possible processes we obtained $`g_{{}_{}{}^{4}He}=13/48`$, so that the $`{}_{}{}^{4}He`$ yield is increased by a factor 26. For more details see .
We now use the coalescence model in this improved version to calculate the transverse mass spectrum of tritons. Using the flow and density profiles extracted from the analysis of $`p`$ and $`d`$ spectra, we examine to what extent heavier clusters can constrain the ambiguity of the temperature. Using eqs. (1) and (6), we obtain $`n_t(r)=\lambda _tn_p^3(r)`$, with $`\lambda _t=g_t(R_{np})^2/2(2\pi )^6B_p^3/B_t`$, while the collective velocity is the same for all clusters. The triton spectrum is plotted in Fig. 2, together with the previously fitted $`p`$ and $`d`$ spectra. The absolute values and the shape compare very well with the experimental data. This confirms that the improved statistical approach is consistent with the measured spectra. Furthermore, the results obtained with the two extreme temperatures show a different curvature. The high temperature case presents a clear bending over, absent for low temperature. We argue that this specific difference might narrow down the allowed temperatures and therefore provide additional constraints. Unfortunately, the triton spectrum was measured only in a limited range in transverse momentum, and therefore a quantitative fit is not useful.
We also suggest that a further possibility to constrain the temperature lies in the combined study of single and composite spectra, together with $`pp`$ correlations. The extracted profiles shown in Fig. 2 could in fact be used to evaluate the $`pp`$ correlation function, which is sensitive to temperature and flow in a different way with respect to the inverse slopes of the spectra. These remarks are important for the interpretation of the future experiments at the Relativistic Heavy Ion Collider.
no-problem/9907/hep-ph9907541.html
HZPP-9909
July 30, 1999
The Influence of Multiplicity Distribution
on the Erraticity Behavior of Multiparticle Production <sup>1</sup><sup>1</sup>1 This work is supported in part by the Natural Science Foundation of China (NSFC) under Grant No.19575021.
Liu Zhixu Fu Jinghua Liu Lianshou
Institute of Particle Physics, Huazhong Normal University, Wuhan 430079 China
Tel: 027 87673313 FAX: 027 87662646 email: [email protected]
Abstract The origin of the erraticity behaviour observed recently in the experiment is studied in some detail. The negative-binomial distribution is used to fit the experimental multiplicity distribution. It is shown that, with the multiplicity distribution taken into account, the experimentally observed erraticity behaviour can be well reproduced using a flat probability distribution. The dependence of erraticity behaviour on the width of multiplicity distribution is studied.
PACS number: 13.85 Hd
Keywords: Multiparticle production, Negative-binomial distribution
Erraticity
Since the finding of unexpectedly large local fluctuations in a high multiplicity event recorded by the JACEE collaboration , the investigation of non-linear phenomena in high energy collisions has attracted much attention . The anomalous scaling of factorial moments, defined as
$`F_q`$ $`=`$ $`{\displaystyle \frac{1}{M}}{\displaystyle \sum _{m=1}^{M}}{\displaystyle \frac{\langle n_m(n_m-1)\cdots (n_m-q+1)\rangle }{\langle n_m\rangle ^q}}`$ (1)
at diminishing phase space scale or increasing division number $`M`$ of phase space :
$`F_q\propto M^{\varphi _q},`$ (2)
called intermittency (or fractal), has been proposed for this purpose. The average $`\langle \cdots \rangle `$ in Eq. (1) is over the whole event sample and $`n_m`$ is the number of particles falling in the $`m`$th bin. This kind of anomalous scaling has been observed successfully in various experiments .
A recent new development along this direction is the event-by-event analysis . An important step in this kind of analysis was made by Cao and Hwa , who first pointed out the importance of the fluctuation in event space of the event factorial moments defined as
$`F_q^{(\mathrm{e})}`$ $`=`$ $`{\displaystyle \frac{\frac{1}{M}\sum _{m=1}^{M}n_m(n_m-1)\cdots (n_m-q+1)}{\left(\frac{1}{M}\sum _{m=1}^{M}n_m\right)^q}}.`$ (3)
Its fluctuations from event to event can be quantified by its normalized moments as:
$$C_{p,q}=\langle \mathrm{\Phi }_q^p\rangle ,\qquad \mathrm{\Phi }_q=F_q^{(e)}/\langle F_q^{(e)}\rangle ,$$
(4)
and by $`dC_{p,q}/dp`$ at $`p=1`$:
$`\mathrm{\Sigma }_q=\langle \mathrm{\Phi }_q\mathrm{ln}\mathrm{\Phi }_q\rangle .`$ (5)
If there is a power law behavior of the fluctuation as the division number goes to infinity, or as the resolution $`\delta =\mathrm{\Delta }/M`$ becomes very small, i.e.,
$$C_{p,q}(M)\propto M^{\psi _q(p)},$$
(6)
then the phenomenon is referred to as erraticity . The derivative of exponent $`\psi _q(p)`$ at $`p=1`$
$$\mu _q=\frac{d}{dp}\psi _q(p)|_{p=1}=\frac{\mathrm{\Sigma }_q}{\mathrm{ln}M}.$$
(7)
describes the anomalous scaling property of fluctuation-width and is called entropy index.
The erraticity behaviour of multiparticle final states as described above has been observed in the experimental data of 400 GeV/$`c`$ pp collisions from NA27 . However, it has been shown that the single event factorial moment as defined in Eq.(3), using only the horizontal average over bins, cannot eliminate the statistical fluctuations well, especially when the multiplicity is low. A preliminary study shows that the experimentally observed phenomenon can be reproduced by using a flat probability distribution with only statistical fluctuations . This result is preliminary in the sense that it fixed the multiplicity to 9, while the multiplicity fluctuates in the experiment and has an average of $`\langle n_{\mathrm{ch}}\rangle =9.84`$ . Since the erraticity phenomenon is a kind of fluctuation in event space and depends strongly on the multiplicity, the fluctuation in event space of the multiplicity is expected to have an important influence on this phenomenon.
In this letter this problem is discussed in some detail. The negative binomial distribution will be used to fit the experimental multiplicity distribution . Putting the resulting multiplicity distribution into a flat-probability-distribution model, the erraticity behaviour is obtained and compared with the experimental data. The consistency of these two shows that the erraticity behaviour observed in the 400 GeV/$`c`$ pp collision data from NA27 is mainly due to statistical fluctuations.
The negative-binomial distribution is defined as
$$P_n=\left(\begin{array}{c}n+k-1\\ n\end{array}\right)\left(\frac{\overline{n}/k}{1+\overline{n}/k}\right)^n\frac{1}{(1+\overline{n}/k)^k},$$
(8)
where $`n`$ is the multiplicity, $`\overline{n}`$ is its average over the event sample, and $`k`$ is a parameter related to the second order scaled moment $`C_2\equiv \langle n^2\rangle /\langle n\rangle ^2`$ through
$$C_2-1=\frac{1}{\overline{n}}+\frac{1}{k}.$$
(9)
Using Eq. (8) to fit the multiplicity distribution of the 400 GeV/$`c`$ pp collision data from NA27, we get the parameter $`k=12.76`$. The result of the fit is shown in Fig. 1, and it can be seen that the fit is good.
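For completeness, a fit of this kind can be sketched as follows; the multiplicity sample below is a placeholder generated from the same distribution, since the NA27 data themselves are not reproduced here, and the maximum-likelihood estimator is one simple way of obtaining $`k`$.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def nbd_logpmf(n, nbar, k):
    """Logarithm of the negative binomial distribution of Eq. (8)."""
    return (gammaln(n + k) - gammaln(k) - gammaln(n + 1)
            + n * np.log(nbar / (nbar + k)) + k * np.log(k / (nbar + k)))

# Placeholder multiplicity sample standing in for the measured distribution.
rng = np.random.default_rng(3)
nch = rng.negative_binomial(12.76, 12.76 / (12.76 + 9.84), size=5000)

nbar = nch.mean()
# Maximum-likelihood fit of k with nbar fixed to the sample mean.
res = minimize_scalar(lambda k: -nbd_logpmf(nch, nbar, k).sum(),
                      bounds=(0.1, 100.0), method="bounded")
print(f"nbar = {nbar:.2f}, fitted k = {res.x:.2f}")
```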
Then we take a flat (pseudo)rapidity distribution, i.e. we let the probability for a particle to fall into each bin be equal to $`p_m=1/M`$ when the (pseudo)rapidity space is divided into $`M`$ bins. This means that there is no dynamical fluctuation.
Let the number $`N`$ of particles in an event be a random number distributed according to the negative binomial distribution Eq. (8) with $`\overline{n}=9.84,k=12.76`$. Put these $`N`$ particles into the $`M`$ bins according to the Bernoulli (multinomial) distribution
$$B(n_1,n_2,\mathrm{},n_M|p_1,p_2,\mathrm{},p_M)=\frac{N!}{n_1!\mathrm{}n_M!}p_1^{n_1}\mathrm{}p_M^{n_M},$$
(10)
$$\underset{m=1}{\overset{M}{}}n_m=N.$$
In total 60000 events are simulated in this way and the resulting $`C_{p,q}`$ are shown in Fig.2 together with the experimental data of 400 GeV/$`c`$ pp collisions from NA27. It can be seen from the figures that the model results are consistent with the data, showing that the erraticity phenomenon observed in this experiment is mainly due to statistical fluctuations.
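A compact Monte Carlo sketch of this procedure is given below; it draws negative-binomial multiplicities, distributes the particles uniformly over $`M`$ bins, and evaluates the event factorial moments, $`C_{p,q}`$ and $`\mathrm{\Sigma }_q`$ of Eqs. (3)-(5). The number of events, the bin number and the $`p`$ values are illustrative and are not tuned to reproduce the published figures.

```python
import numpy as np

def simulate_erraticity(n_events=20000, nbar=9.84, k=12.76, M=16, q=2, p_list=(2, 3, 4)):
    """Flat probability distribution plus NBD multiplicities -> C_{p,q}(M) and Sigma_q(M)."""
    rng = np.random.default_rng(1)
    N = rng.negative_binomial(k, k / (k + nbar), size=n_events)   # NBD with mean nbar
    Fq = np.empty(n_events)
    for i, n in enumerate(N):
        counts = np.bincount(rng.integers(0, M, size=n), minlength=M)   # flat distribution
        num = np.mean([np.prod([c - j for j in range(q)]) for c in counts])
        den = counts.mean() ** q
        Fq[i] = num / den if den > 0 else 0.0
    phi = Fq / Fq.mean()
    Cpq = {p: np.mean(phi ** p) for p in p_list}
    sigma_q = np.mean(phi * np.log(np.clip(phi, 1e-300, None)))
    return Cpq, sigma_q

Cpq, sigma2 = simulate_erraticity(M=16)
```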
In order to study the relation of the erraticity behaviour with the width of the multiplicity distribution, the same calculation has been done for the cases $`\overline{n}=9,k=0.1,0.5,1.0,2.25,4.5,9,18`$. These values of $`k`$ correspond to a diminishing width of the distribution, with $`C_2=11.1,3.11,2.11,`$ $`1.56,1.33,1.22,1.17`$ respectively, cf. Fig. 3. The resulting ln$`C_{p,2}`$ and $`\mathrm{\Sigma }_2`$ as functions of ln$`M`$ are shown in Fig. 4 and Fig. 5.
It can be seen from the figures that the moments $`C_{p,2}`$ for different $`p`$ separate farther and the characteristic function $`\mathrm{\Sigma }_2`$ becomes larger when the value of $`k`$ is smaller. This means that the single event factorial moments fluctuate more strongly in event space when the width of the multiplicity distribution is wider. On the other hand, the straight lines obtained from fitting the last three points of $`\mathrm{\Sigma }_2`$ versus ln$`M`$ are almost parallel for different $`k`$, and their slopes, the entropy indices $`\mu _2`$, which are the characteristic quantity of erraticity, are insensitive to the width of the multiplicity distribution.
In summary, the multiplicity distribution of 400 GeV/$`c`$ pp collision data from NA27 has been fitted to the negative binomial distribution. Taking this multiplicity distribution into account, the erraticity phenomenon in a model without any dynamical fluctuation, i.e. with a flat probability distribution, has been studied. The resulting moments $`C_{p,q}`$ turn out to fit the experimental data very well. This shows that the erraticity phenomenon observed in this experiment is mainly due to statistical fluctuations.
The dependence of the erraticity phenomenon on the width of the multiplicity distribution is examined. It is found that the fluctuation of the single event factorial moments in event space becomes stronger — $`C_{p,2}`$ and $`\mathrm{\Sigma }_2`$ become larger — when the width of the multiplicity distribution is wider. On the other hand, the entropy index $`\mu _2`$ depends mainly on the average multiplicity and is insensitive to the width of the multiplicity distribution.
Figure Captions
Fig.1 Fitting of the multiplicity distribution of 400 GeV/$`c`$ pp collision data to negative binomial distribution. Data taken from Ref..
Fig.2 The moments $`C_{p.2}`$ from a flat probability distribution model with the multiplicity distribution taken into account, as compared with the 400 GeV/$`c`$ pp collision data taken from Ref..
Fig.3 The negative binomial distribution with different values of parameter $`k`$. The average multiplicity is $`\overline{n}=9`$.
Fig.4 The dependence of ln$`C_{p,2}`$ on ln$`M`$ in the flat probability distribution model, taking the negative-binomial type multiplicity distribution into account. The parameter $`k`$ takes different values as shown in the figure. The average multiplicity is $`\overline{n}=9`$.
Fig.5 The dependence of $`\mathrm{\Sigma }_2`$ on ln$`M`$ in the flat probability distribution model, taking the negative-binomial type multiplicity distribution into account. The parameter $`k`$ takes different values as shown in the figure. The average multiplicity is $`\overline{n}=9`$.
no-problem/9907/astro-ph9907230.html
# Lithium depletion in open clusters
## 1 Introduction
Lithium is the only metal produced in significant quantities in the Big Bang. In principle, measurements of Li in old Population II stars yield the primordial Li abundance, which would (in conjunction with $`H_0`$) strongly constrain the universal baryon density and Big Bang nucleosynthesis models. Sadly, in addition to processes which create Li in the universe, there are mechanisms which lead to its destruction in stellar interiors via $`p,\alpha `$ reactions at only $`(2-3)\times 10^6`$ K. There is debate about whether the $`A(\mathrm{Li})`$ ($`=12+\mathrm{log}[N(\mathrm{Li})/N(\mathrm{H})]`$) value of 2.1-2.2 measured in Population II stars is almost undepleted from the primordial value, or whether the primordial value is closer to the $`A(\mathrm{Li})`$ of 3.3 measured in the youngest stars and solar system meteorites, and has been significantly depleted in Population II stars (Bonifacio & Molaro 1997, Deliyannis & Ryan 1997).
The former interpretation requires processes that increase the Galactic Li abundance by factors of 10 in $`5`$ Gyr, while the latter requires us to rethink the way that material is mixed in stellar interiors. Standard models, which incorporate only convective mixing, predict little ($`0.1`$ dex) Li depletion in Population II stars. However, many extensions to the standard model, incorporating non-standard mixing such as microscopic diffusion, turbulence induced by rotational instabilities, meridional circulation and gravitational waves have been proposed and developed in some detail by a number of groups (see Pinsonneault 1997 for a review).
Open clusters are excellent laboratories for investigating non-standard stellar physics relating to Li depletion. We can assume that we have co-eval groups of stars with very similar compositions and by choosing clusters with a range of ages and compositions we can hope to answer the questions posed in the abstract. The observational database for Li in open clusters has grown enormously in the last 10 years, thanks to sensitive detectors and the accessibility and strength of the Li i 6708Å resonance doublet upon which most abundance measurements are based. Table 1 gives a summary of these observations, listing clusters, ages and distances (from the Lyngå 1987 catalogue – treat with extreme caution!), the number and spectral-types of (main-sequence \[MS\] or pre-main sequence \[PMS\]) stars surveyed, with references.
## 2 Models of Li Depletion
Figure 2 in the review of Pinsonneault (1997) gives an overview of the Li depletion predictions of standard stellar evolution models, where convection (and some convective overshoot) is the only mixing mechanism. Quantitatively, models produced by various groups depend upon the details of the adopted convective treatment and atmospheric opacities (e.g. D’Antona & Mazzitelli 1994). Qualitatively, there is general agreement that (a) In G and K stars Li depletion occurs mainly during the PMS phase with hardly any depletion on the main sequence for stars hotter than 5000 K. This is caused by the growth of the radiative zone, pushing the base of the convection zone (CZ) outward to temperatures too cool to burn Li. (b) Li depletion is strongly dependent on metallicity. A high metallicity leads to greater opacity, deeper CZs with higher base temperatures and hence greater Li depletion. A 0.2 dex change in mean metallicity should lead to an order of magnitude change in the PMS Li depletion at 5000 K.
Models incorporating non-standard mixing modes predict Li depletion in addition to that provided by PMS convection. For instance, Chaboyer, Demarque & Pinsonneault (1995) show that in G and K stars there is little extra depletion during the PMS phase and then depletion continues during the MS phase driven by instabilities associated with rotation and angular momentum loss. The faster rotators on the ZAMS are predicted to have a higher depletion rate. There is also likely to be some metallicity dependence as well, because the distance between the CZ base and where Li can be burned will be important. Other mechanisms such as microscopic diffusion are expected to be much less important in these stars with relatively deep convective envelopes.
## 3 The Pleiades and Hyades
The two best studied open clusters are the Pleiades and Hyades, with ages of about 100 and 600 Myr, and consistently determined spectroscopic iron abundances of \[Fe/H\]=$`-0.034\pm 0.024`$ and $`+0.127\pm 0.022`$ (Boesgaard & Friel 1990, Friel & Boesgaard 1992). Figure 1 shows Li abundances (determined using the same temperature scale and curves of growth) for these clusters using data on single stars gleaned from the sources in Table 1. Also shown are standard Li depletion models for two mean metallicities. The general pattern of Li depletion in the Pleiades is modelled reasonably well. The Hyades is more metal rich than the Pleiades, so we expect more PMS Li depletion, although not nearly as much as is observed. Two classes of solution can be put forward to explain this discrepancy. (a) Swenson, Stringfellow & Faulkner (1990) show that increasing interior opacities by modest amounts could bring standard models into agreement with the Hyades data. Such arguments do not explain why short-period, tidally locked binary systems in the Hyades are much less Li depleted than their single counterparts (Thorburn et al. 1993, Barrado y Navascués & Stauffer 1996). (b) Extra mixing whilst on the MS, driven by rotation and angular momentum loss seems capable of providing the additional Li depletion with a natural explanation for why the Li depletion in tidally locked binaries might be different (Chaboyer et al. 1995).
Standard models also struggle to explain spreads in Li abundance among late G and K-type Pleiades stars. The scatter appears to be correlated with rotation, although a more detailed consideration (e.g. Randich et al. 1998) shows that the correlation in both the Pleiades and $`\alpha `$ Per clusters is driven largely by the fact that fast rotating stars have suffered little Li depletion, whereas slowly rotating stars can have either high or low Li abundances (see Figure 2). There are some indications that this dispersion may decrease again at surface temperatures below 4500 K (Jones et al. 1996). One interpretation would be to invoke non-standard mixing during the PMS phase and the disk coupling paradigm for early angular momentum evolution (Bouvier et al. 1997). Slow rotators might suffer little extra mixing because they are born slow rotators and lose little angular momentum, or they could be born as fast rotators and lose considerable angular momentum by coupling to a long-lived circumstellar disk and consequently undergo greater mixing and Li depletion. Stars which are still fast rotators on the ZAMS would have been only briefly coupled to a disk, would not have lost significant angular momentum and suffered less internal mixing. The problem with this explanation may be that insufficient extra mixing associated with angular momentum can take place on the PMS, and that fast rotators in the Pleiades have Li abundances that lie above even the standard model predictions.
Adherents to the standard models could appeal to small metallicity variations between cluster stars or to the possibility that atmospheric inhomogeneities such as plages or starspots could cause a scatter in the equivalent widths of Li i lines at a given $`BV`$ value. This latter explanation has been reviewed by Stuik, Bruls & Rutten (1997), who make a plausible case for considering such effects and point out that the similarly formed K i 7699Å line shows a nearly equivalent scatter in Pleiades stars. As K abundance variations are not expected, then the scatter in K i equivalent widths at a given colour means that it is premature to ascribe the apparent Li abundance variation in late-type Pleiads to non-standard processes.
## 4 Metallicity, age and Li depletion
A natural question to ask is whether the Li depletion pattern in the Hyades, when it was younger, looked like that in the Pleiades now. Standard models predict that the Hyades would look about the same as they do now because all the depletion occurred during PMS evolution (for $`T_{\mathrm{eff}}\ge 5000`$ K). Non-standard models predict a level of Li depletion somewhere between the present day Hyades and Pleiades levels, due to 500 Myr of non-standard MS mixing. Similarly, non-standard models predict that if the Pleiades were aged to about 600 Myr, the Li depletion pattern should lie between the present day Pleiades and Hyades because of reduced PMS Li depletion in the metal-poor Pleiades, followed by somewhat less efficient MS Li depletion than in the Hyades because of shallower CZs at a given $`T_{\mathrm{eff}}`$. These are very clear predictions. To test them simply requires Li abundance measurements in the G and K stars of a cluster at the age of the Hyades, but with the metallicity of the Pleiades, and vice-versa. These data now exist in the form of Li abundances in the Blanco 1 and Coma Berenices open clusters.
### 4.1 Blanco 1
Jeffries & James (1999) present Li abundances for G and K stars in Blanco 1, a young cluster (age 70 Myr) with a spectroscopically determined iron abundance of \[Fe/H\]=+0.14, when derived using the same colour-$`T_{\mathrm{eff}}`$ scale as used for other young clusters. Figure 3 presents the Li abundances of late-type stars in Blanco 1 compared with the Pleiades and Hyades. Clearly the Blanco 1 Li abundances are indistinguishable from those in the Pleiades and much higher than in the Hyades.
These observations present problems for both standard and non-standard Li depletion models. If the Hyades looked like Blanco 1 in the past then non-standard MS mixing and Li depletion is clearly indicated, because the stars in Blanco 1 should evolve to look like the Hyades in $`500`$ Myr, offering useful empirical constraints on the timescale for the mixing mechanisms. However, because non-standard models predict extra depletion compared with the standard models, an additional ingredient is required to explain why Blanco 1 has not suffered significantly more initial PMS Li depletion than the Pleiades, given its higher metallicity.
### 4.2 The Coma Berenices Open Cluster
The sparse Coma Berenices open cluster (age 500 Myr) has \[Fe/H\]=$`-0.052\pm 0.026`$, determined in a way rigorously consistent with the Pleiades and Hyades values already quoted (Friel & Boesgaard 1992). Li abundances for G and K stars are presented by Jeffries (1999) and supplemented with a few more observations by Ford et al. (1999 - A&A submitted). The data for single stars are also shown in Figure 3. The Li depletion pattern for Coma Ber is very similar to that in the Hyades, with perhaps a hint of less Li depletion for stars cooler than 5700 K. Again, both standard and non-standard models have problems explaining these observations. The standard models would have that the Li depletion in Coma Ber, which occurred during PMS evolution, should be similar to or even less than that in the Pleiades. The extra depletion observed could be supplied by non-standard mixing (on timescales that agree very well with the Hyades-Blanco 1 comparison), but it is then hard to see why Coma Ber and the Hyades should be so close at the present day, unless the PMS Li depletion was not metallicity dependent and both clusters started out on the ZAMS with similar depletion patterns – as indicated by the Pleiades and Blanco 1 datasets.
### 4.3 Other clusters
To these two examples could be added Li abundance datasets for IC 2391/2602, $`\alpha `$ Per, IC 4665 and NGC 2516 (see Table 1). These clusters are either a little younger or a little older than the Pleiades and probably have a wide (albeit ill determined) range of metallicities. Yet the G and K stars in these clusters have Li depletion patterns very close to that in the Pleiades. Similarly, Praesepe and NGC 6633 have ages close to that of the Hyades, probably lower metallicities, yet show almost the same Li depletion pattern as the Hyades. There is perhaps some evidence in NGC 6633 that the K stars have not suffered quite as much depletion as in the Hyades, but they are significantly more depleted than the Pleiades (Jeffries 1997). There are also clusters with intermediate ages (NGC 1039, NGC 6475, 200-300 Myr) which show intermediate Li depletion patterns.
The global cluster dataset is clearly telling us that metallicity is not an important parameter in determining the amount of PMS Li depletion, which flatly contradicts the predictions of standard models. Non-standard mixing processes acting during MS evolution are required in order to rank the cluster Li depletion patterns according to age. There appear to be no significant exceptions to this trend. The only ways of rescuing the conventional view of standard models are to either abandon the idea that one cluster is representative of clusters at the same age and composition, or assume that \[Fe/H\] is not representative of the overall metallicity of these clusters. Swenson et al. (1994) have shown that abundances of elements such as O and Si are important in determining CZ depth and PMS Li depletion. Detailed abundance analyses of key clusters are required to check that we are not seeing the effects of drastically non-solar abundance ratios, however this explanation would seem to require an unlikely conspiracy of circumstances, given the number of observed clusters.
For clusters with greater than solar metallicity, arriving on the ZAMS with similar Li depletion patterns to the Pleiades, a mechanism is indicated that severely reduces the predicted efficiency of Li depletion on the PMS. This requirement can be extended to lower metallicity clusters and is even more extreme if standard models incorporating the full spectrum of turbulence convection model are considered (Ventura et al. 1998). It has been suggested that structural changes associated with rapid rotation might do this job (Martín & Claret 1996) and at the same time, explain the Li abundance scatter in late-type Pleiades stars. Recently, Mendes, D’Antona & Mazzitelli (1999) have shown that the effects of rapid rotation might actually be in the opposite sense required and in any case, even the slow rotators in Blanco 1 have similar Li abundances to analogous stars in the Pleiades. Ventura et al. (1998) hypothesize that dynamo generated magnetic fields could steepen the adiabatic temperature gradient sufficiently to alter CZ properties and significantly diminish Li depletion. Stronger magnetic fields and less Li depletion would be expected in fast rotators, possibly matching observations in the Pleiades, Blanco 1 and other ZAMS clusters. At present this model is very crude, but the work of Ventura et al. shows that the size of the effect might certainly be enough to explain the lack of PMS Li depletion and its near independence of metallicity.
## 5 Older clusters
As the case for non-standard Li depletion has been made convincingly for younger clusters it is natural to ask how observations of older clusters might delineate the mechanisms and timescales responsible for the extra mixing. The Hyades-Blanco 1 and Pleiades-Coma Ber comparisons indicate a Li depletion rate of about 300-500 Myr per dex for ZAMS K-stars, and perhaps a factor $`2-3`$ slower in G-stars. If the Sun were taken as representative of a star of its age, the depletion rate in early G stars must average out to $`2`$ dex of depletion in 4 Gyr.
Li abundances in a good sample of old open clusters would constrain these timescales. Unfortunately old open clusters are relatively rare and tend to be distant. Furthermore, the K-stars have probably depleted Li beyond detection (although strong upper limits would be useful). Table 1 summarises the observational state of play. The best studied old open cluster is the solar-age M67. The data presented in Jones, Fischer & Soderblom (1999) and Pasquini, Randich & Pallavicini (1997) show an order of magnitude scatter in the Li abundances of solar-type stars at this age, and significant depletion with respect to standard model PMS Li depletion predictions. The solar Li abundance is positioned towards the lower end of the distribution.
That Li is detected at all in 4.5 Gyr old solar-type stars probably indicates that Li depletion slows from an initially higher rate on the ZAMS. This would certainly be expected for mixing mechanisms that were driven by a slowly declining rate of rotation and angular momentum loss. Jones et al. (1999) ascribe the spread in Li abundances to non-standard mixing in stars with a spread in initial ZAMS rotation rates. The abundance spread must develop over several Gyr, because the Pleiades and Hyades G stars show only marginal signs of this spread at younger ages (Thorburn et al. 1993). The stars with initially higher rotation rates would then be those with the lowest Li abundances in M67 and vice-versa (reversing the trend seen in Pleiades K stars!). The circumstantial evidence for this, is that the proportion of low and high Li abundances in M67 approximately matches the proportions of fast and slow rotators in the Pleiades.
This intriguing notion needs bolstering with measured rotation rates in M67 (although rotation rates may well have converged to be indistinguishable). If the scenario could be confirmed, then Jones et al. (1999) speculate that the low Li abundance of the Sun indicates that it was rapidly rotating on the ZAMS. This may still be premature because M67 has a slightly sub-solar metallicity. We lack the evidence to say by how much metallicity affects non-standard mixing on long timescales, but if higher metallicities enhance MS Li depletion, then the Sun may yet turn out to have a high Li abundance for its age. This could be addressed by observations of several older clusters and would be important in understanding how much prior depletion has occurred in very metal-poor Population II stars.
## 6 Conclusions
I end by attempting to briefly answer the original questions in the abstract. It is clear from the evidence reviewed that standard stellar evolution models struggle to explain the patterns of Li depletion seen in open clusters. Furthermore, observations of clusters with different metallicities provide difficulties for current non-standard models. There are strong indications that PMS Li depletion is not as strong as predicted in either class of model. This has not yet been widely recognized and hence explanations are so far rather speculative.
There are many pieces of evidence that non-standard mixing and Li depletion are important during MS evolution. These include the Hyades-Blanco 1 and Pleiades-Coma Ber comparisons, where the confusing factor of metallicity dependent PMS Li depletion has been removed, the general ordering of cluster Li depletion according to age and the strong depletion seen among older clusters and the Sun. The timescales for MS Li depletion are longer than PMS Li depletion timescales but are still uncertain. The current observational evidence suggests that the MS depletion timescales are shorter for K stars than G stars and may get longer as stars spin down.
Metallicity appears not to play a great role in PMS Li depletion, contradicting expectations. Abundance analyses are required for O and Si to see whether CZ depth is affected by non-solar abundance ratios, although the number of clusters in the extant dataset makes this possibility unlikely. If metallicity is not important for PMS Li depletion, then one of the major uncertainties in using Li abundances to date young stars is removed. The other is the scatter in abundances seen at a given age, which inevitably introduces uncertainties that can be well quantified by comparison with cluster datasets. Thus although using Li abundances to age young stars might be relatively inaccurate, depending on the spectral-type of star considered, the uncertainties can at least be empirically determined. It is very difficult however to date older stars using Li abundances because (a) they also show a scatter in Li abundance that develops with age, (b) stars cooler than G-type won’t have detectable Li once older than $`1`$ Gyr and (c) we still don’t know whether metallicity greatly affects the efficiency of MS Li depletion.
New observations could be made which would clarify a number of these issues. Detailed abundance analyses could be performed for all the key open clusters to check for non-solar abundance ratios. Li abundance measurements in several more old open clusters might betray any metallicity dependence of MS depletion timescales. Measuring rotation periods in many more cluster stars with Li abundances, including the slower rotators in older clusters where these measurements tend to be much more difficult, would allow further investigation of how depletion timescales depend on rotation rates. The connection between Li depletion and rotation in cool ZAMS stars is still far from resolved and may yet turn out to be problems in our understanding of inhomogeneous stellar atmospheres. In that respect, the connection between rotation, surface inhomogeneities, Li i, and K i equivalent width spreads needs to be carefully investigated, possibly using doppler tomographic techniques (e.g. Hussain, Unruh & Collier-Cameron 1998).
###### Acknowledgements.
The author would like to thank the staff at the Isaac Newton Group of Telescopes and the Anglo Australian Observatory for their assistance during the course of several observing campaigns.
no-problem/9907/astro-ph9907265.html
# Broad-band X-ray measurements of GS 1826-238
## 1 Introduction
A decade after its discovery (Makino et al. 1989), the nature of the compact object in the X-ray binary GS 1826-238 has finally been established. Monitoring observations with the Wide Field Cameras (WFCs) on BeppoSAX revealed the source to be a regular source of type I X-ray bursts which are explained as thermonuclear runaway processes on the hard surface of a neutron star (Ubertini et al. 1997, 1999). Previously, the nature was under debate because the X-ray emission exhibited characteristics that were until recently suspected to be solely due to black hole candidates (Tanaka 1989).
An optical counterpart has been identified (Motch et al. 1994, Barret et al. 1995) which classifies the binary as a low-mass X-ray binary (LMXB). Later this counterpart was found to exhibit optical bursts and a modulation which is likely to have a periodicity of 2.1 h (Homer et al. 1998). If the latter is interpreted to be of orbital origin, it would imply a binary that is compact among LMXBs.
GS 1826-238 appears unusual among LMXBs. First, X-ray flux measurements after its 1989 discovery are fairly constant (In ’t Zand 1992, Barret et al. 1995) at a level of approximately $`6\times 10^{-10}`$ erg cm<sup>-2</sup>s<sup>-1</sup> in 2 to 10 keV. Second, the WFC measurements reveal a strong regularity in the occurrence of type I X-ray bursts for an unusually long time (Ubertini et al. 1999). These two facts are very probably related. The constant flux is indicative of a stable accretion of matter on the neutron star which fuels regularly ignited thermonuclear explosions that give rise to X-ray bursts.
GS 1826-238 has a hard spectrum; the initial Ginga observations measured a power law spectrum with a photon index of 1.8 (Tanaka 1989). This makes it particularly important to study the spectrum in a broad photon energy range. Strickman et al. (1996) have attempted this by combining the early Ginga 1-40 keV data with 60-300 keV OSSE data taken in 1994. Del Sordo et al. (1998) have performed a preliminary study of the 0.1-100 keV data taken with the narrow-field instruments (NFI) on board BeppoSAX in October 1997. In the present paper, we study data taken with the same instrumentation half a year before that. The primary purpose of this study is to accurately analyze the flux of the persistent emission as well as that of two X-ray bursts. Also, we study the variability of the 2 to 10 keV emission.
## 2 Observations
The NFI include the Low-Energy and the Medium-Energy Concentrator Spectrometer (LECS and MECS, see Parmar et al. 1997 and Boella et al. 1997 respectively) with effective bandpasses of 0.1-10 and 1.8-10 keV, respectively. Both are imaging instruments. The MECS was used in the complete configuration of three units (unit 1 failed one month after the present observation). The other two NFI are the Phoswich Detector System (PDS; active between $`12`$ and 300 keV; Frontera et al. 1997) and the High-Pressure Gas Scintillation Proportional Counter (HP-GSPC; active between 4 and 120 keV; Manzo et al. 1997).
A target-of-opportunity observation (TOO) was performed with the NFI between April 6.7 and 7.2, 1997 UT (i.e., 40.8 ks time span). The trigger for the TOO was the first recognition that the source was bursting (Ubertini et al. 1997). The net exposure times are 8.2 ks for LECS, 23.1 ks for MECS, 18.0 ks for HP-GSPC and 20.5 ks for PDS. GS 1826-238 was strongly detected in all instruments and two roughly 150 s long X-ray bursts were observed.
We applied extraction radii of 8′ and 4′ for photons from LECS and MECS images, encircling at least $`95`$% of the power of the instrumental point spread function, to obtain lightcurves and spectra. Long archival exposures on empty sky fields were used to define the background in the same extraction regions. These are standard data sets made available especially for the purpose of background determination. All spectra are rebinned so as to sample the spectral full-width at half-maximum resolution by three bins and to accumulate at least 20 photons per bin. The latter will ensure the applicability of $`\chi ^2`$ fitting procedures. A systematic error of 1% is added to each channel of the rebinned LECS and MECS spectra, to account for residual systematic uncertainties in the detector calibrations (e.g., Guainazzi et al. 1998). For spectral analyses, the bandpasses were limited to 0.1–4.0 keV (LECS), 2.2–10.5 keV (MECS), 4.0–30.0 keV (HP-GSPC) and 15–200 keV (PDS) to avoid photon energies where the spectral calibration of the instruments is not yet complete. In spectral modeling, an allowance was made to leave free the relative normalization of the spectra from LECS, PDS and HP-GSPC to that of the MECS spectrum, to accommodate cross-calibration uncertainties in this respect. Use was made of the publicly available response matrices (version September 1997).
## 3 The persistent emission
Fig. 1 shows the lightcurve of the persistent emission of GS 1826-238 in various bandpasses. On time scales of a few hundred seconds, the flux appears constant except for immediately after the occurrence of the two bursts. We searched for a modulation on time scales of about 2.1 h in the MECS data (which are the most sensitive) and find none. The $`3\sigma `$ upper limit on the semi amplitude is 1.6%. Thus, we cannot, in X-rays, confirm the optical modulation which had a semi amplitude of 6%. The power spectrum of the same data (excluding the burst intervals) is shown in Fig. 2. A broken power law function was fitted to these data. The Poisson level, which is a free parameter, has been subtracted in Fig. 2. Formally, the fit is unacceptable ($`\chi ^2=134`$ for 72 dof). This may be due to narrow features at 0.2-0.3 Hz and 1-2 Hz but the statistical quality of the data do not allow a detailed study of those. The break frequency of the broken power law is $`0.115\pm 0.011`$ Hz, the power law index is $`0.07\pm 0.10`$ below and $`1.02\pm 0.12`$ above the break frequency. The high-frequency index is consistent with the index found from Ginga data taken in 1988 between 0.1 and 500 Hz (Tanaka 1989). The integrated rms power of the noise between 0.002 and 10 Hz is $`20\pm 2`$%, that for the Ginga data between 0.02 and 500 Hz is $`30`$% (Barret et al. 1995). If one assumes the same break frequency for the Ginga data, we expect an rms of 17% for these between 0.002 and 10 Hz which is very similar to the MECS result. The break frequency is comparable with values found for LMXB atoll sources and is one order of magnitude below values found for the bright LMXB Z sources (see Wijnands & Van der Klis 1999). We are unable to assess the history or variability of the low-frequency index or break frequency.
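The kind of timing fit described here can be sketched as follows; the periodogram below is simulated from the quoted best-fit shape (break at 0.115 Hz, slopes 0.07 and 1.02) plus a constant Poisson level and then refitted, so it stands in for the real MECS power spectrum rather than reproducing it.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_powerlaw(f, norm, f_break, a_low, a_high, poisson):
    """Constant Poisson level plus a broken power law in frequency."""
    pl = np.where(f < f_break, (f / f_break) ** (-a_low), (f / f_break) ** (-a_high))
    return norm * pl + poisson

# Fake Leahy-like periodogram standing in for the MECS power spectrum.
rng = np.random.default_rng(4)
freq = np.geomspace(2e-3, 10.0, 400)
model = broken_powerlaw(freq, 8.0, 0.115, 0.07, 1.02, 2.0)
power = model * rng.chisquare(2, size=freq.size) / 2.0      # chi^2_2 scatter of a periodogram

popt, pcov = curve_fit(broken_powerlaw, freq, power,
                       p0=[5.0, 0.1, 0.0, 1.0, 2.0], maxfev=20000)
print("break = %.3f Hz, index below/above = %.2f / %.2f" % (popt[1], popt[2], popt[3]))
```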
A broad-band spectrum was accumulated, averaged over the complete observation except the burst intervals, making use of the LECS, MECS, HP-GSPC, and PDS data. The spectrum was fitted with two models: black body radiation with unsaturated Comptonization (Titarchuk 1994), a model which is rather successful in describing other low-luminosity LMXBs as well (e.g., Guainazzi et al. 1998, In ’t Zand et al. 1999); and black body radiation plus a power law component with an exponential cut off, the model used by Del Sordo et al. (1998) for NFI data below 100 keV on GS 1826-238. The results are given in Table 1; a graph is shown in Fig. 3 for the Comptonized model.
A power law fit to the 60-150 keV PDS data is acceptable ($`\chi _\mathrm{r}^2=0.7`$ for 4 dof) and reveals a photon index of $`3.3\pm 0.4`$ which is close to that found for the 60-300 keV OSSE data by Strickman et al. (1996) of $`3.1\pm 0.5`$.
The average 0.1 to 200 keV flux is $`f_{0.1-200\mathrm{keV}}=(1.93\pm 0.10)\times 10^{-9}`$ erg cm<sup>-2</sup>s<sup>-1</sup>.
We compare the results with those obtained by Del Sordo et al. (1998) on NFI data taken half a year later on the same source. Del Sordo et al. find for the black body temperature $`0.94\pm 0.05`$ keV, for the cut off energy $`49\pm 3`$ keV, for the power law index $`1.34\pm 0.04`$, and for $`N_\mathrm{H}`$ a value of $`4.6\times 10^{21}`$ cm<sup>-2</sup>. These values are consistent with ours. Furthermore, Del Sordo et al. (1998) quote a 2 to 10 keV flux of $`5.6\times 10^{-10}`$ erg cm<sup>-2</sup>s<sup>-1</sup> which is only about 4% larger than what we find. This indicates that the flux and spectrum of GS 1826-238 did not change substantially over half a year.
The optical counterpart is reported to exhibit $`E_{B-V}=0.4\pm 0.1`$ (Motch et al. 1994, Barret et al. 1995). It follows that $`A_\mathrm{V}=1.2\pm 0.3`$ and $`N_\mathrm{H}=(2.2\pm 0.5)\times 10^{21}`$ cm<sup>-2</sup> (according to the conversion of $`A_\mathrm{V}`$ to $`N_\mathrm{H}`$ by Predehl & Schmitt 1995). An interpolation from the HI maps in Dickey & Lockman (1990) reveals the same value for $`N_\mathrm{H}`$. This value is inconsistent with the values for the two models of the NFI-measured spectrum (Table 1). We tried to accommodate $`2.2\times 10^{21}`$ cm<sup>-2</sup> with these models. If $`N_\mathrm{H}`$ is frozen and the other parameters are left free, the Comptonized model remains a better description of the data with $`\chi _\mathrm{r}^2=1.498`$ (133 dof) than the black body plus cut-off power-law model with $`\chi _\mathrm{r}^2=3.358`$ (138 dof). The values of the other parameters in the Comptonized model are within the error margins as indicated in Table 1 except for $`kT_\mathrm{W}`$ which is marginally different at $`0.496\pm 0.004`$ keV. Nevertheless, $`\chi _\mathrm{r}^2=1.498`$ is an unacceptable fit.
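The conversion chain used here, from $`E_{B-V}`$ to $`A_\mathrm{V}`$ to $`N_\mathrm{H}`$, is simple enough to write out; the sketch below assumes $`R_\mathrm{V}=3.1`$ and the Predehl & Schmitt (1995) scaling of $`1.79\times 10^{21}`$ cm<sup>-2</sup> per magnitude of visual extinction, consistent with the numbers quoted above.

```python
def nh_from_ebv(ebv, ebv_err, rv=3.1):
    """E(B-V) -> A_V (assuming R_V = 3.1) -> N_H via the Predehl & Schmitt (1995) relation."""
    av, av_err = rv * ebv, rv * ebv_err
    nh, nh_err = 1.79e21 * av, 1.79e21 * av_err   # cm^-2
    return av, av_err, nh, nh_err

av, dav, nh, dnh = nh_from_ebv(0.4, 0.1)
print(f"A_V = {av:.1f} +/- {dav:.1f} mag, N_H = {nh:.1e} +/- {dnh:.1e} cm^-2")
```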
## 4 The burst emission
Figs. 4 and 5 show the time profiles of the two bursts in a number of bandpasses from MECS and PDS data at a time resolution of 1 s. There are no observations of the bursts with the LECS and we omit HP-GSPC data since this instrument has an energy range which overlaps that of the others. As far as can be judged (there are data gaps, probably due to telemetry overflow), the profiles are clean fast-rise exponential-decay shapes. The e-folding decay times per bandpass (see Table 2) are identical for both bursts. They are also long though not unprecedented, if compared to many other bursters. Furthermore, the rise time of the bursts is relatively large (5 to 8 s).
Each burst was divided into five time intervals (see Figs. 4 and 5). Relative to the peak time, the intervals are equal except for the first interval. The last interval of each burst covers 1000 s to study the slow decay of the flux to the persistent level. The persistent emission was not subtracted in these spectra while the background was. We fitted the MECS spectra in these intervals and in the non-burst data with a black body radiation model with different temperatures plus a power law function whose shape (i.e., photon index) is frozen over all intervals. Furthermore, a single level of interstellar plus circumstellar absorption was fitted to all data through $`N_\mathrm{H}`$. PDS 15-30 keV data were included for the rise and first two decay time intervals of each burst, as well as for the non-burst times up to 50 keV. The fit was reasonable with $`\chi _\mathrm{r}^2=1.24`$ for 470 dof. Results of the fit are given in Table 3. For illustrative purposes, a graph is presented in Fig. 6 of the photon count rate spectra for 2 intervals of the second burst and the non-burst data.
This modeling of the burst spectral evolution shows that during the brightest parts of the bursts (between 0 and 113 s after the burst peaks) the black body radius remains constant within an error margin of roughly 10% while the temperature decreases from 2 to 1 keV. There is no evidence for photospheric expansion.
The photon count rate of the black body radiation in the PDS should be negligible above 30 keV (i.e., the count rate in 30-60 keV is about $`6\times 10^{-3}`$ times that in 12 to 30 keV, for a black body with $`kT=2.2`$ keV). However, as can be seen in Fig. 5, there is substantial burst emission between 30 and 60 keV. In fact, the average 30-60 keV photon count rate of the burst in the first 11 s after the burst peak is of order half that in 12-30 keV. This suggests that the burst emission may be Comptonized like the persistent emission, although we are not able to verify that spectrally due to insufficient statistics.
## 5 Discussion
The thermal nature of the burst spectra, with temperatures of a few keV and cooling, is typical for a type I X-ray burst (e.g., Lewin et al. 1995, and references therein). Such a burst is thought to be due to a thermonuclear ignition of helium accumulated on the surface of a neutron star. The unabsorbed bolometric peak flux of the black body radiation is estimated at $`(2.7\pm 0.5)\times 10^{-8}`$ erg cm<sup>-2</sup>s<sup>-1</sup>. This translates into a peak luminosity of $`(3.3\pm 0.6)\times 10^{38}d_{10\mathrm{kpc}}^2`$ erg s<sup>-1</sup>. Since we do not find evidence for photospheric expansion in the bursts, this indicates that the burst peak luminosity is below the Eddington limit, which is $`1.8\times 10^{38}`$ erg s<sup>-1</sup> for a 1.4 $`M_{\odot }`$ neutron star. Therefore, we expect the distance to be smaller than $`7.4\pm 0.7`$ kpc, or about 8 kpc. Barret et al. (1995) infer from the photometry of the optical counterpart that the lower limit to the distance is 4 kpc. The relatively low galactic latitude of the source ($`-6\stackrel{\circ }{.}1`$) does not provide better constraints on the distance. For a distance between 4 and 8 kpc, the X-ray luminosity is between $`3.5\times 10^{36}`$ and $`1.4\times 10^{37}`$ erg s<sup>-1</sup>, which are fairly typical values for LMXB X-ray bursters.
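The distance bound quoted above follows from elementary bookkeeping, repeated below as a check. The only inputs are the unabsorbed bolometric peak flux, the Eddington luminosity adopted for a 1.4 M<sub>⊙</sub> neutron star, and 1 kpc = 3.086×10<sup>21</sup> cm; variable names are ours.

```python
import numpy as np

kpc = 3.086e21                                         # cm
F_peak = 2.7e-8                                        # erg cm^-2 s^-1, peak flux
L_at_10kpc = 4.0 * np.pi * (10.0 * kpc) ** 2 * F_peak  # ~3.3e38 erg/s
L_Edd = 1.8e38                                         # erg/s, 1.4 Msun neutron star
d_max = 10.0 * np.sqrt(L_Edd / L_at_10kpc)             # ~7.4 kpc
print(f"L(10 kpc) = {L_at_10kpc:.2e} erg/s, d_max = {d_max:.1f} kpc")
```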
The two bursts presented here show traces of Comptonization, like in X 1608–52 (Nakamura et al. 1989), 1E1724–308 in Terzan 2 (Guainazzi et al. 1998) and SAX J1748.9–2021 in NGC 6440 (In ’t Zand et al. 1999). The PDS time profiles suggest that the level of Comptonization matches the burst quite closely. The time delay is less than a few seconds which suggests that the Comptonizing cloud is within $`10^{11}`$ cm. Unfortunately, this is not a strong constraint because a 2 h binary orbit implies a system size perhaps one order of magnitude smaller than that.
Many parameters of the two bursts are equal within narrow error margins: the durations within $`0.8\pm 2.8`$%, the peak temperatures within $`7\pm 3`$%, the peak emission areas within $`2\pm 5`$%, and the bolometric fluences within $`6\pm 6`$%. This suggests that the physical circumstances for triggering the bursts (i.e., the neutron star surface temperature and the composition of accreted matter) are the same on the two occasions and, together with the prolonged regular bursting and constant persistent flux as measured with WFC, testifies to a rather strong stability of the accretion process. This suggests a stable accretion disk. To what extent this is uncommon among low-luminosity LMXBs remains to be seen; knowledge of such LMXBs is as yet incomplete.
The broad-band spectral measurements of the persistent as well as burst emission enable a fairly accurate determination of $`\alpha `$, which is defined as the ratio of the bolometric fluence of the persistent emission between two bursts to that of the latter burst. The time between the two bursts is 23,007 s. We are confident that no bursts were missed during the data gaps because this time is consistent with the quasi-periodicity of the burst recurrence as found from near-simultaneous WFC observations, with period 5.8 hr and full-width at half maximum of 0.4 hr (Ubertini et al. 1999). Of all 70 WFC-detected bursts from GS 1826-238, no two were closer to each other than 19,238 s. The fluence of the second burst is $`(7.6\pm 0.5)\times 10^{-7}`$ erg cm<sup>-2</sup>. The constant persistent emission implies a bolometric fluence between the two bursts of $`(4.14\pm 0.23)\times 10^{-5}`$ erg cm<sup>-2</sup>. Therefore, $`\alpha =54\pm 5`$. This confirms the value found from the WFC analysis ($`60\pm 7`$, Ubertini et al. 1999).
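The $`\alpha `$ value is a straightforward ratio of fluences; the snippet repeats that arithmetic with the numbers quoted above (propagation of the quoted uncertainties is omitted).

```python
t_wait = 23007.0               # s, time between the two observed bursts
fluence_persistent = 4.14e-5   # erg cm^-2, persistent bolometric fluence over t_wait
fluence_burst = 7.6e-7         # erg cm^-2, bolometric fluence of the second burst
alpha = fluence_persistent / fluence_burst
print(f"alpha = {alpha:.0f}")  # ~54, consistent with the WFC value of 60 +/- 7
```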
###### Acknowledgements.
We thank the BeppoSAX team at Nuova Telespazio (Rome) for planning and carrying out the observation presented here. BeppoSAX is a joint Italian and Dutch program.
Figure 1: SPQB device with Andreev probe; the area $`S_q`$ of the “intrinsic” loop of the qubit is equal to the area $`S_f`$ of the NS-QUID loop.

# Andreev Spectroscopy for Superconducting Phase Qubits

M.V. Feigel’man<sup>1</sup>, V.B. Geshkenbein<sup>1,2</sup>, L.B. Ioffe<sup>1,3</sup>, and G. Blatter<sup>2</sup>

<sup>1</sup> L. D. Landau Institute for Theoretical Physics, Moscow 117940, Russia
<sup>2</sup> Institut für Theoretische Physik, ETH-Hönggerberg, CH-8093, Switzerland
<sup>3</sup> Department of Physics, Rutgers University, Piscataway, NJ 08855, USA
Solid state implementations of qubits are challenging, as macroscopic devices involve a large number of degrees of freedom and thus are difficult to maintain in a coherent state. This problem is less acute in designs based on superconducting (SC) electronics, which can be divided into two broad classes: the “charge” qubits encode different states through the charge trapped on a SC island, while in the “phase” qubits the states differ *mostly* by the value of the phase $`\phi `$ of a superconducting island in a low-inductance SQUID loop. As a consequence of long-range Coulomb forces, the charge qubit interacts strongly with the environment and with other qubits. By contrast, the phase qubit is more effectively decoupled from the environment (a *pure* phase qubit with states differing *only* by the value of $`\phi `$ has practically zero interaction with the environment). Although pure phase qubits can be fabricated from $`d`$-wave superconductors, the task is technologically demanding. Here, we concentrate on the simplest version of a superconducting phase qubit (SPQB), a Josephson loop made from a few submicron-sized SC islands connected via similar Josephson junctions and placed in a frustrating magnetic field; such a device with 4 junctions is sketched in Fig. 1. Two degenerate states naturally appear in such a loop if the flux $`\mathrm{\Phi }_q`$ of the external field through the qubit loop is exactly $`\mathrm{\Phi }_0/2=hc/4e`$. Using a gauge $`A_x=0`$, $`A_y=Hx`$, the classical minima of the Josephson energy are attained when the phase on the island $`G`$ (Fig. 1) takes the value $`\phi ^\pm =\pm \pi /2`$ (relative to the phase at the point $`O`$); below we refer to these states as $`|{\uparrow}\rangle `$ and $`|{\downarrow}\rangle `$.
In order to reduce any parasitic coupling to the environment, the inductance $`L`$ of the loop shall be small, with $`LI_c\lesssim 10^{-3}\mathrm{\Phi }_0`$. Ignoring charging effects, the system prepared in one of these states will stay there forever; quantum effects appear when the charging energies are accounted for. They are determined by the capacitances $`C`$ of the junctions and we require them to be smaller than the Josephson energy, $`e^2/C\lesssim \hbar I_c/e`$. The tunnelling rate between the two classically degenerate ground states is estimated as $`\mathrm{\Omega }\sim \sqrt{eI_c/\hbar C}\mathrm{exp}(-a\sqrt{\hbar I_cC/e^3})`$, where $`a`$ is of order 1. We assume values of $`I_c`$ between 10 and 100 nA and capacitances of order of a few fF, resulting in a characteristic Josephson plasma frequency $`\omega _{pl}\sim 100`$ GHz and a tunneling frequency $`\mathrm{\Omega }\sim 1`$–10 GHz. Once tunnelling is taken into account, the true eigenstates become $`|0\rangle =\frac{1}{\sqrt{2}}(|{\uparrow}\rangle +|{\downarrow}\rangle )`$ and $`|1\rangle =\frac{1}{\sqrt{2}}(|{\uparrow}\rangle -|{\downarrow}\rangle )`$, separated by an energy gap $`\hbar \mathrm{\Omega }`$. A deviation of the magnetic flux $`\mathrm{\Phi }_q`$ through the loop from a value $`\mathrm{\Phi }_0/2`$ removes the degeneracy of the states $`|{\uparrow}\rangle `$ and $`|{\downarrow}\rangle `$. The Hamiltonian of our qubit written in the basis $`|{\uparrow}\rangle ,|{\downarrow}\rangle `$ takes the form $`H=h_x\sigma _x+h_z\sigma _z`$, where $`h_z=\frac{\hbar I_c}{2e}(2\mathrm{\Phi }_q-\mathrm{\Phi }_0)`$ and $`h_x=\hbar \mathrm{\Omega }`$. Varying the effective fields $`h_z`$ and $`h_x`$ we can perform all necessary operations on the qubit. Changing the external flux through the loop produces a variation in $`h_z`$. The coupling parameter $`h_x`$ can be smoothly modified through a variation of the gate potential applied to the island $`G`$ (cf. ). Alternatively, short circuiting a junction with an external capacitor $`C_{ext}\sim 10`$ $`C`$ leads to an abrupt blocking of the tunneling channel (switching off $`h_x`$).
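To make the two-level description concrete, a minimal numerical sketch of $`H=h_x\sigma _x+h_z\sigma _z`$ in the $`|{\uparrow}\rangle ,|{\downarrow}\rangle `$ basis is given below. It is only an illustration: the function name and the choice of units (both fields in energy units) are ours, and no attempt is made to derive $`h_x`$ or $`h_z`$ from the junction parameters.

```python
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

def qubit_spectrum(h_x, h_z):
    """Eigenvalues and eigenvectors of H = h_x*sigma_x + h_z*sigma_z.
    At the degeneracy point (h_z = 0) the eigenstates are the symmetric and
    antisymmetric combinations of the basis states, split by 2*|h_x|."""
    H = h_x * sigma_x + h_z * sigma_z
    return np.linalg.eigh(H)

# Sweep the flux-induced bias h_z at fixed tunnelling amplitude h_x.
h_x = 1.0
for h_z in (0.0, 0.5, 2.0):
    energies, states = qubit_spectrum(h_x, h_z)
    print(h_z, energies)
```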
The first task to be addressed in the study of a SPQB is the development of a convenient probe testing for coherent Rabi oscillations in the device; this can be accomplished by a measurement of the phase-sensitive subgap Andreev conductance. The low-$`T`$ conductance of a NS boundary is determined by the time the incident electron and the Andreev-reflected hole can interfere constructively . The phase coherent electron diffusion in the normal wire of a “fork” geometry leads to periodic conductance oscillations as a function of the magnetic flux penetrating the region $`S_f`$ of the fork (with period $`\mathrm{\Phi }_0`$). This is due to the magnetic field controlling the superconducting phase difference between the two NS contacts of the fork, thus influencing the electron interference pattern.
The experiment discussed here is governed by similar physics. Consider first the subgap conductance between the active SC island of the qubit and the dirty normal metal wire, connected to the island via a high-resistance tunnel barrier with normal state conductance $`\sigma _T\ll e^2/\hbar `$. If the qubit phase $`\phi `$ does not fluctuate in time (i.e., $`\mathrm{\Phi }_q\ne \mathrm{\Phi }_0/2`$, $`h_z\gg h_x`$), the Andreev conductance at $`T\to 0`$, $`V\to 0`$ is $`\sigma _A^{cl}=\sigma _T^2R_D`$, where $`R_D=\rho L\ll \sigma _T^{-1}`$ is the resistance of a dirty wire of length $`L`$. At voltages $`eV\gtrsim \hbar D/L^2`$ the differential subgap conductance $`dI_A/dV=\sigma _A^{cl}(V)\simeq \sigma _T^2\widehat{C}(2eV)`$, where $`\widehat{C}(E)`$ is the space-integrated Cooperon amplitude. In a single-wire geometry and in the absence of decoherence inside the normal wire (i.e., at $`T=0`$), the Cooperon amplitude $`\widehat{C}(E)=\rho \sqrt{\hbar D/E}`$ at $`E\gg \hbar D/L^2`$. For the qubit tuned to resonance, $`h_z\lesssim h_x`$, the time-dependent fluctuations of the SC order parameter destroy the coherence between multiple Andreev reflections at the NS boundary, thereby suppressing the subgap conductance. Quantitatively, their effect on the subgap conductance $`\sigma _A(V)`$ is
$$\frac{\sigma _A(V)}{\sigma _A^{cl}(V)}=\frac{\int _0^{2eV}dE\,P(E)\,\widehat{C}(2eV-E)}{\widehat{C}(2eV)}.$$
(1)
Here, $`P(E)=\frac{1}{2\pi }\int e^{iEt}K(t)\,dt`$ and $`K(t)=\langle e^{i(\phi (0)-\phi (t))}\rangle `$ is the intrinsic correlation function of the SPQB. In the case of weak decoherence, $`K(t)`$ is given by $`K(t)=e^{i\mathrm{\Omega }t}e^{-\mathrm{\Gamma }|t|}`$, with $`\mathrm{\Omega }`$ the tunnelling frequency and $`\mathrm{\Gamma }`$ the intrinsic decoherence rate. The derivation of Eq. (1) parallels the one presented in . In the case of completely coherent Rabi oscillations ($`\mathrm{\Gamma }\to 0`$), the conductance $`\sigma _A(V)`$ vanishes at $`2eV<\hbar \mathrm{\Omega }`$ and exhibits a square-root singularity $`\sigma _A(V)\propto 1/\sqrt{2eV-\hbar \mathrm{\Omega }}`$ above the threshold, i.e., it behaves as a tunneling conductance into a BCS superconductor at $`2eV>\mathrm{\Delta }_{BCS}`$. In the opposite limit of incoherent quantum tunnelling of the phase ($`\mathrm{\Gamma }\gg \mathrm{\Omega }`$), the ratio (1) decreases as $`eV/\hbar \mathrm{\Gamma }`$ at low voltages.
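Eq. (1) is simple to evaluate once $`P(E)`$ is specified. In the sketch below $`P(E)`$ is taken as a Lorentzian of centre $`\hbar \mathrm{\Omega }`$ and half-width $`\hbar \mathrm{\Gamma }`$ (our reading of the Fourier transform of the damped oscillation above, with $`\hbar =1`$), and the single-wire form $`\widehat{C}(E)\propto 1/\sqrt{E}`$ is used; constant prefactors cancel in the ratio. Function names and the integration grid are assumptions of the sketch.

```python
import numpy as np

def cooperon(E):
    """Single-wire Cooperon amplitude, proportional to 1/sqrt(E) (prefactor dropped)."""
    return 1.0 / np.sqrt(E)

def sigma_ratio(two_eV, Omega, Gamma, n=5000):
    """sigma_A / sigma_A^cl of Eq. (1), with hbar = 1 and a Lorentzian
    P(E) = (Gamma/pi) / ((E - Omega)**2 + Gamma**2)."""
    E = np.linspace(two_eV * 1e-6, two_eV * (1.0 - 1e-6), n)
    P = (Gamma / np.pi) / ((E - Omega) ** 2 + Gamma ** 2)
    return np.trapz(P * cooperon(two_eV - E), E) / cooperon(two_eV)

# Nearly coherent case: the ratio is strongly suppressed below 2eV = Omega
# and rises sharply just above threshold.
for v in (0.5, 0.9, 1.1, 2.0):     # 2eV in units of hbar*Omega
    print(v, sigma_ratio(v, Omega=1.0, Gamma=0.01))
```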
Next, consider the full interference experiment for the device shown in Fig. 1. The idea is to measure the amplitude $`I_A^{(12)}`$ in the oscillations of the Andreev current $`I_A=I_A^{(1)}+I_A^{(2)}+I_A^{(12)}`$ as a function of the magnetic field $`H`$ (where the superscripts <sup>(i)</sup> refer to the contribution from the contacts $`i=1`$ and $`i=2`$, see Fig. 1). The total phase determining the interference current $`I_A^{(12)}`$ is equal to $`\varphi _A=2\pi HS_f/\mathrm{\Phi }_0+\phi `$. Away from the degeneracy field $`H_n=(n+1/2)\mathrm{\Phi }_0/S_q`$, the phase $`\phi `$ in our device is determined by the minimization of the Josephson energy and we obtain $`\phi =\pi (\{HS_q/\mathrm{\Phi }_0+1/2\}-1/2)`$, where $`\{x\}`$ is the fractional part of $`x`$. Choosing a geometry with $`S_q=S_f`$, we find for $`HS_q/\mathrm{\Phi }_0\in [n,n+1/2]`$ an interference current $`I_A^{(12)}=-j_{12}\mathrm{cos}(3\pi HS_q/\mathrm{\Phi }_0)`$, whereas for $`HS_q/\mathrm{\Phi }_0\in [n+1/2,n+1]`$ the sign changes: $`I_A^{(12)}=j_{12}\mathrm{cos}(3\pi HS_q/\mathrm{\Phi }_0)`$. Thus, the whole semiclassical $`I_A^{(12)}(H)`$ dependence has a period $`\mathrm{\Phi }_0/S_q`$, with upward cusps at $`H=H_n`$. These cusps are due to the penetration of a flux quantum $`\mathrm{\Phi }_0`$ into the qubit loop at the degeneracy points $`H_n`$. At these fields the interference contribution $`I_A^{(12)}`$ vanishes (in the geometry with $`S_q=S_f`$) and the semiclassical Andreev current is given by the sum $`I_A^{(1)}+I_A^{(2)}`$. However, at $`H=H_n`$ the quantum fluctuations in the phase $`\phi `$ become large and suppress the Andreev current $`I_A^{(2)}`$ to the “active” island G of the SPQB. This implies that close to the degeneracy points, $`|H-H_n|\lesssim H_0\sim \hbar \mathrm{\Omega }/E_J`$, a very narrow dip is superimposed upon the above-mentioned cusp in the $`I_A(H)`$ dependence. The minimum value of the current in this dip coincides with the current $`I_A^{(1)}`$ to the “ground”. The voltage dependence of the dip $`dI_A/dV`$ can be related, via Eq. (1), to the autocorrelation function $`K(t)`$ of the SPQB.
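The cusp pattern described above can be traced directly from the phase prescription. The snippet is a literal transcription for the $`S_f=S_q`$ geometry, with flux expressed in units of $`\mathrm{\Phi }_0/S_q`$; the overall sign and magnitude are absorbed into $`j_{12}`$, so only the periodicity and the cusps at the degeneracy fields should be read off.

```python
import numpy as np

def interference_current(x, j12=1.0):
    """Semiclassical I_A^(12) versus x = H*S_q/Phi_0 (geometry with S_f = S_q).
    phi is the classical qubit phase, valid away from the degeneracy points x = n + 1/2."""
    x = np.asarray(x, dtype=float)
    phi = np.pi * (np.mod(x + 0.5, 1.0) - 0.5)   # fractional-part prescription
    phi_A = 2.0 * np.pi * x + phi                # total interference phase
    return j12 * np.cos(phi_A)

x = np.linspace(0.0, 2.0, 1001)
I12 = interference_current(x)   # period Phi_0/S_q with cusps at x = n + 1/2
```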
The proposed measurement is “non-invasive”, i.e., it does not, by itself, destroy the coherent dynamics of the phase $`\phi `$. This is due to the fact that at low $`V`$ this experiment does not measure the value of $`\phi `$. In order not to destroy the quantum tunneling of $`\phi `$ during the measurement, the subgap conductance should be low, i.e. $`\sigma _A\ll 4e^2/h`$, corresponding to $`\sigma _A^{-1}\gg 6.5`$ k$`\mathrm{\Omega }`$. However, it should not be so low as to be unmeasurable with good accuracy at low temperatures (i.e., $`\sigma _A^{-1}\lesssim 10`$ M$`\mathrm{\Omega }`$). Well-controlled metallic thin films with a long decoherence time can be produced with a sheet resistance of order $`0.1`$ k$`\mathrm{\Omega }`$; the use of such films requires $`\sigma _T^{-1}`$ in the range 3–30 k$`\mathrm{\Omega }`$ in order to place $`\sigma _A^{-1}`$ within the range 0.1–10 M$`\mathrm{\Omega }`$.
# The Ages of Pre-main-sequence Stars
## 1 Introduction
The placement of an observed pre-main-sequence star in the theoretical Hertzsprung–Russell diagram ($`\mathrm{log}L`$ against $`\mathrm{log}T_{\mathrm{e}ff}`$) is notoriously difficult because of its sensitivity to distance and reddening (see for example Gullbring et al. 1998). In addition any contribution to the light from the accretion disc itself must be subtracted and obscuration by circumstellar material accounted for (see for example Hillenbrand 1997). Here, by examining where theory predicts a particular object ought to lie at a given age, we investigate what properties of a pre-main-sequence star can be determined if these difficulties can be overcome.
The process by which stars form from their constituent interstellar material is as relevant to all branches of astrophysics, from planets to cosmology, as their subsequent evolution. However our understanding and, without doubt, our predictive power lag well behind. This is partly because stars which are in the process of formation are more difficult to observe. The relative rapidity of the star formation process means that there are no nearby pre-main-sequence stars and the fact that they form in denser regions of the interstellar medium favours observations at wavelengths longer than optical. It is only recently that such objects have begun to be observed in statistically significant numbers (Cohen & Kuhi 1979). From a theoretical point of view, difficulties arise because much of the process is dynamical and so does not lend itself well to the one-dimensional models normally employed in stellar evolution. On the other hand we can model the hydrostatic inner regions using the methods normally employed in stellar evolution and so, with appropriate boundary conditions, approximate a forming star. Indeed this kind of pre-main-sequence theory can be said to have begun alongside stellar evolution itself with the work of Henyey, Lelevier & Levee (1955), restricted to radiative solutions, and Hayashi (1961), with convection. These pioneers were able to describe how a spherical cloud of gas, already in hydrostatic equilibrium, contracts down to the main sequence as it releases its own gravitational energy.
The question of how the initial hydrostatic sphere forms is further complicated by two major effects. First, dynamical processes must be important in addition to thermal and nuclear and, second, these can no longer be expected to be spherically or even oblate-spheroidally symmetric. Larson (1969) modelled the spherically symmetric collapse of a gas cloud that is not yet in hydrostatic equilibrium. He showed how the central regions collapse first to form a hydrostatic core on to which the rest of the cloud accretes. But this core cannot behave like Hayashi’s pre-main-sequence stars because its surface is no longer exposed to space and the boundary conditions are different. Larson introduced shock conditions at the surface of a near hydrostatic core. Building heavily on this, Stahler, Shu and Taam (1980a, b, 1981) were able to follow the evolution of the accreting core. Such a core, shrouded in its own accreting envelope, remains invisible as long as it accretes. Stahler et al. assumed that accretion and obscuration cease at the same time when the surrounding material is somehow blown away. Their stars then descend the classic Hayashi tracks until they develop radiative envelopes and move on to the corresponding Henyey track. However, if the accreting material does not obscure the entire stellar surface, we are able to see the star whilst it is still accreting. It is this latter situation that we model in this work. It is likely to arise because material accreting from far off will have too much angular momentum to fall radially on to the central core. Instead it will form an accretion disc in a plane perpendicular to the angular momentum axis and fall on to the core only as viscosity allows a small amount of material to carry the angular momentum outwards. If the disc reaches to the stellar surface then the material will accrete only in an equatorial band. If, on the other hand, the central core possesses a magnetic field strong enough to disrupt the inner parts of the disc, matter might finally flow in along field lines accreting at relatively small magnetic poles or thin accretion curtains. Similar processes are known to operate in magnetic cataclysmic variables (Warner 1995 for a review). In any of these cases most of the stellar surface is left exposed and free to radiate like a normal star.
A comprehensive study of such exposed protostellar cores was made by Mercer-Smith, Cameron and Epstein (1984). Because more than the usual insight is needed to elucidate what they actually did this work has largely been forgotten. However it turns out that their accreting tracks qualitatively differ from ours and so it is important to identify exactly why this is. Their calculations begin with a hydrostatic core of $`0.0015M_{}`$ of apparently $`2R_{}`$ based on the dynamical collapse calculations of Winkler and Newman (1980). We shall argue in section 3 that, at this mass, the core is still embedded in a rapidly collapsing cloud and that something like $`0.1M_{}`$ and $`3R_{}`$ gives a more realistic representation of the central core when non-spherical accretion begins in earnest. But we shall also show that the subsequent evolution is not overly sensitive to this initial state. More significantly, Mercer-Smith et al. require that at least one quarter of the accretion luminosity be radiated uniformly over the whole stellar surface, while we claim that it can all be radiated locally in a disc boundary layer or localised shocks. This, coupled with their extreme accretion rates of typically $`10^5M_{}\mathrm{yr}^1`$, most probably accounts for the huge discrepancy between their and our tracks, manifested by the fact that their standard model is at a much higher effective temperature for a given mass than ours. As a consequence, our models evolve smoothly even if the accretion rate is abruptly changed while theirs relax to a normal Hayashi track rapidly over about $`100`$yr when accretion is halted.
A careful analysis of the effects of accretion on stellar structure has been made by Siess & Forestini (1996), who varied a number of the physical properties of the accreted material relative to the stellar surface, from angular momentum content to internal energy, and found that reasonable values of these parameters have little effect on the stellar structure. Siess, Forestini & Bertout (1997) then went on to use their formalism to follow a small number of evolutionary sequences. They confirmed the lack of sensitivity to their various parameters except for the dependence on the fraction, $`\alpha `$, of the accretion boundary-layer energy released below the stellar photosphere. Large values of this parameter are similar to Mercer-Smith et al.’s formalism while our models correspond to $`\alpha =0`$. Siess et al.’s models with $`\alpha =0.01`$ are indeed very similar to our tracks when the accretion history is comparable.
We present several evolution tracks for pre-main-sequence stars accreting from various initial conditions to quantify the accuracy to which age can be determined. Most of the tracks are for solar metallicity of $`Z=0.02`$. However measurements of metallicity in Orion’s star forming regions, although highly uncertain, indicate that $`Z=0.001`$ may be more appropriate (Rubin et al. 1997). Such low metallicity is also typical of star forming regions in the Large Magellanic Cloud. We therefore discuss a set of low-metallicity tracks which demonstrate how both mass and age determinations from colour–magnitude diagrams depend critically on a knowledge of metallicity. Because the stellar mass function dictates that the bulk of stars have final masses on the low side we restrict our presentation here to accreting objects of less than $`2M_{}`$. We can expect almost all stars in the star-forming regions with which we may wish to compare properties to lie below this mass. Also, as stressed by Palla and Stahler (1993), the contraction timescales for massive stars are short compared with accretion timescales so that the accreting tracks will tend to follow the zero-age main-sequence and the effective pre-main-sequence life of massive stars is dominated by their early, low-mass evolution. Indeed at higher masses the accretion timescale becomes long compared with the nuclear timescale and it is difficult to separate pre- and post-main-sequence evolution for some stars.
We find that masses can be fairly well established if the metallicity is known but that ages are very dependent on the accretion history and the initial state of the star particularly below $`5\times 10^6`$yr. However before we can begin to discuss age determination we must first establish to what this age is relative.
## 2 The Zero Age
It is very often unclear how to define the zero-age point for a forming star and of course it is rather uninformative to quote an age $`t`$ without explaining exactly what we mean by $`t=0`$. For evolved stars the zero-age main-sequence (ZAMS) provides a convenient starting point from which we can both begin the evolution and measure the age of the star. The ZAMS must then be defined. Historically a star was started in a state of hydrostatic and thermal equilibrium with a uniform initial composition. In reality a star never actually passes through this zero-age state because some nuclear burning takes place while a newly formed star is still contracting to the main sequence. In practice, because the thermal-evolution timescale of pre-main-sequence stars is several orders of magnitude shorter than the post-main-sequence nuclear timescale, very little of the initial hydrogen is burnt and a uniform hydrogen abundance throughout the star is a reasonable approximation. This is not so for the catalytic elements of the CNO cycle in sufficiently massive stars because these elements are driven towards equilibrium during pre-main-sequence evolution. Even so, it is possible to define a zero-age main sequence (see for example Tout et al. 1996) that roughly corresponds to the minimum luminosity attained as a star evolves from a pre- to post-main-sequence phase. Nor is the assumption of uniform abundance true for the elements involved in the pp chain, notably deuterium and He<sup>3</sup>. However, on the zero-age main sequence, the pp chain is complete in transforming hydrogen to He<sup>4</sup> at the stellar centres and so the abundances of D and He<sup>3</sup> are in equilibrium for given temperatures and number densities and subsequently need not be followed explicitly. On the other hand deuterium burning is a major source of energy in pre-main-sequence stars and is important throughout this work.
Because we can define a ZAMS reasonably uniquely a good way to measure pre-main-sequence ages would be backwards from the ZAMS. However this is not acceptable if one wishes to measure the time elapsed since the birth of a star, where relatively small changes in age lead to large excursions in the H–R diagram. A similar problem is encountered with the upper parts of the red giant, and particularly asymptotic giant, branch but in these cases we regard absolute age as relatively useless preferring such quantities as degenerate core mass as a measure of the evolutionary state (Tout et al. 1997).
The concept of a stellar birthline in the H–R diagram was introduced by Stahler (1983) as the locus of points at which stars forming from a spherically accreting cloud would first become visible. In the model of Stahler, Shu and Taam (1980a,b, 1981) this occurs when deuterium ignites in the protostellar core and some ensuing wind blows away the remainder of the accreting cloud which has, up to this point, shrouded the star itself from view. With such a theory, a perfect place to fix the zero age of a pre-main-sequence star would be the onset of deuterium burning. Deuterium burning provides pressure support for the star for a time comparable with the Kelvin–Helmholtz timescale $`\tau _{\mathrm{KH}}`$, on which it contracts once deuterium is exhausted. This timescale is similar to the entire time taken to contract to this point from any initial state so that by the time a star begins to contract again, below the deuterium burning sequence, it is already relatively old and has a reasonably well defined age. Here we concern ourselves with accretion through a disc. In this case most of the stellar photosphere is exposed while accretion is still taking place. Nor do we assume that accretion ceases at the onset of deuterium burning. Under such circumstances there is no reason why stars should not appear above Stahler’s birthline and it is no longer possible to define a birthline as a locus of maximum luminosity at which pre-main-sequence stars appear. However, the interruption of contraction when deuterium ignites means that we are much more likely to see stars on and below the deuterium burning sequence than above it. By definition Stahler’s birthline is more or less coincident with the deuterium burning sequence and this explains the consistency of observations with the idea of a birthline. Unfortunately this apparent birthline is not the place where stars are born and so an age measured from a zero-age deuterium burning sequence is too young by an unknown amount which is normally at least as much as the deuterium burning lifetime.
D’Antona and Mazzitelli (1994) take another approach which is to begin evolution at a point in the H–R diagram of sufficiently high luminosity, or equivalently at sufficiently large radius on a Hayashi track, that $`\tau _{\mathrm{KH}}`$ is much less than some acceptable error in the age at any later time. This error might be chosen to be about $`100`$yr. Such a definition leads to a well-defined age at any point on a track corresponding to a constant mass. For comparison, figure 1 shows such a set of pre-main-sequence tracks for $`M=0.1`$, $`0.2`$, $`0.5`$, $`1`$ and $`2M_{}`$ and isochrones fitted to $`50`$ models in this range. We describe our models in detail in the following sections but note that, because we use very similar physics, they do not differ greatly from those of D’Antona and Mazzitelli.
However stars do continue to accrete long after their photospheres are exposed and they can be placed in an H–R diagram. A star of about $`1M_{}`$ is most unlikely to have reached this mass while $`\tau _{\mathrm{KH}}`$ was still small or indeed even before deuterium exhaustion. For this reason we take the zero-age point of each of our tracks to be a point at which the protostellar core has the mass and radius of a typical self-gravitating fragment of a protostellar cloud and model the subsequent evolution with ongoing accretion. We then investigate how changing these initial conditions alters the subsequent isochrones in the H–R diagram to get an idea of how well we can constrain the age of an observed pre-main-sequence star relative to its birth as a self-gravitating accreting body.
## 3 The stellar models
We construct our stellar models using the most recent version of the Eggleton evolution program (Eggleton 1971, 1972, 1973). The equation of state, which includes molecular hydrogen, pressure ionization and coulomb interactions, is discussed by Pols et al. (1995). The initial composition is taken to be uniform with a hydrogen abundance $`X=0.7`$, helium $`Y=0.28`$, deuterium $`X_\mathrm{D}=3.5\times 10^{-5}`$ and metals $`Z=0.02`$ with the meteoritic mixture determined by Anders and Grevesse (1989). Hydrogen burning is allowed by the pp chain and the CNO cycles. Deuterium burning is explicitly included at temperatures too low for the pp chain. Once the pp chain is active hydrogen is assumed to burn to He<sup>4</sup> via deuterium and He<sup>3</sup> in equilibrium. The burning of He<sup>3</sup> is not explicitly followed. Opacity tables are those calculated by Iglesias, Rogers and Wilson (1992) and Alexander and Ferguson (1994). An Eddington approximation (Woolley and Stibbs 1953) is used for the surface boundary conditions at an optical depth of $`\tau =2/3`$. This means that low-temperature atmospheres, in which convection extends out as far as $`\tau \sim 0.01`$ (Baraffe et al. 1995), are not modelled perfectly. However the effect of this approximation on observable quantities is not significant in this work (see for example Kroupa and Tout 1997).
We assume that material is accreted from a disc on to a thin equatorial region of the star so that normal photospheric boundary conditions are appropriate over most of its surface. This would also be true even if the inner edge of the disc is magnetically disrupted and the material funnelled to a few spots or narrow accretion curtains whose areas represent a relatively small fraction of the stellar surface. Because our models are one-dimensional we must apply these same boundary conditions over the whole surface. Similarly we must assume that accreted material is rapidly mixed over this same complete surface so that, on accretion of mass $`\delta M`$, we can add a spherical shell of mass $`\delta M`$ with composition equal to the initial, or ambient, composition. We note that the photospheric boundary conditions effectively fix the thermodynamic state of the accreted material to those conditions over the radiating stellar surface. This is equivalent to the assumption that boundary layer shocks or, in the case of magnetically funnelled accretion, shocks at or just above the stellar surface remove any excess entropy from the accreting material and so is not unduly restrictive. Ideally we would like to treat this problem in two-dimensions. We could then apply different boundary conditions over the equatorial band or polar spots where accretion is actually taking place. With current computational power and techniques such models may not be too far off (Tout, Cannon and Pichon, private communication).
## 4 Initial conditions
We wish to take as an initial model a typical protostellar core of mass $`M_0`$ that is self gravitating within a cloud and that has reached hydrostatic but not yet thermal equilibrium out to a radius $`R_0`$. Additional material beyond $`R_0`$ may be gravitationally bound to the star but not yet accreted. We assume that the core is spherically symmetric out to $`R_0`$ and that beyond this radius material sinks on to a disc from which it is accreted in a thin equatorial band or other relatively small part of the stellar photosphere.
The technique we use to construct the initial model is fairly standard. We take a uniform composition zero-age main-sequence model of mass $`M_0`$ and add in an artificial energy generation rate $`ϵ_\mathrm{c}`$ per unit mass uniformly throughout the star. Initially $`ϵ_\mathrm{c}`$ is negligible but we gradually increase it so that the star is slowly driven back up its Hayashi track. In a sense $`ϵ_\mathrm{c}`$ mimics the thermal luminosity that would be released if the star were contracting down the Hayashi track. These objects, however, are in thermal equilibrium. We continue to increase $`ϵ_\mathrm{c}`$ until the radius of the object is considerably more than $`R_0`$. At this point we may add or subtract mass freely, while maintaining hydrostatic and thermal equilibrium, and so vary $`M_0`$. In this way we can reach masses below the hydrogen burning limit that would not have a zero-age main-sequence state of their own. We then switch off the artificial energy generation and allow the star to contract down its Hayashi track supported by the usual gravitational energy release. When $`R=R_0`$ we have our initial model.
We choose a protostellar core of $`M_0=0.1M_{}`$ and $`R_0=3R_{}`$ as our standard initial model. This choice of the initial mass and radius of the pre-main-sequence star is necessarily somewhat arbitrary because it depends on the pre-collapse conditions and the dynamics of the collapse process. We choose a mass and radius taken to represent a young star at the end of the collapse and spherical infall phase of evolution at a time when it first becomes optically visible. This will include the initial protostellar core that forms from the collapse phase plus any mass that is accreted on to this core before the infall becomes significantly aspherical. This happens when the infalling material has enough angular momentum to force it to collapse towards a disc rather than be accreted directly by the protostellar core. Any further accretion from this point will be through this circumstellar disc.
The gravitational collapse of a molecular cloud forms a first protostellar core when the density becomes large enough to trap the escaping IR radiation (e.g. Larson 1969). This sets a minimum mass for opacity limited fragmentation at about $`0.01M_{}`$ (Low & Lynden-Bell 1976, Rees 1976). This minimum mass will be increased by the material from further out with low angular momentum (along the rotation axis) plus the matter that has its angular momentum removed/redistributed by gravitational torques (e.g. Larson 1984) in the disc on timescales short compared with the free-fall time. The accretion of low-angular momentum matter probably increases the protostar’s mass by a factor of 3 (for initially uniform density collapse). The accreted disc material can be estimated as that with dynamical times significantly less than the original free-fall time. Thus, material within 20–50 au should be accreted within $`10^3`$ years, which corresponds to disc sizes several times larger than the first core. For initially solid body rotation and uniform cloud density this translates to a mass at least three times larger. We thus estimate our initial mass as about $`0.1M_{}`$. This compares well to the mass of a protostellar core from a spherical collapse within a fraction of a free-fall time (Winkler & Newman 1980) and to the mass within $`50`$ au in a collapse including rotation (Boss 1987, see also Lin & Pringle 1990).
The choice of the initial stellar radius is perhaps more constrained. Estimates of this radius depend on the dynamics of the collapse, but are generally in the range of $`2.5`$ to $`3R_{}`$ (Stahler 1988, Winkler & Newman 1980). We have chosen the value of $`3R_{}`$ as given by Winkler & Newman (1980) for an accreting protostar of $`0.1`$ to greater than $`0.5M_{}`$. We investigate variations in $`R_0`$ and $`M_0`$ and find that the precise choice is not terribly critical anyway.
## 5 Standard models
From the standard initial conditions $`M_0=0.1M_{}`$ and $`R_0=3R_{}`$ we evolve a set of thirteen pre-main-sequence stars accreting at constant rates of $`10^{-5.5}`$, $`10^{-5.75}`$, $`10^{-6}`$, $`10^{-6.25}`$, $`10^{-6.5}`$, $`10^{-6.75}`$, $`10^{-7}`$, $`10^{-7.25}`$, $`10^{-7.5}`$, $`10^{-7.75}`$, $`10^{-8}`$, $`10^{-8.5}`$ and $`10^{-9}M_{}\mathrm{yr}^{-1}`$. This range of accretion spans that necessary to produce stars of between $`0.1`$ and $`3M_{}`$ within $`10^6`$yr. The lower rates are similar to those observed in classical T Tauri stars (e.g. Gullbring et al. 1998), while the higher rates correspond to a time average when episodic FU Ori-type events are responsible for the bulk of the mass accretion. These outbursts, with accretion rates of about $`10^{-4}M_{}\mathrm{yr}^{-1}`$ lasting about $`100`$yr and probably recurring every $`1,000`$yr (Hartmann 1991, Kenyon 1999), are necessary to reconcile the very slow accretion, observed in classical T Tauri stars (Gullbring et al. 1998) and in the younger Class I objects (Muzerolle, Hartmann & Calvet 1998), with the higher envelope infall rate (Kenyon et al. 1990). FU Ori events have been successfully modelled as outbursts of high disc-accretion (Hartmann 1991, Bell et al. 1995, Kenyon 1999). The timescales on which the accretion rate changes are small as far as the stellar evolution of the underlying pre-main-sequence star is concerned, much shorter than its Kelvin-Helmholtz timescale and so are not likely to alter the progress of stellar evolution. On the other hand the thermal timescale of the outer layers is much shorter and so a fluctuating accretion rate does have an effect that we should investigate but one which we cannot follow with our existing machinery nor one that is likely to alter the luminosity and radius of the star as a whole.
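Two of the numbers above are fixed by simple arithmetic, repeated here as a consistency check: the constant rate needed to grow the 0.1 $`M_{}`$ seed to about 3 $`M_{}`$ within $`10^6`$yr, and the duty-cycle-averaged rate implied by the quoted FU Ori outburst parameters. The variable names and the assumption of a strict 100 yr on / 1000 yr cycle are ours.

```python
# Consistency checks on the quoted accretion rates (masses in Msun, times in yr).
M_seed, M_final, t_form = 0.1, 3.0, 1.0e6
mdot_needed = (M_final - M_seed) / t_form        # ~2.9e-6 Msun/yr, i.e. ~10**-5.5

mdot_burst, t_burst, t_recur = 1.0e-4, 100.0, 1000.0
mdot_fu_ori_average = mdot_burst * t_burst / t_recur   # ~1e-5 Msun/yr time-averaged
print(mdot_needed, mdot_fu_ori_average)
```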
These tracks are plotted in figure 2 as thin lines where, for clarity, we terminate them at a mass of $`2M_{}`$. The zero-age main sequence and zero-age deuterium burning sequence are added to this figure as dots logarithmically placed in mass. The evolution of a pre-main-sequence star is governed by three competing processes and their associated timescales. These are gravothermal contraction, accretion and nuclear burning. In the early stages, when the thermal timescale $`\tau _{\mathrm{KH}}`$ is much shorter than the accretion timescale $`\tau _{\dot{M}}`$, the star will evolve down a Hayashi track. If $`\tau _{\dot{M}}\lesssim \tau _{\mathrm{KH}}`$ it will tend to evolve across to increasing effective temperature to reach a Hayashi track of correspondingly larger mass. Nuclear burning interrupts the contraction once the central temperature is sufficient to ignite a particular reaction and for as long as there is sufficient fuel that the nuclear-burning timescale $`\tau _\mathrm{N}>\tau _{\mathrm{KH}}`$. This first occurs when deuterium ignites and each track is seen to evolve up the deuterium-burning sequence by an amount that depends on how much mass can be accreted before the deuterium is exhausted. Because these stars are all fully convective, all the deuterium in the star is available for burning, as is any additional deuterium that is accreted at the surface. Subsequently the tracks are again determined by the competition between $`\tau _{\dot{M}}`$ and $`\tau _{\mathrm{KH}}`$ but while $`\tau _{\dot{M}}`$ is only increasing linearly $`\tau _{\mathrm{KH}}`$ is growing almost exponentially with time so that accretion eventually dominates, driving the stars to higher temperature along tracks parallel to the main sequence. At the lowest accretion rates stars descend Hayashi tracks to the zero-age main sequence and then evolve along it because $`\tau _{\dot{M}}\ll \tau _\mathrm{N}`$. Above $`0.3M_{}`$ stars develop a radiative core on and just above the main sequence. This causes a contracting pre-main-sequence star to leave its Hayashi track and move to higher temperature before reaching the zero-age main sequence. At the highest accretion rates $`\tau _{\dot{M}}\ll \tau _{\mathrm{KH}}`$ and the stars remain above the main sequence until $`M>2M_{}`$. Each of these features is qualitatively described by Hartmann, Cassen & Kenyon (1997) and a direct comparison with their work can be found in Kenyon et al. (1998).
Isochrones, or loci of equal age measured from the zero-age point at $`M=M_0`$ and $`R=R_0`$, are drawn as thick lines across the tracks at ages of $`10^4`$, $`10^5`$, $`10^6`$, $`2\times 10^6`$, $`5\times 10^6`$, $`10^7`$ and $`10^8`$yr. The $`10^4`$yr isochrone lies so close to the initial point, and is consequently so prone to small changes in $`M_0`$ or $`R_0`$, that it is futile to claim measurements of age below $`10^5`$yr even if the initial conditions and accretion rate can be estimated. The $`10^8`$yr isochrone closely follows the zero-age main sequence. Loci where the masses are $`0.2`$, $`0.5`$, $`0.8`$, $`1.0`$ and $`2.0M_{}`$ are interpolated with thick lines. The actual pre-main-sequence track of a $`0.1M_{}`$ star is drawn as a thick line to complete the figure.
In figure 3 we overlay both the isochrones and the equal-mass loci on the pre-main-sequence tracks and isochrones of figure 1 for comparison. During the Hayashi contraction the accreting equal-mass loci follow very closely the tracks of non-accreting pre-main-sequence stars so that determination of a pre-main-sequence star’s precise position in the Hertzsprung–Russell diagram gives an equally precise estimate of its mass, irrespective of age and accretion rate. However, once the radiative core forms, an error in the mass determination is introduced. By comparing the change in the slope of the equal-mass loci with the non-accreting tracks we see that accretion delays the effects of the establishment of the radiative core. We note that, up to the point at which the radiative core forms, the fully convective structure has meant that stars evolve essentially homologously. For this reason the approximations used by Hartmann et al. (1997) have remained valid and we expect their tracks to be a good representation. Once this homology is lost their equation (6) is no longer valid nor is their assumption that luminosity is a unique function of mass and radius. The tracks would begin to deviate radically and it becomes much more important to follow the full evolution as we do here.
Below $`0.2M_{}`$ and ages greater than $`10^6`$yr the isochrones are quite similar too but they deviate drastically for larger masses with the accreting stars appearing significantly older than they are by factors of two or more according to the non-accreting pre-main-sequence isochrones. This is because, for a given Kelvin-Helmholtz timescale, a lower-mass Hayashi-track star is smaller in radius so that, as mass is added and the star moves to higher temperature it always remains smaller and appears older than a star that originally formed with its current mass. As an example, a pre-main-sequence star accreting at $`10^{-6.75}M_{}\mathrm{yr}^{-1}`$ would appear to be over $`10^7`$yr old at $`1M_{}`$ when it is only $`5\times 10^6`$yr old, while one accreting at $`10^{-6}M_{}\mathrm{yr}^{-1}`$ at $`1M_{}`$ would appear to be $`5\times 10^6`$yr old when it is only $`2\times 10^6`$yr old. Then, because of the delayed appearance of a radiative core, more massive stars begin to appear younger again. For instance, our star accreting at $`10^{-7}M_{}\mathrm{yr}^{-1}`$ would appear to remain at $`10^7`$yr from about $`1.3`$ to $`1.74M_{}`$, during which time it ages from $`1.3`$ to $`1.65\times 10^7`$yr.
## 6 Changing $`R_0`$ and $`M_0`$
To illustrate the sensitivity of the evolutionary tracks, and the corresponding mass and age determinations, to our choice of initial model we consider two alternative starting points. First, keeping $`M_0=0.1M_{}`$, we reduce $`R_0`$ until the initial protostellar core is just beginning to ignite deuterium. This occurs at about $`R_0=1.1R_{}`$. Second we reduce $`M_0`$ to $`0.05M_{}`$ and change $`R_0`$ to $`2.4R_{}`$ so as to keep the same mean density. The evolutionary tracks followed by pre-main-sequence stars accreting at the same accretion rates, in the range $`10^{-9}`$ to $`10^{-5.5}M_{}\mathrm{yr}^{-1}`$, together with isochrones and equal-mass loci are plotted in figures 4 and 6 respectively. A non-accreting pre-main-sequence track for $`0.05M_{}`$ in figure 6 is seen to run asymptotically down the main sequence. Because its mass is below the hydrogen burning limit of about $`0.08M_{}`$ this downward progress to lower luminosities and temperatures will not be halted and this star will end up as a degenerate brown dwarf.
The isochrones and equal-mass loci are overlaid with those corresponding to our standard tracks in figures 5 and 7. In neither case are the equal-mass loci significantly affected. Thus the deviation in these loci from non-accreting pre-main-sequence tracks can be solely attributed to the accretion. On the other hand, it is inevitable that the isochrones are affected but, in both cases, the difference is small compared with the overall effect of accretion. In the case of reduced $`R_0`$ the difference is a constant offset of $`10^6`$yr. This is just the time taken by a $`0.1M_{}`$ star to contract from $`3`$ to $`1.1R_{}`$. Thus ages of about $`5\times 10^6`$yr would be accurately estimated to within $`20`$ per cent and those of about $`10^7`$yr to within $`10`$ per cent etc. This error represents the extreme if we can be sure that all stars are born before igniting deuterium. If $`R_0`$ were increased beyond $`3R_{}`$ the initial thermal timescale would be so short that the time taken to contract to the point of deuterium ignition would not be noticeably different.
From figure 7 we can see that the deviations from our standard isochrones are even smaller when $`M_0=0.05M_{}`$. In this case, what is important is the time taken to accrete the additional $`0.05M_{}`$ and so the most slowly accreting tracks are most affected. Thus the $`10^{-9}M_{}\mathrm{yr}^{-1}`$ track reaches $`0.1M_{}`$ after $`5\times 10^7`$yr leading to a relatively large absolute difference in the $`10^8`$yr isochrone at $`0.1M_{}`$. However, as this difference is always relative to the accretion rate, it rapidly becomes insignificant as we go up in mass.
In both these cases it is important to note that the actual tracks followed for a given accretion rate become very similar to our standard ones with the relative time between two points on a track being the same. It is just the time taken to reach an equivalent point that alters the isochrones. We deduce that accretion rate has a much more significant effect on the position in the H–R diagram than do the initial conditions.
## 7 Variable accretion rates
The models we have presented so far have accreted at a constant rate throughout their pre-main-sequence life. This is unlikely to be the case in reality. Even so we might hope that a star of a given mass and accretion rate might be found at the intersection of the appropriate accreting track and equal-mass locus of figure 2 irrespective of its accretion history. However we find that this is not the case because a pre-main-sequence star remembers its past. We illustrate the effect of variable accretion rates by considering three paths, beginning at the same point, that converge to an accretion rate of $`10^{-7}M_{}\mathrm{yr}^{-1}`$ when the star’s mass reaches $`0.5M_{}`$ and continue to accrete at that rate, constant thereafter. These tracks are plotted in figure 8. The first has the standard constant accretion rate of $`10^{-7}M_{}\mathrm{yr}^{-1}`$. The second accretes at a rate that decreases linearly from $`10^{-6}M_{}\mathrm{yr}^{-1}`$ at $`t=0`$ to $`10^{-7}M_{}\mathrm{yr}^{-1}`$ at $`0.5M_{}`$ while the third has an accretion rate that increases linearly from nothing to $`10^{-7}M_{}\mathrm{yr}^{-1}`$. At $`0.5M_{}`$ we find that the stars are well separated in luminosity. This is to be expected because a star will take a Kelvin-Helmholtz timescale to adjust its structure to a new accretion rate. As discussed in section 5, it is this same timescale that is balanced with the accretion timescale that is directly responsible for the deviation of the tracks. This timescale balance will be maintained until nuclear burning becomes important, in this case on the zero-age main sequence. Consequently the stars are never given enough time to thermally relax and their early accretion history can be remembered throughout their pre-main-sequence evolution.
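For orientation, the time each history takes to grow the star from 0.1 to 0.5 $`M_{}`$ follows from the mean rate of a linear ramp; the snippet below makes that explicit. It assumes the ramps are linear in time and that the increasing case starts from exactly zero, which are our simplifications.

```python
def time_to_accrete(mass_gain, mdot_start, mdot_end):
    """Time (yr) to accrete mass_gain (Msun) with a rate ramping linearly in time."""
    return mass_gain / (0.5 * (mdot_start + mdot_end))

dm = 0.5 - 0.1
t_constant   = time_to_accrete(dm, 1e-7, 1e-7)   # 4.0e6 yr
t_decreasing = time_to_accrete(dm, 1e-6, 1e-7)   # ~7.3e5 yr
t_increasing = time_to_accrete(dm, 0.0, 1e-7)    # 8.0e6 yr
print(t_constant, t_decreasing, t_increasing)
```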
At $`1M_{}`$ the mass can still be estimated accurately by comparison with the standard accreting tracks of figure 2 but note that this differs from the mass that would be estimated if we were to compare with non-accreting tracks. At $`0.5M_{}`$ age estimates from non-accreting tracks would be between 30 and 60 per cent too old and at $`1M_{}`$ between 2 and 3 times too old for each of these stars. This reflects the general nature and magnitude of the difference between non-accreting and our standard accreting pre-main-sequence stars, comparison with which would give a better estimate in each of these particular cases (within 10 per cent at $`0.5M_{}`$ and 20 per cent at $`1M_{}`$).
We emphasize again that we cannot estimate the current accretion rate from the position in the H–R diagram but if we know this current rate then placement in an H–R diagram does give us information about the accretion history. This behaviour is in accord with equation (6) of Hartmann et al. (1997), with $`\alpha =0`$. As long as $`\dot{M}/M`$ dominates $`\dot{R}/R`$, this equation predicts $`R`$ as a function of $`M`$ and $`\dot{M}`$ only. Our track with decreasing $`\dot{M}`$ always has $`\dot{R}/R`$ somewhat greater than $`\dot{M}/M`$ because it has reached $`M=0.5M_{}`$ with the two terms in balance at higher accretion rates.
## 8 Changing metallicity
Finally we consider the effect of different metallicities. In general reducing the metallicity moves the zero-age main sequence to hotter effective temperatures and slightly higher luminosities (see for example Tout et al. 1996). This is due to decreased opacity when there are fewer metal atoms providing free electrons. This shift is reflected throughout the pre-main-sequence evolution. Figure 9 shows the same tracks and isochrones as figure 1 but for a metallicity of $`Z=0.001`$. These models have an initial helium abundance of $`Y=0.242`$ and hydrogen $`X=0.757`$ to account for the less-processed interstellar medium from which such stars must be forming. In practice we should correspondingly increase the deuterium abundance too but, because this is very uncertain anyway, we leave it at $`X_\mathrm{D}=3.5\times 10^{-5}`$ so as not to convolute the differences between the two metallicities. Apart from the shift, the tracks are qualitatively similar except for the disappearance of the second hook just above the ZAMS in the more massive star tracks. At $`Z=0.02`$ this is due to the CNO catalytic isotopes moving towards equilibrium in the stellar cores before hydrogen burning begins in earnest.
Figure 10 overlays these tracks and isochrones with those for $`Z=0.02`$. We can see directly that an error of a factor of two or more would be made in the mass estimate and a factor of ten or so in the age if a pre-main-sequence star of metallicity $`Z=0.001`$ were compared with models made for $`Z=0.02`$. Clearly, if stars are indeed still forming at such low metallicities, it is very important to be sure of the precise value before making any comparisons. For a rough estimate of how the tracks move with metallicity we interpolate these two sets of tracks together with a similar set for $`Z=0.01`$. We find the difference in mass
$$\delta M=M(Z=0.02)-M(Z)$$
(1)
between tracks of metallicity $`Z`$ and those of solar metallicity that pass through a given point $`(L,T_{\mathrm{eff}})`$ in the Hertzsprung–Russell diagram to be
$$\delta M\approx 0.164\left(\mathrm{log}_{10}\frac{Z}{0.02}\right)^{0.7}\left(\frac{L}{L_{}}\right)^{0.25}\left(\frac{T_{\mathrm{eff}}}{10^{3.6}\mathrm{K}}\right)^6$$
(2)
to within 20 per cent before the evolution turns away from the Hayashi tracks.
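Equation (2) is easy to evaluate; the helper below does so. We read the logarithmic factor as the magnitude of $`\mathrm{log}_{10}(Z/0.02)`$, so that $`\delta M`$ is positive for sub-solar metallicity, and we restrict its use to the Hayashi portion of the tracks as stated above; both are our interpretations of the fitting formula.

```python
import math

def delta_mass(Z, L_over_Lsun, Teff_K):
    """Eq. (2): delta M = M(Z=0.02) - M(Z) in solar masses for a star observed at
    (L, Teff); quoted as accurate to ~20 per cent before leaving the Hayashi track."""
    return (0.164 * abs(math.log10(Z / 0.02)) ** 0.7
            * L_over_Lsun ** 0.25
            * (Teff_K / 10.0 ** 3.6) ** 6)

# Example: a 1 Lsun star at Teff = 10**3.6 K compared at Z = 0.001.
print(delta_mass(0.001, 1.0, 10.0 ** 3.6))   # ~0.2 Msun
```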
Figure 11 shows the accretion tracks starting from the standard initial conditions of $`M_0=0.1M_{}`$ and $`R_0=3R_{}`$, together with the associated isochrones and equal-mass loci for $`Z=0.001`$. Figure 12 overlays these with the non-accreting tracks and corresponding isochrones. Similar comments can be made concerning the mass and age determinations as for $`Z=0.02`$ from figure 3. Note however that, at this low metallicity, all the stars have developed a radiative core before reaching $`2M_{}`$.
## 9 Conclusions
If the metallicity of a star forming region is known then the masses on the Hayashi tracks can be fairly accurately determined. As noted by Siess et al. (1997) accretion delays the formation of a radiative core, which consequently begins further down the Hayashi track at a given mass. However the locus of equal mass points will subsequently move to higher luminosities than a non-accreting star of the same mass. Thus the mass determined by comparison with the Henyey portion of any track can be either an under or an overestimate. Figure 13 shows the relative error that might be made in estimating the mass over the region of interest in the H–R diagram. In the Hayashi region accretion generally leads to an overestimate of the age in a comparison with non-accreting tracks while it can lead to an underestimate during the Henyey phase. At any time these errors in age could be a factor of two or more. Figure 14 illustrates the error distribution for age estimates. In addition, not knowing the zero-age mass and radius of the star can lead to an absolute offset in age of up to about $`10^6`$yr so that any age estimate of less than about $`2\times 10^6`$yr cannot be trusted. Ages larger than this can be significantly in error if accretion is taking place at an unknown rate but, once accretion has ceased, the age can be expected to correspond to non-accreting isochrones within a Kelvin–Helmholtz timescale. Thus the absolute error would be about equal to the age at which accretion became insignificant. In all other cases great care must be taken when estimating ages.
Apart from the initial comparatively small offsets, even if the initial conditions of a set of stars are known to be the same, relative ages are equally affected by accretion history. As a particular example, we may wish to decide whether two components of a pre-main-sequence binary star are coeval. If, as pre-main-sequence stars often do, one or both lies in the temperature range between $`10^{3.55}`$ and $`10^{3.75}`$K where the error in age is likely to be more than a factor of two we can expect a significant difference in estimated age even in a coeval system. This binary example can be extended to star clusters. Accretion can lead to an apparent mass-dependent age-spread in otherwise coeval systems when non-accreting pre-main-sequence tracks are used to estimate ages. At any time, if all the stars in a cluster are coeval and began with the same initial core mass, the low-mass stars must have accreted less and hence have lower disc-accretion rates than those of higher mass which must have accreted more material. Thus accretion does not greatly affect the age determination of low-mass stars while higher-mass stars are more affected. Comparison with non-accreting tracks makes these appear older while on Hayashi tracks and younger on the Henyey tracks (see figure 14). Thus, intermediate-mass pre-main-sequence stars can look older than their low-mass counterparts (by up to a factor of five) while yet higher-mass stars can appear younger again. Figure 15 illustrates this point by plotting the estimated age against the estimated mass for all points along the $`5\times 10^6`$yr isochrone fitted to our standard accreting tracks. Though this is a particular case for a particular set of models this qualitative behaviour would be true of any coeval sample that is still undergoing accretion. Indeed mass-dependent ages have been recorded in several young stellar clusters where the lowest-mass stars appear youngest with increasing ages for the intermediate mass stars and lower ages again for the higher mass stars (Hillenbrand 1997; Carpenter et. al. 1997). These mass-dependent ages may reflect ongoing disc-accretion rather than a dispersion in formation time and age determinations in these clusters should be reevaluated in the light of this work.
If the metallicity is not known the situation becomes even worse. For instance, as mentioned in the introduction, the metallicity of some extragalactic star forming regions, and possibly even Orion, may be as low as $`Z=0.001`$. This would lead to an overestimate in mass by a factor of two or more and an overestimate in age by about a factor of ten if a comparison were inadvertently made with solar metallicity tracks.
## ACKNOWLEDGMENTS
CAT and IAB are very grateful to PPARC for advanced fellowships. CAT also thanks NATO, the SERC and the University of California in Santa Cruz for a fellowship from 1990-1 when much of the foundation for this work was laid and the Space Telescope Science Institute for an eight month position during which it was continued. Many thanks go to Jim Pringle for mild but chronic goading and for many ideas and suggestions along the way.
# The relevance of center vortices Talk presented by Ph. de Forcrand
## 1 Non-perturbative effects in the center-projected theory
The standard approach to identify center vortices proceeds through gauge fixing. In , we fixed an $`SU(2)`$ ensemble to Direct Maximal Center (DMC) gauge, by iteratively maximizing
$$Q(\{U_\mu \})\equiv \sum _{x,\mu }\left(\text{Tr}U_\mu (x)\right)^2.$$
(1)
After factorizing gauge-fixed links as $`U_\mu (x)=\mathrm{sign}(\text{Tr}U_\mu (x))\times U_\mu ^{\prime }(x)`$ we studied the properties of the ensemble $`\{U_\mu ^{\prime }(x)\}`$, which by construction contains no center vortices. We showed that all non-perturbative features had disappeared: confinement, chiral symmetry breaking, and non-trivial topology. We have now looked at the center-projected theory $`\{\mathrm{sign}(\text{Tr}U_\mu (x))\}`$, to see if it inherits the non-perturbative properties of the original. As in , we observe that the string tension is consistent with its $`SU(2)`$ value. We measure the quark condensate $`\overline{\psi }\psi (m_q)`$ as a probe of chiral symmetry breaking. Fig.1 shows that it clearly extrapolates linearly to a non-zero value as $`m_q\to 0`$. Furthermore, it diverges as $`1/m_q`$ for very small quark masses, revealing the presence of a few extremely small eigenvalues which may be caused by the non-trivial topological content of the $`SU(2)`$ configuration. Note the similarity of Fig.1 with the quenched condensate observed with domain-wall fermions . However, the associated quasi-zero modes appear to be strongly localized and not chiral (i.e., $`\overline{\psi }\gamma _5\psi \approx 0`$).
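To make the projection step concrete, the short Python sketch below (an illustration only, not the code used in this study) builds a small random, un-gauge-fixed $`SU(2)`$ link configuration, takes the $`Z(2)`$ link variables $`\mathrm{sign}(\text{Tr}U_\mu (x))`$, and counts the negative $`Z(2)`$ plaquettes; for a gauge-fixed ensemble this count would give the $`P`$-vortex density. The lattice size and the random ensemble are assumptions made only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
L, D = 4, 4                      # illustrative 4^4 lattice

def random_su2():
    """Random SU(2) matrix built from a normalised quaternion a0 + i a.sigma."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

# link variables U[x0,x1,x2,x3,mu] stored as 2x2 matrices
U = np.empty((L, L, L, L, D, 2, 2), dtype=complex)
for idx in np.ndindex(L, L, L, L, D):
    U[idx] = random_su2()

# center projection: keep only the sign of the fundamental trace
Z = np.sign(np.trace(U, axis1=-2, axis2=-1).real)

# Z(2) plaquettes in the (0,1) plane; a value of -1 marks a pierced plaquette
plaq = (Z[..., 0]
        * np.roll(Z[..., 1], -1, axis=0)
        * np.roll(Z[..., 0], -1, axis=1)
        * Z[..., 1])
print("fraction of negative Z(2) plaquettes:", float(np.mean(plaq < 0)))
```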
## 2 Unambiguous center-vortex cores
The local implementation of the DMC gauge-fixing Eq.(1) is ambiguous, leading to many local maxima. The properties of $`P`$-vortices obtained from different Gribov copies can be dramatically different . Here we define an unambiguous gauge condition. Note first that DMC is equivalent to maximizing $`\sum _{x,\mu }\mathrm{Tr}_{\mathrm{adj}}U_\mu (x)`$ since $`\mathrm{Tr}_{\mathrm{adj}}U=2\left(\mathrm{Tr}U\right)^2-1`$. The idea is thus to smooth the center-blind, adjoint component of the gauge field as much as possible, then to read the center component off the fundamental gauge field. Therefore, Maximal Center Gauge is just another name for adjoint Landau gauge.
The problem of Gribov copies in the fundamental Landau gauge was solved in by computing the covariant Laplacian $`\mathrm{\Delta }_{xy}=2d\delta _{xy}-\sum _{\pm \widehat{\mu }}U_{\pm \widehat{\mu }}(x)\delta _{x\pm \widehat{\mu },y}`$ and its lowest-lying eigenvector $`\vec{v}`$. At each site, $`v(x)`$ has 2 complex color components. The Laplacian gauge condition consists of rotating $`v(x)`$ along direction $`(1,1)`$ at all sites. We follow this construction for the adjoint representation. The covariant Laplacian is now constructed from adjoint links $`U^{ab}=\frac{1}{2}\text{Tr}[U\sigma ^aU^{\dagger }\sigma ^b],a,b=1,2,3`$. It is a real symmetric matrix. The lowest-lying eigenvector $`\vec{v}`$ has 3 real components $`v_i,i=1,2,3`$ at each site $`x`$. One can apply a local gauge transformation $`g(x)`$ to rotate it along some fixed direction, e.g., $`\sigma _3`$. Note, however, that this does not specify the gauge completely: Abelian rotations around this reference direction are still possible. Here they are of the form $`e^{i\theta (x)\sigma _3}`$. What we have achieved at this stage is a variation of Maximal Abelian Gauge which is free of Gribov ambiguities. This Laplacian Abelian Gauge has been proposed in , which also shows that monopoles can not only be identified through the DeGrand-Toussaint procedure in the Abelian projected theory, but should be directly identifiable by the condition $`|v(x)|=0`$ for smooth fields. Abelian monopole worldlines appear naturally as the locus of ambiguities in the gauge-fixing procedure: the rotation to apply to $`v(x)`$ cannot be specified when $`|v(x)|=0`$.
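A minimal numerical sketch of this construction is given below (Python, illustration only): it forms the adjoint links $`U^{ab}`$, assembles the adjoint covariant Laplacian as a sparse real symmetric matrix on a small periodic lattice, and extracts its two lowest eigenvectors, which play the role of $`v`$ and $`v^{\prime }`$ used below. The lattice size, the random links and the plain nearest-neighbour discretization are assumptions of the sketch, not the settings of the actual study.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
L, D = 4, 4                                      # illustrative 4^4 lattice
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

def random_su2():
    a = rng.normal(size=4); a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

def adjoint(U):
    """U^{ab} = (1/2) Tr[ U sigma^a U^dagger sigma^b ] -- a real 3x3 matrix."""
    return np.real(np.array([[0.5 * np.trace(U @ sigma[a] @ U.conj().T @ sigma[b])
                              for b in range(3)] for a in range(3)]))

sites = list(np.ndindex(L, L, L, L))
index = {x: n for n, x in enumerate(sites)}
N = len(sites)
U = {(x, mu): random_su2() for x in sites for mu in range(D)}

# covariant Laplacian in the adjoint representation (3 real colour components/site)
lap = sp.lil_matrix((3 * N, 3 * N))
for x in sites:
    i = index[x]
    lap[3*i:3*i+3, 3*i:3*i+3] = 2 * D * np.eye(3)
    for mu in range(D):
        y = tuple((x[k] + (k == mu)) % L for k in range(D))   # forward neighbour
        j = index[y]
        A = adjoint(U[(x, mu)])
        lap[3*i:3*i+3, 3*j:3*j+3] = -A       # hop x -> x+mu
        lap[3*j:3*j+3, 3*i:3*i+3] = -A.T     # hop back (A is real orthogonal)

vals, vecs = spla.eigsh(lap.tocsr(), k=2, which='SA')
print("two lowest eigenvalues:", vals)        # eigenvectors give v and v'
```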
To fix to center gauge, we must go beyond Laplacian Abelian Gauge and specify the Abelian rotation $`e^{i\theta (x)\sigma _3}`$. This is done most naturally by considering the second-lowest eigenvector $`\vec{v}^{\prime }`$ of the adjoint covariant Laplacian, and requiring that the plane $`(v(x),v^{\prime }(x))`$ be parallel to, say, $`(\sigma _3,\sigma _1)`$ at every site $`x`$. This fixes the gauge completely, except where $`v(x)`$ and $`v^{\prime }(x)`$ are collinear. Collinearity occurs when $`\frac{v_1}{v_1^{\prime }}=\frac{v_2}{v_2^{\prime }}=\frac{v_3}{v_3^{\prime }}`$, i.e. 2 constraints must be satisfied. Thus, gauge-fixing ambiguities have codimension 2: in 4$`d`$, they are 2$`d`$ surfaces. They can be considered as the center-vortex cores for the following reasons:
First, note the analogy with fluid dynamics: in that context, a vortex refers to a helical flow, with the vortex core at the center of the helix. At the core, the centripetal acceleration vanishes, so that velocity and acceleration are collinear. Indeed, this collinearity condition has been used to identify vortex cores in 3$`d`$ fluid flow .
Consider now the intersection of our 2$`d`$ center-vortex core with some plane $`(\mu ,\nu )`$ at a point $`x_0`$. As one describes a small loop around $`x_0`$ in the plane $`(\mu ,\nu )`$, the rotation $`\theta (x)`$ necessary to maintain $`(v(x),v^{\prime }(x))`$ parallel to $`(\sigma _3,\sigma _1)`$ varies by $`2\pi `$, reflecting the gauge singularity at $`x_0`$. But this is a rotation of the adjoint field, so that the gauge rotation of the fundamental field as one goes around the small loop will be $`e^{i\frac{1}{2}2\pi \sigma _3}=-\mathrm{𝟏}`$. A small Wilson loop around $`x_0`$ will have trace $`-1`$ in the fundamental representation. This shows that center-vortex cores are aptly named, since they are indeed dual to $`-1`$ small Wilson loops. If the gauge field is smooth, these $`-1`$ loops will also be identified by the usual procedure, consisting of extracting $`\mathrm{sign}(\text{Tr}U_\mu (x))`$ and computing the $`Z(2)`$ plaquette. The so-called $`P`$-vortices constructed that way are indeed almost dual to the center-vortex cores, but not exactly. This is because they are obtained by a somewhat arbitrary, non-linear recipe. In our construction, unlike in DMC, the center-vortex cores where gauge fixing is ambiguous are the fundamental objects.
Our Laplacian Center Gauge solves the Gribov problem. It does not require an underlying lattice, but can be studied in the continuum like the original Laplacian gauge . And it exhibits the center-vortex cores as an intrinsic property of the gauge field, independently of the gauge condition chosen: one could specify to rotate $`v(x)`$ and $`v^{\prime }(x)`$ at every site along arbitrary, $`x`$-dependent directions rather than $`\sigma _3`$ and $`\sigma _1`$. The center-vortex cores will be unchanged.
Still, some arbitrariness remains in two respects:
$`(i)`$ Another discretization of the covariant Laplacian could be used, with higher-derivative, irrelevant terms. This will affect the location and density of the center-vortex cores.
$`(ii)`$ Another choice of covariant eigenvectors $`\vec{v},\vec{v}^{\prime }`$ could be made. While it seems natural to choose for $`\vec{v}`$ the lowest-lying eigenmode to maximize the smoothness of the gauge, the choice of $`\vec{v}^{\prime }`$ appears less crucial. The third eigenvector of the Laplacian could be taken as well, or even the second eigenvector of a Laplacian modified as per $`(i)`$. Under a different choice of $`\vec{v}^{\prime }`$, the center-vortex cores will change, but not the monopoles, defined by $`|v(x)|=0`$. The 1$`d`$ monopole worldlines remain embedded in the 2$`d`$ center-vortex cores. In fact, one may view the center-vortex cores as the sheet spanned by the Dirac string of the monopoles in the adjoint representation.
We have applied Laplacian Center Gauge fixing and center projection to an ensemble of $`SU(2)`$ configurations. As in , the string tension, the quark condensate and the topological charge all vanish upon removal of the $`P`$-vortices. The $`Z(2)`$ string tension is consistent with its $`SU(2)`$ value, and the $`Z(2)`$ quark condensate behaves as in Fig.1. The main difference is that the $`P`$-vortex density is higher than with DMC ($`11\%`$ vs $`5.5\%`$). The same effect was observed for the monopole density in Laplacian Abelian Gauge .
We have also applied our procedure to classical configurations. Fig.2 shows a cooled two-instanton configuration. Note the double loop of vortex cores, which shows interesting signs of self-intersection, as required in the continuum . The green area shows the $`Z(2)`$ action density, coming from $`P`$-vortices: while overall agreement is quite good between vortex cores and $`P`$-vortices, the latter are more sensitive to UV fluctuations and show some spurious structure. Fig.3 shows a caloron, i.e. a large instanton at finite temperature. As explained in , it decomposes into a pair of monopoles, identifiable by their action and topological charge densities. The center-vortex cores recognize these monopoles: they pierce them and $`|v|`$ vanishes precisely at their center, as shown by the change of color (red=$`v,v^{\prime }`$ parallel; blue=anti-parallel). Again the $`P`$-vortices give a slightly modified picture.
Finally, our procedure readily generalizes to $`SU(N)`$: complete gauge-fixing is achieved by rotating the first $`(N^2-2)`$ eigenvectors of the adjoint Laplacian along some reference directions. Ambiguities arise whenever these $`(N^2-2)`$ eigenvectors \[each with $`(N^2-1)`$ real components\] become linearly dependent. This again defines codimension-2 center-vortex cores.
# Non-linear electrical conduction and broad band noise in the charge-ordered rare earth manganate Nd0.5Ca0.5MnO3
\[
## Abstract
Measurements of the dc transport properties and the low-frequency conductivity noise in films of charge ordered Nd<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> grown on a Si substrate reveal the existence of a threshold field in the charge ordered regime beyond which strong non-linear conduction sets in along with a large broad band conductivity noise. The threshold-dependent conduction disappears as $`T\to T_{CO}`$, the charge ordering temperature. This observation suggests that the charge ordered state gets depinned at the onset of the non-linear conduction.
\]
Rare-earth manganites with a general chemical formula Re<sub>1-x</sub>Ae<sub>x</sub>MnO<sub>3</sub>( where Re is a trivalent rare-earth and Ae is a divalent alkaline earth cation) show a number of interesting phenomena like Colossal Magnetoresistance (CMR) and Charge Ordering (CO). These compounds belong to the ABO<sub>3</sub> type perovskite oxides where Re and Ae ions occupy the A site and Mn occupies the B site. It has been known for some time that these manganites (depending on the size of the average A-site cationic radius $`<r_A>`$ ) can charge order, for certain values of x. The nature of the CO state depends on the value of $`<r_A>`$ and it is stabilized if the value of $`<r_A>`$ is smaller.The CO transition is associated with a lattice distortion as well as orbital and spin ordering.
Recent experiments have established that the CO state is strongly destabilized by a number of different types of perturbations. An applied magnetic field of sufficient magnitude can lead to a collapse of the CO gap $`\mathrm{\Delta }_{CO}`$ and melting of the CO state . The CO phenomenon is stabilized by lattice distortion. A perturbation to the distortion can also destabilize the CO state . Recently it has been reported that application of an electric field , optical radiation , or x-ray radiation melts the CO state in Pr<sub>0.7</sub>Ca<sub>0.3</sub>MnO<sub>3</sub>. It is not clear, however, what causes destabilization of the CO state in these cases and whether the underlying mechanism is the same for all perturbations.
Electric field induced melting of the CO state leads to a strong non-linear conduction as seen in the bulk as well as in films . This raises a very important question: whether there is a threshold field associated with the non-linear conduction. In a driven system pinned by a periodic potential there exists a threshold force beyond which the system is depinned . If the system is charged and the driving force comes from an electric field then this shows up as a threshold field or bias for the onset of non-linear conduction. The existence of a threshold field would imply that the melting of the CO state by an applied electric field can actually be a depinning phenomenon. We investigated this in films of the CO system Nd<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> by careful measurement of field-dependent dc transport at various temperatures and also followed it up with a measurement of electrical noise (voltage fluctuation) as a function of applied dc bias. We made the following important observations:
(1) There indeed exists a threshold field ($`E_{th}`$) below the CO temperature $`T_{CO}`$ and for $`E>E_{th}`$ a strong nonlinear conduction sets in.
(2) $`E_{th}`$ strongly depends on $`T`$ and $`E_{th}\to 0`$ as $`T\to T_{CO}`$.
(3) For $`T<T_{CO}`$, a large voltage fluctuation ($`<\delta V^2>/V^2`$) appears at the threshold field. Both $`E_{th}`$ and $`<\delta V^2>/V^2`$ reach a maximum at $`T\approx `$ 90K ($`\approx 0.4T_{CO}`$).
(4) The spectral power distribution of the voltage fluctuation is broad band and has nearly 1/f character.
In Nd<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub>, a system with relatively small $`<r_A>`$, the CO transition takes place from a high temperature charge disordered insulating phase to a charge ordered insulating phase (COI). Charge ordering in this system has been studied by us in detail previously . Poly-crystalline films of Nd<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub> (average thickness $`\approx `$ 1000 nm) were deposited on Si(100) single crystal substrates by nebulized spray pyrolysis of organometallic compounds. The details of sample preparation and characterisation (including X-ray) have been given elsewhere . Contacts were made by sputtering gold on the films and connecting the current and voltage leads on the gold contacts by silver paint. The I-V characteristics were measured by dc current biasing and the voltage between the voltage leads was measured by a nano-voltmeter . For measuring the electrical noise, the fluctuating component of the voltage $`\delta V`$ was amplified by $`5\times 10^3`$ times by a low noise pre-amplifier. The output of the pre-amplifier was sampled by an ADC card and the data were directly transferred to the computer. The temperature was controlled to within 10 mK for both the measurements.
The films have a $`T_{CO}\approx `$ 250K as seen from the resistivity data. The resistivity was measured at a measuring current of 3 nA, which is much lower than the current where non-linear conductivity sets in. The experiment was conducted down to 80K where the sample resistance becomes more than 100M$`\mathrm{\Omega }`$, the limit of our detection electronics.
In figure 1, we show the typical I-V curves at a few characteristic temperatures. At all the temperatures (except that at 220K) there is a clear signature of a threshold voltage $`V_{th}`$ beyond which the current rises significantly, signalling the onset of strong nonlinear conduction. (The separation of electrodes in our experiment is 2 mm, so that $`E_{th}=5V_{th}`$ volts/cm). The I-V curves show two components of conduction: a normal component which exists at all V and a strongly non-linear component starting at $`V>V_{th}`$. The normal component, although not exactly linear in I-V, has much weaker non-linearity. We fit our I-V data using the following empirical expression which allows us to separate out the two components:
$$I=f_1(V-V_{th})+f_2(V)=C_1(V-V_{th})^{n_1}+C_2V^{n_2}$$
(1)
where $`f_1`$, a function of $`(V-V_{th})`$, is the component of current that has a threshold associated with it and $`C_1=0`$ for $`V<V_{th}`$. The component $`f_2`$ is the normal conduction component. $`C_1`$,$`C_2`$,$`n_1`$,$`n_2`$ are constants for a given temperature. The data at all temperatures can be well fitted to eqn.1 for $`T>`$ 90K as shown by the solid lines in figure 1. The dashed and dashed-dotted lines give the contributions of each of the terms. For $`T<`$ 90K certain additional features show up (see data at 81K) in the I-V data which give the impression that there may be multiple thresholds. In figure 2(a) we have plotted the threshold voltage $`V_{th}`$ as a function of $`T`$ as obtained from eqn.1. It can be seen that $`V_{th}\to 0`$ as $`T\to T_{CO}`$. Within the limitations of our detectability, we could see a finite nonzero $`V_{th}`$ up to $`T\approx 170K\approx 0.7T_{CO}`$. Beyond this temperature it is difficult to distinguish between the two conduction components.
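For completeness, the separation into the two components can be automated with a standard non-linear least-squares fit. The Python sketch below is only a schematic illustration of how equation (1) could be fitted: the data points are synthetic placeholders (the measured I-V curves are not reproduced here) and the starting values are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_component(V, Vth, C1, n1, C2, n2):
    """I(V) = C1*(V - Vth)^n1 for V > Vth  +  C2*V^n2, as in eqn. (1)."""
    above = np.clip(V - Vth, 0.0, None)      # threshold term is zero below Vth
    return C1 * above ** n1 + C2 * V ** n2

# synthetic stand-in data; real (V, I) points from figure 1 would go here
rng = np.random.default_rng(1)
V = np.linspace(0.5, 30.0, 60)
I = two_component(V, 12.0, 2e-9, 4.0, 5e-10, 1.2) * (1 + 0.02 * rng.normal(size=V.size))

p0 = [10.0, 1e-9, 3.0, 1e-9, 1.2]            # guesses for Vth, C1, n1, C2, n2
popt, _ = curve_fit(two_component, V, I, p0=p0, maxfev=20000)
print("fitted V_th = %.2f V, n1 = %.2f, n2 = %.2f" % (popt[0], popt[2], popt[4]))
```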
The relative contributions of $`f_1`$ and $`f_2`$ to the total current (expressed as the ratio $`f_1/f_2`$ evaluated at $`I=1\mu A`$) has been plotted as a function of T in figure 2(b). At $`T<<T_{CO}`$, the non-linear component is orders of magnitude larger than the normal conduction component and they become comparable as $`T\to T_{CO}`$. The exponent $`n_1`$ is strongly temperature dependent and from a value $`\approx 2`$ at 160K it reaches a value of more than 5 at $`T\approx 100K`$. The exponent $`n_2`$ does not have much of a temperature dependence and is $`\approx 1.1`$–$`1.4`$ for $`T\le 180K`$.
In pinned driven systems one often sees the onset of broad band noise as the system is depinned at the threshold voltage . We find that such is indeed the case in this system. In figure 3 we show the magnitude of the voltage fluctuation $`<\mathrm{\Delta }V^2>/V^2`$ as a function of the applied bias V at $`T=100K`$ along with the I-V curve. The arrow indicates $`V_{th}`$. It is clear that the voltage fluctuation has a non-monotonic dependence on V and reaches a peak at $`V\approx V_{th}`$. This fluctuation has been seen at all $`T<0.7T_{CO}`$ where we can detect a measurable $`V_{th}`$. The peak values of the fluctuation measured at different T are shown in figure 2(c). The fluctuation $`\to 0`$ as $`T\to T_{CO}`$ and has a peak at 90K where $`V_{th}`$ also shows a peak.
Frequency dependences of the spectral power $`S_V(f)`$ measured at 100K with biases V<V<sub>th</sub>, V$`\approx `$V<sub>th</sub> and V>V<sub>th</sub> are shown in figure 4 . We have plotted the data as $`f.S_V/V^2`$ vs. $`f`$. For a pure 1/f noise ($`S_V\propto `$ 1/f), this should be a straight line parallel to the f-axis. It can be seen that the predominant contribution to noise has 1/f character. In addition, there is another broad band contribution riding on the main 1/f contribution. At higher V the spectrum becomes more purely 1/f in nature.
The onset of strong non-linear conduction at a threshold voltage and the accompanying broad band noise have been seen in solids like NbSe<sub>3</sub>,TaS<sub>3</sub> which show depinning of charge density waves (CDW) by a threshold field . Though the physics of the CDW and CO states are entirely different, the underlying phenomenological description of depinning can be similar. Electron diffraction (ED) and electron microscopy studies on a CO system (La<sub>0.5</sub>Ca<sub>0.5</sub>MnO<sub>3</sub>) have shown that the CO is associated with formation of stable pairs of Mn<sup>3+</sup>O<sub>6</sub> stripes. The Mn<sup>3+</sup>O<sub>6</sub> octahedra in the stripes are strongly distorted by the Jahn-Teller (JT) distortion . It is possible that the stability of the CO system depends on the stability of the stripes which can be pinned. The strongly JT distorted pairs of the Mn<sup>3+</sup>O<sub>6</sub> octahedra can act as periodic pinning sites due to the local strain field. From our data for $`T<`$ 90K, it seems there are changes occurring below 90K. We are not clear about the nature of these changes. We only note that in magnetic studies we found that strong irreversibility sets in below 80K .
To conclude, the present study demonstrates that there is a threshold field associated with the onset of non-linear conduction in the CO system along with the existence of a broad band noise. The observation is taken as evidence of depinning of CO state as the origin of non-linear conduction in these solids.
FIGURE CAPTIONS
(1) FIG. 1. I-V curves at different temperatures, solid line shows the total I, dashed and dashed-dotted lines show the components f<sub>1</sub> and f<sub>2</sub>.
(2) FIG. 2 Temperature variation of (a) resistivity, (b) magnitude of threshold voltage, (c) relative contributions of f<sub>1</sub> and f<sub>2</sub> and (d) noise magnitude at the threshold voltage
(3) FIG. 3 The noise magnitude and I-V characteristics at 100K. The arrow indicates the threshold voltage.
(4) FIG. 4 Frequency spectrum of the noise at 100K for different bias values.
# Numerical study on Anderson transitions in three-dimensional disordered systems in random magnetic fields
## I Introduction
Since the pioneering work by Anderson, the metal-insulator transition driven by disorder, which is called the Anderson transition(AT), has attracted much attention for many years. The critical behavior of the AT is conventionally classified, depending on the symmetry of hamiltonians, into three universality classes: the orthogonal, the unitary and the symplectic classes. Systems invariant under spin rotation as well as time reversal form the orthogonal class. The unitary class is characterized by the absence of the time reversal symmetry. Systems invariant under time reversal but having no spin rotation symmetry belong to the symplectic class.
In the last decade, there has been considerable progress in the numerical study of the AT in three dimensions(3D) by the finite-size scaling analysis for quasi-1D systems . In the early stage, it was not easy to confirm numerically for the 3D orthogonal class that the critical exponent is insensitive to the choice of the probability distribution of random potential . This discrepancy in exponents for different distributions of random potential has been removed by improving the accuracy of numerical calculations and by taking into account the corrections to scaling . With such a high-accuracy analysis, it has been concluded that the critical exponent for the orthogonal system can be distinguished from that for the unitary system. These recent developments confirm the universality of critical exponents as well as the validity of the conventional classification of universality classes in AT. It should be noted, however, that in most cases, such analyses have been restricted to the AT near the band center in the presence of a random scalar potential, where the scaling analysis works fairly well. In contrast, for the AT away from the band center, no systematic scaling behavior has been observed.
The AT in a magnetic field has been studied extensively, mainly in connection with the quantum Hall effect. Accordingly, in most cases, the magnetic field was assumed to be uniform in space and the disorder was introduced by a random scalar potential. On the other hand, in recent years, there has also been considerable interest in the transport properties of a system subject to a spatially random magnetic field. The random magnetic field introduces randomness as well as the absence of invariance under time reversal in a system. In fact, it has been shown that in 3D the AT occurs in the presence of the random magnetic field and without a random scalar potential.
The AT in a random magnetic field is driven by the coherent scattering due to a fluctuating vector potential. A nontrivial feature of this coherent scattering by a fluctuating vector potential has been pointed out in a theory of strongly correlated spin systems. Much work has also been done on transport properties in 2D in a random magnetic field, in particular in connection with the theory of the fractional quantum Hall effect in a high magnetic field. It is thus an important issue to understand how the effect of coherent scattering in a strongly fluctuating random vector potential will show up in the AT.
The magnetic field breaks the time reversal symmetry and thus all systems in the magnetic field should belong to the unitary class. In fact, it has been demonstrated numerically in 3D that in the presence of a random scalar potential, the critical exponent takes a universal value, irrespective of whether the magnetic field is uniform or random. The AT with a random potential and in a uniform magnetic field has been re-analyzed recently and the critical exponent for the localization length has been determined to be $`1.43\pm 0.06`$.
The AT, in 3D, in the presence of a random vector potential and without a random scalar potential, has also been investigated based on the finite-size scaling. The data suggested that the mobility edge is very close to the band edge. The exponent for the localization length has been estimated to be $`\nu \approx 1`$ which is considerably smaller than that in the case with an additional random scalar potential and in a uniform magnetic field. This seemed to indicate that in 3D the AT driven solely by a random vector potential might exhibit critical behavior different from that observed in other unitary systems, for example systems with an additional random scalar potential. Apparently, this questions the validity of the conventional classification of universality classes in AT. On the other hand, it should be recalled that the finite-size scaling analysis did not work for the AT near the effective band edge. It is thus important to re-examine the applicability of the scaling ansatz to the AT driven solely by the random magnetic field in which the mobility edge lies quite close to the band edge.
In this paper, we report on a high-precision numerical finite-size scaling analysis for the AT in the random magnetic field. In order to clarify the origin of the above mentioned discrepancy between the critical exponent of the AT far away from the band center induced solely by randomness in a vector potential and the exponent obtained for other unitary systems, we have considered systems both with and without an additional random potential. We also evaluate the fractal dimension of the wave functions at the critical point based on the equation-of-motion method.
The paper is organized as follows. In the next section, the hamiltonian which we adopt is introduced. The finite-size scaling study on the critical phenomena is presented in section 3. In section 4, the fractal dimensionality of the wave function is discussed by means of the equation-of-motion method. Section 5 is devoted to summary and discussion.
## II Model
The model is defined by the Hamiltonian
$$H=V\sum _{<i,j>}\mathrm{exp}(\mathrm{i}\theta _{i,j})C_i^{\dagger }C_j+\sum _i\epsilon _iC_i^{\dagger }C_i,$$
(1)
where $`C_i^{\dagger }`$ ($`C_i`$) denotes the creation (annihilation) operator of an electron at the site $`i`$ of a 3D cubic lattice. Energies $`\{\epsilon _i\}`$ denote the random scalar potential distributed independently and uniformly in the range $`[-W/2,W/2]`$. The Peierls phase factors $`\mathrm{exp}(\mathrm{i}\theta _{i,j})`$ describe a random vector potential or magnetic field. We confine ourselves to phases $`\{\theta _{i,j}\}`$ which are distributed independently and uniformly in $`[-\pi ,\pi ]`$. The hopping amplitude $`V`$ is taken as the energy unit, $`V=1`$. The phases $`\{\theta _{i,j}\}`$ are related to the magnetic flux, for example, as
$$\theta _{i,i+\widehat{x}}+\theta _{i+\widehat{x},i+\widehat{x}+\widehat{y}}+\theta _{i+\widehat{x}+\widehat{y},i+\widehat{y}}+\theta _{i+\widehat{y},i}=2\pi \varphi _i/\varphi _0,$$
(2)
where $`\varphi _i`$ and $`\varphi _0=hc/|e|`$ denote the magnetic flux through the plaquette $`(i,i+\widehat{x},i+\widehat{x}+\widehat{y},i+\widehat{y})`$ and the unit flux, respectively. Here $`\widehat{x}(\widehat{y})`$ stands for the unit vector in the $`x(y)`$-direction. Note that in the present system, the condition that the magnetic flux through a closed surface is zero is satisfied.
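As an illustration of the model (not the production code used for the transfer-matrix study), the Python sketch below assembles the Hamiltonian (1) on a small periodic cubic lattice with independently drawn random phases and site energies and diagonalizes it; the lattice size and the dense diagonalization are choices made only for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(2)
L, W, V = 6, 1.0, 1.0                      # illustrative size, disorder, hopping

sites = list(np.ndindex(L, L, L))
index = {s: n for n, s in enumerate(sites)}
N = len(sites)

H = np.zeros((N, N), dtype=complex)
H[np.diag_indices(N)] = rng.uniform(-W / 2, W / 2, size=N)   # random potential
for s in sites:
    for mu in range(3):                                      # three forward bonds
        t = tuple((s[k] + (k == mu)) % L for k in range(3))  # periodic neighbour
        theta = rng.uniform(-np.pi, np.pi)                   # random Peierls phase
        H[index[s], index[t]] += V * np.exp(1j * theta)
        H[index[t], index[s]] += V * np.exp(-1j * theta)

E = np.linalg.eigvalsh(H)                                    # Hermitian spectrum
print("band edges:", E.min(), E.max())
```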
## III Finite-Size Scaling Study
We consider quasi-1D systems with cross section $`M\times M`$ . The Schrödinger equation $`H\psi =E\psi `$ in such a bar-shaped system can be rewritten using transfer matrices $`T_n(2M^2\times 2M^2)`$
$$\left(\begin{array}{c}\psi _{n+1}\\ \psi _n\end{array}\right)=T_n\left(\begin{array}{c}\psi _n\\ \psi _{n-1}\end{array}\right),T_n=\left(\begin{array}{cc}E-H_n& -I\\ I& 0\end{array}\right)$$
(3)
($`n=1,2,\dots `$) where $`\psi _n`$ and $`H_n`$ denote the set of coefficients of the state $`\psi `$ and the Hamiltonian of the $`n`$th slice, respectively. The identity matrix is denoted by $`I`$. The off-diagonal parts of the transfer matrix $`T_n`$ can be expressed by the identity matrix because the phases in the transfer-direction can be removed by a gauge transformation . The logarithms of the eigenvalues of the limiting matrix $`T`$
$$T\equiv \underset{n\to \infty }{\mathrm{lim}}\left[\left(\prod _{i=1}^nT_i\right)^{\dagger }\left(\prod _{i=1}^nT_i\right)\right]^{1/2n}$$
(4)
are called the Lyapunov exponents. The smallest Lyapunov exponent $`\lambda _M`$ along the bar is estimated by a technique which uses the product of these transfer matrices . The relative accuracies for the smallest Lyapunov exponents achieved here are $`0.2\%`$ for $`M\le 10`$ and $`0.25\%`$–$`0.3\%`$ for $`M=12`$. The localization length $`\xi _M`$ along the bar is given by the inverse of the smallest Lyapunov exponent, $`\xi _M=1/\lambda _M`$.
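The transfer-matrix estimate of $`\lambda _M`$ can be sketched in a few lines. The Python code below is a schematic version only: it assumes a small cross section, a short bar and an arbitrary reorthogonalization interval, propagates the full set of $`2M^2`$ vectors with periodic QR factorization, and reads off the smallest positive Lyapunov exponent; the production calculations behind the quoted accuracies require far longer bars and careful error control.

```python
import numpy as np

rng = np.random.default_rng(3)
M, W, E, V = 3, 18.8, 0.0, 1.0        # illustrative cross-section and parameters
n_slices, reortho = 2000, 10
Nc = M * M

def slice_hamiltonian():
    """Hamiltonian of one M x M slice with random phases and potentials."""
    H = np.zeros((Nc, Nc), dtype=complex)
    H[np.diag_indices(Nc)] = rng.uniform(-W / 2, W / 2, size=Nc)
    for x in range(M):
        for y in range(M):
            i = x * M + y
            for dx, dy in ((1, 0), (0, 1)):           # transverse forward bonds
                j = ((x + dx) % M) * M + (y + dy) % M
                theta = rng.uniform(-np.pi, np.pi)
                H[i, j] += V * np.exp(1j * theta)
                H[j, i] += V * np.exp(-1j * theta)
    return H

Q = np.eye(2 * Nc, dtype=complex)
log_r = np.zeros(2 * Nc)
for n in range(n_slices):
    Hn = slice_hamiltonian()
    T = np.block([[E * np.eye(Nc) - Hn, -np.eye(Nc)],
                  [np.eye(Nc), np.zeros((Nc, Nc))]])
    Q = T @ Q
    if (n + 1) % reortho == 0:                       # reorthogonalize and record growth
        Q, R = np.linalg.qr(Q)
        log_r += np.log(np.abs(np.diag(R)))

lyap = np.sort(log_r / n_slices)
lam_min = lyap[lyap > 0].min()                       # smallest positive exponent
print("Lambda_M = xi_M / M =", 1.0 / lam_min / M)
```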
The assumption of one-parameter scaling for the renormalized localization length $`\mathrm{\Lambda }_M\equiv \xi _M/M`$ implies
$$\mathrm{\Lambda }_M=f(\xi /M),$$
(5)
where $`\xi =\xi (E,W)`$ is the relevant length scale in the limit $`M\to \infty `$. Near the mobility edge $`E_c(W)`$, $`\xi `$ diverges with an exponent $`\nu `$ as $`\xi \propto |x|^{-\nu }`$ with $`x=(E-E_c)/E_c`$. If the transition is driven by the disorder $`W`$ at a constant energy, $`x=(W_c-W)/W_c`$. At the mobility edge, $`\mathrm{\Lambda }_M`$ becomes scale-invariant. The quantity $`\mathrm{\Lambda }_M`$ is a smooth function of $`E`$ and $`W`$, and we can expand it as a function of $`x`$ as
$$\mathrm{\Lambda }_M=\mathrm{\Lambda }_c+\sum _{n=1}^{\infty }A_n(M^{1/\nu }x)^n.$$ (6)
By fitting our data to the above function, we can determine the critical exponent $`\nu `$ and the mobility edge accurately. In practice, we truncated the series (6) at the third order$`(n=3)`$ and used the standard $`\chi ^2`$-fitting procedure. The error bars are estimated by using the Hessian matrix and the confidence interval is chosen to be $`95.4\%`$.
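The fit itself is a standard non-linear least-squares problem. The Python sketch below illustrates the truncated form of equation (6) with a $`\chi ^2`$ fit; the $`(M,W,\mathrm{\Lambda }_M)`$ points are synthetic stand-ins for the transfer-matrix data and the error estimate via the covariance matrix replaces the Hessian-based procedure described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_form(X, Lc, Wc, nu, A1, A2, A3):
    """Truncated expansion (6): Lambda = Lc + sum_n A_n (M^(1/nu) x)^n, x=(Wc-W)/Wc."""
    M, W = X
    u = M ** (1.0 / nu) * (Wc - W) / Wc
    return Lc + A1 * u + A2 * u ** 2 + A3 * u ** 3

# synthetic (M, W, Lambda_M) points standing in for the measured data
rng = np.random.default_rng(4)
M = np.repeat([6.0, 8.0, 10.0, 12.0], 21)
W = np.tile(np.linspace(17.8, 19.8, 21), 4)
Lam = scaling_form((M, W), 0.558, 18.80, 1.45, 1.0, 0.3, 0.05)
Lam = Lam * (1 + 0.003 * rng.normal(size=Lam.size))          # 0.3% scatter

p0 = [0.55, 18.8, 1.4, 1.0, 0.0, 0.0]
popt, pcov = curve_fit(scaling_form, (M, W), Lam, p0=p0, sigma=0.003 * Lam)
err = np.sqrt(np.diag(pcov))
print("W_c = %.3f +- %.3f,  nu = %.3f +- %.3f" % (popt[1], err[1], popt[2], err[2]))
```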
For the transition at the band center in the presence of a strong random scalar potential, a clear scaling has been observed for presently achievable sizes, $`6\le M\le 12`$. In fact, all the data (84 points) for $`M=6,8,10`$, and $`12`$ in the range $`17.8\le W\le 19.8`$ can be successfully fitted by the fitting function (6) up to the 3rd order, which has six fitting parameters including the critical point and the critical exponent. We have estimated the critical disorder and the exponent $`\nu `$ to be $`W_c=18.80\pm 0.04`$ and $`\nu =1.45\pm 0.09`$ . The renormalized localization length $`\mathrm{\Lambda }_c`$ at the critical point is $`0.558\pm 0.003`$. The error bars of these estimations are at least a factor of 3 smaller than those of the previous estimates .
In contrast, in the absence of the random scalar potential ($`W=0`$) or in the presence of an additional weak random scalar potential ($`W=1`$), for which the critical point lies near the band edge, we have found that the correction to scaling is not negligible. Near the band edge, the density of states changes rapidly as a function of energy. We have thus performed high-accuracy transfer matrix calculations for a narrower energy range $`|E-E_c|\le 0.025`$ around the critical point for $`W=0`$ and $`W=1`$ . In both cases ($`W=0`$ and $`W=1`$), we have found that the estimate of the critical exponent tends to increase with the system size. In order to extrapolate the critical exponent for $`W=0`$, we have made calculations for larger system sizes $`M=14`$ and $`M=16`$. Here we show, in Table I, the summary of the results for $`W=0`$ obtained by the fittings with different sizes up to $`M=16`$. The relative accuracy in $`\xi _M^{-1}`$ achieved for $`M=14`$ and $`M=16`$ is 1% for each sample, and 7 and 5 realizations of random phases are considered, respectively. The scaling regime is assumed to be $`[4.39,4.44]`$ as in ref. REFERENCES. It is clear that the critical point exists around $`E_c\approx 4.41`$ (see figure 1). In Table I, we can see that the exponent $`\nu `$ tends to increase with the system size and is likely to saturate around $`\nu \approx 1.48`$. Within the error bars, the estimated values of $`\nu `$ for $`M\ge 12`$ are consistent with $`1.45\pm 0.09`$ obtained for the band center as well as $`1.43\pm 0.06`$ estimated in the uniform magnetic field . No evidence has been found for $`\nu \approx 1`$ which was suggested by calculations with low accuracy . The present results support the universality of the critical exponent in the unitary systems. The positions of the critical points and the values of $`\mathrm{\Lambda }_c`$ estimated with different combinations of system sizes are fluctuating for $`M\ge 12`$ (Table I). The value of $`\mathrm{\Lambda }_c=0.558\pm 0.003`$ at the band center seems to lie inside the range of this fluctuation. Conventionally, the value of $`\mathrm{\Lambda }_c`$ is also expected to be universal in unitary systems. Our results obtained here seem to be consistent with this universality of $`\mathrm{\Lambda }_c`$.
The mobility edge trajectory in the presence of the random magnetic field is shown in figure 2. Each critical point (mobility edge) is estimated based on numerical data obtained by the transfer matrix method with $`M=6`$–$`10`$. It should be noted that there exist extended states for energies larger than the critical energy $`E_c\approx 4.41`$ for $`W=0`$. This type of reentrant phenomena in the energy-disorder plane has been commonly observed for systems with the uniform distribution of random scalar potential . It is interpreted that the enhancement of extended states for a weak additional random scalar potential is due to the enhancement of the density of states in that energy regime.
## IV Equation-of-Motion Method
We now turn our attention to the properties of the wave function just at the AT in random magnetic fields. It is well known that at the AT, the wave function shows a multifractal structure which leads to the scale invariant behavior of conductance distributions and the energy level statistics.
The direct way to investigate the wave functions is to diagonalize the Hamiltonian. This, however, is numerically very intensive. Instead, we calculate here the time evolution of wave packets to extract the information on the fractal dimension. We first prepare the initial wave packet $`|0\rangle `$ close to the AT by diagonalizing a small cluster located at the center of the system. The time evolution of the state at time $`t`$ is then obtained by
$`|t+\mathrm{\Delta }t\rangle =U(\mathrm{\Delta }t)|t\rangle `$
where $`U(\mathrm{\Delta }t)`$ is the time evolution operator. In order to perform the numerical calculation efficiently, we approximate $`U(\mathrm{\Delta }t)`$ by products of exponential operators as
$$U(\mathrm{\Delta }t)=\mathrm{e}^{-\mathrm{i}H\mathrm{\Delta }t/\mathrm{\hbar }}=U_2(p\mathrm{\Delta }t)U_2((1-2p)\mathrm{\Delta }t)U_2(p\mathrm{\Delta }t)+\mathrm{O}(\mathrm{\Delta }t^5)$$
(7)
with $`p=(2-2^{1/3})^{-1}`$ and
$$U_2(\mathrm{\Delta }t)\equiv \mathrm{e}^{-\mathrm{i}H_1\mathrm{\Delta }t/2\mathrm{\hbar }}\mathrm{\cdots }\mathrm{e}^{-\mathrm{i}H_{q-1}\mathrm{\Delta }t/2\mathrm{\hbar }}\mathrm{e}^{-\mathrm{i}H_q\mathrm{\Delta }t/\mathrm{\hbar }}\mathrm{e}^{-\mathrm{i}H_{q-1}\mathrm{\Delta }t/2\mathrm{\hbar }}\mathrm{\cdots }\mathrm{e}^{-\mathrm{i}H_1\mathrm{\Delta }t/2\mathrm{\hbar }},$$
(8)
where $`H_1,\dots ,H_q`$ are a decomposition of the original Hamiltonian $`H=\sum _iH_i`$ into pieces which are simple enough to diagonalize analytically.
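The composition (7)–(8) is generic: once the pieces $`H_k`$ can each be exponentiated exactly, the full step is just a nested product of those elementary exponentials. The Python sketch below (with $`\mathrm{\hbar }=1`$) shows only this bookkeeping; the actual decomposition of the lattice Hamiltonian into analytically diagonalizable pieces (e.g. a checkerboard of bonds) is assumed and not spelled out, and the trivial diagonal example at the end is just a placeholder.

```python
import numpy as np

def apply_U2(psi, step_list, dt):
    """Second-order symmetric product, equation (8), with hbar = 1.

    step_list = [f_1, ..., f_q]; each f_k(psi, t) applies exp(-i H_k t)
    exactly to psi (each H_k is assumed simple enough to exponentiate analytically).
    """
    for f in step_list[:-1]:
        psi = f(psi, dt / 2)
    psi = step_list[-1](psi, dt)
    for f in reversed(step_list[:-1]):
        psi = f(psi, dt / 2)
    return psi

def apply_U4(psi, step_list, dt):
    """Fourth-order composition, equation (7), with p = (2 - 2**(1/3))**(-1)."""
    p = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    psi = apply_U2(psi, step_list, p * dt)
    psi = apply_U2(psi, step_list, (1.0 - 2.0 * p) * dt)
    psi = apply_U2(psi, step_list, p * dt)
    return psi

# tiny usage example: two commuting diagonal pieces (a trivial stand-in decomposition)
h1 = np.array([0.3, -0.1]); h2 = np.array([0.2, 0.4])
steps = [lambda psi, t, h=h1: np.exp(-1j * h * t) * psi,
         lambda psi, t, h=h2: np.exp(-1j * h * t) * psi]
psi0 = np.array([1.0, 0.0], dtype=complex)
print(apply_U4(psi0, steps, 0.1))
```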
The square displacement of the wave packets is defined by
$`r^2(t)=\langle t|r^2|t\rangle .`$
In the metallic phase, $`r^2(t)`$ is proportional to $`Dt`$ where $`D`$ is the diffusion coefficient. In the insulating phase, it saturates to the square of the localization length, $`\xi ^2`$. At the AT, the anomalous diffusion
$`r^2(t)\propto t^{2/d}=t^{2/3}`$
is expected. The fractal dimension $`D_2`$ is estimated from the autocorrelation function
$`C(t)={\displaystyle \frac{1}{t}}{\displaystyle \int _0^t}dt^{\prime }|\langle t^{\prime }|0\rangle |^2`$
where $`C(t)`$ is expected to decay as
$`C(t)\propto t^{-D_2/d}.`$
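The extraction of $`D_2`$ from the decay of $`C(t)`$ amounts to a power-law fit of the running time average of the return probability. The Python sketch below only illustrates that step: the overlaps $`|\langle t^{\prime }|0\rangle |^2`$ are replaced by a synthetic power law with noise rather than by actual wave-packet data, and the fitting window is an assumption.

```python
import numpy as np

d = 3
t = np.arange(1, 401, dtype=float)                    # times in units of hbar/V
rng = np.random.default_rng(5)
overlap2 = t ** (-1.52 / d) * (1 + 0.05 * rng.normal(size=t.size))  # stand-in |<t|0>|^2

# running time average C(t) = (1/t) * integral_0^t |<t'|0>|^2 dt'  (trapezoidal rule)
C = np.array([np.trapz(overlap2[:k + 1], t[:k + 1]) / t[k] for k in range(t.size)])

# fit C(t) ~ t^(-D2/d) on the asymptotic part of the curve, here t > 40
mask = t > 40
slope, _ = np.polyfit(np.log(t[mask]), np.log(C[mask]), 1)
print("estimated D2 =", -slope * d)
```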
In Fig. 3, we show the results of $`C(t)`$ for the transition at the center of the band in the presence of a strong random scalar potential ($`W=18.8V`$). By diagonalizing a small cluster of $`7\times 7\times 7`$ sites located at the center of the system, we follow the time evolution of wave packets in $`101\times 101\times 101`$ systems. Geometric averages of $`C(t)`$ over 10 random field and potential configurations are performed. By fitting the data for $`t>40\mathrm{\hbar }/V`$, the fractal dimensionality $`D_2`$ is estimated to be
$`D_2=1.52\pm 0.18`$
considerably smaller than the space dimension $`d=3`$. The above value is consistent with the estimate for the 3D system at the AT in a strong uniform magnetic field.
## V Discussions
In summary, we have investigated in detail the AT in a random magnetic field based on the transfer matrix method with considerably high accuracy. In particular, whether or not the AT driven solely by the random vector potential ($`W=0`$) exhibits critical behavior different from other unitary systems has been discussed. In order to clarify the above point, we have performed the scaling analysis for the three critical points, namely $`E=0`$, $`W=0`$, and $`W=1`$ (figure 2). For the transition at the band center ($`E=0`$) in the presence of a strong additional random potential, a clear scaling behavior has been observed and the exponent $`\nu `$ has been estimated to be $`1.45\pm 0.09`$. This coincides with the value obtained for a unitary system in a uniform magnetic field . It has been found, on the other hand, that the correction to scaling is not negligible in the presently achievable sizes for the transitions near the band edge ($`W=1`$ and $`W=0`$). The exponents estimated for $`W=0`$ from larger system sizes are consistent with those obtained for other unitary systems within the error bars. From the size dependence of $`\nu `$, in contrast to the suggestion in ref. REFERENCES, no evidence has been found for $`\nu \approx 1`$. These results indicate the universality of $`\nu `$ in the unitary class and hence support the conventional classification of the AT by universality classes due to symmetry.
The mobility edge trajectory has also been obtained in the presence of the random magnetic field. Its qualitative shape turns out to be similar to those obtained for other systems with a uniform distribution of random scalar potential.
We have also studied the diffusion of electrons at the AT in the presence of a random magnetic field. By solving the time-dependent Schrödinger equation numerically, we examine the time evolution of wave packets at the AT. From the asymptotic behavior of the autocorrelation function, we have extracted the fractal dimensionality of the critical wave function at the band center.
## Acknowledgments
The authors thank M. Batsch, A. MacKinnon, I. Zharekeshev and K. Slevin for valuable discussions. The numerical calculations were performed on a FACOM VPP500 of Institute for Solid State Physics, University of Tokyo and in computer facilities of I. Institut für Theoretische Physik, Universität Hamburg. This work was supported in part by the EU-project FHRX-CT96-0042 and by the Deutsche Forschungsgemeinschaft via Project Kr627/10 and the Graduiertenkolleg “Nanostrukturierte Festkörper”. One of the authors (T.K.) thanks Alexander von Humboldt Foundation for financial support during his stay at University of Hamburg where the present work has been started.
# On a Map From Pure Braids To Knots
## 1. Definition and properties of the short-circuit map.
We define the “short-circuit” map $`𝒮_n`$ from the pure braid group on $`2n+1`$ strands $`𝒫_{2n+1}`$ to the monoid of the isotopy classes of oriented knots $`𝒦`$ as pictured on Figure 1. The strands of the braid are joined together in turn at the bottom and at the top.
We think of knots as non-compact, or “long” knots here. These maps are compatible with the inclusions $`𝒫_{2n+1}\subset 𝒫_{2n+3}`$ so they extend to a map $`𝒮:𝒫_{\infty }\to 𝒦.`$ Here by $`𝒫_{\infty }`$ we understand the inductive limit of the sequence of inclusions $`𝒫_i\subset 𝒫_{i+1}`$.
The construction and, as we will see later, some properties of the map $`𝒮`$ resemble those of the plat closure which sends braids with even number of strands to links. (For the definition and properties of the plat closure see \[B1, B2\].) Indeed, if $`t_n`$ denotes the $`2n`$-strand braid pictured on Figure 2, then for any $`x\in 𝒫_{2n+1}`$ the (unoriented) knot $`𝒮(x)`$ is equivalent to the knot obtained by taking the image of $`x`$ in $`𝒫_{2n+2}`$ under the standard inclusion, multiplying by $`t_{n+1}`$ on the left (i.e. on the top) and taking the plat closure.
However, if we are interested in knots rather than links the map $`𝒮`$ is more convenient than the plat closure. The most obvious difference is the behaviour under stabilization maps and tensor products. Adding two unbraided strands to a braid changes its image under the plat closure by adding an unknotted and unlinked component, while the image of the short-circuit map does not change. As for tensor (external) products, the plat closure sends a product of braids to the distant union of their plat closures, while under short-circuiting the tensor product of braids is sent to the connected sum of the corresponding knots.
To make the last statement more precise, we may define the tensor product of two pure braids with odd numbers of strands as follows.
Let $`i:𝒫_{2n+1}\to 𝒫_{2(n+m)+1}`$ be the standard inclusion onto the first $`2n+1`$ strands and $`i^{\prime }:𝒫_{2m+1}\to 𝒫_{2(n+m)+1}`$ be the inclusion onto the last $`2m+1`$ strands. Then we can define a product
$$𝒫_{2n+1}\times 𝒫_{2m+1}\to 𝒫_{2(n+m)+1}$$
by sending a pair $`(b_1,b_2)`$, where $`b_1\in 𝒫_{2n+1}`$ and $`b_2\in 𝒫_{2m+1}`$ to
$$i(b_1)i^{\prime }(b_2)\in 𝒫_{2(n+m)+1}.$$
With this definition it is clear that
$$𝒮_n(b_1)\mathrm{\#}𝒮_m(b_2)=𝒮_{n+m}(b_1b_2).$$
The restriction to an odd number of strands is by no means crucial. If $`b\in 𝒫_{2n}`$ we can define an analogue of the short-circuit map as a suitably oriented plat closure of the braid $`t_nb`$. This definition is equally good for the purposes of our paper and has certain advantages. Namely, this version of the short-circuit closure respects the usual tensor product of braids; also, in this set-up Theorem 1 below becomes tautological.
Nevertheless, we prefer to work with braids on odd number of strands. It follows from Theorem 1 that any knot which can be realized as a plat closure of a $`2n`$-stranded braid can be obtained by short-circuiting some pure braid on $`2n1`$ strands. This generalizes the well-known fact that a 2-bridge knot can be represented by a braid in $`𝒫_3`$. In this sense, the short-circuit map for $`𝒫_{\mathrm{odd}}`$ is more “economic”. We repeat, however, that in our context this is a matter of taste.
### 1.1. Filtration by the number of strands and the bridge number.
Any filtration on the infinite pure braid group $`𝒫_{\infty }`$ is sent by $`𝒮`$ to a filtration on knots. The most obvious filtration on $`𝒫_{\infty }`$ to consider is the filtration “by the number of strands”
$$𝒫_1\subset 𝒫_3\subset 𝒫_5\subset \dots \subset 𝒫_{\infty }.$$
###### Theorem 1.
The filtration on knots by $`𝒮(𝒫_{2n+1})`$ is the filtration by knots with bridge number less than or equal to $`n+1`$.
To prove Theorem 1 it is enough to show that the minimal number of maxima of the height function in a realization of a knot in $`𝐑^3`$ as a long knot is the bridge number minus 1; this will be done in Section 2.
The bridge number minus 1 is an additive knot invariant (see \[Sch\]) so, the filtration by $`𝒮(𝒫_{2n+1})`$ gives rise to an additive grading on $`𝒦`$.
### 1.2. Structure of the short-circuit map.
First we introduce some notation. By $`A_{i,j}`$ where $`i\ne j`$ are positive integers we denote the standard generators of $`𝒫_{\infty }`$. By $`\varphi _i^n`$ we mean the homomorphism $`𝒫_{2n}\to 𝒫_{2n+1}`$ which doubles the $`i`$th strand. Homomorphisms $`\varphi _i^n`$ respect the standard inclusions of the pure braid groups so as $`n`$ tends to infinity the limit $`\varphi _i:𝒫_{\infty }\to 𝒫_{\infty }`$ is well-defined.
Let $`H^T\subset 𝒫_{\infty }`$ be the subgroup generated by $`A_{i,i+1}`$ and $`\varphi _i(A_{i,j})`$ for all even $`i`$ and all $`j\ne i`$. Similarly we define the subgroup $`H^B`$ with the only difference that $`i`$ is required to be odd. The subgroup $`H^T`$ acts on $`𝒫_{\infty }`$ on the left and this action preserves the fibres of $`𝒮`$, see Figure 3.
Similarly, $`H^B`$ acts on $`𝒫_{\infty }`$ on the right, also preserving the fibres.
###### Theorem 2.
The short-circuit map identifies the monoid of knots $`𝒦`$ with the quotient set $`H^T\backslash 𝒫_{\infty }/H^B`$.
This theorem is a version of the main theorem of \[B2\] which describes the equivalence classes of plat closures. The proof we sketch in Section 3 is simplified by the fact that we are only interested in knots. Note also that Birman’s theorem as stated in \[B2\] concerns unoriented knot and link types, whereas our theorem concerns oriented knot types.
### 1.3. Lower central series and Vassiliev invariants.
One can easily check that Vassiliev knot invariants pull back under the short-circuit map to Vassiliev invariants of braids. The action of $`H^T`$ and $`H^B`$ on $`𝒫_{\infty }`$ induces an action on Vassiliev braid invariants which, clearly, preserves the type. (Here we do not assume the invariants to be normalized, i.e. do not require them to take a prescribed value on the trivial braid.) Thus the finite type knot invariants can be identified with those finite type pure braid invariants which are fixed by the two-sided action of $`H^T`$ and $`H^B`$.
Sometimes it is more convenient, however, to think of Vassiliev invariants in the dual setting. Recall that a knot (pure braid) is called $`n`$-trivial if it cannot be distinguished from the trivial knot (braid) by invariants of order less than $`n`$. For pure braids $`n`$-triviality is well-understood: $`b\in 𝒫_k`$ is $`n`$-trivial if and only if $`b\in \gamma _n𝒫_k`$, the $`n`$-th term of the lower central series of $`𝒫_k`$.
Let $`𝒦_n\subset 𝒦`$ be the set of $`n`$-trivial knots.
###### Theorem 3.
Short-circuiting sends the filtration of $`𝒫_{\infty }`$ by the lower central series to the filtration by $`n`$-trivial knots:
$$𝒮(\gamma _n𝒫_{\infty })=𝒦_n.$$
This allows us to formulate problems from the theory of Vassiliev knot invariants in purely group-theoretic terms. For example, finite type knot invariants separate the unknot if and only if any orbit of the two-sided action of $`H^T`$ and $`H^B`$, apart from the orbit of the trivial braid, intersects only a finite number of terms of the lower central series. Another way to state this is to consider the nilpotent topology on $`𝒫_{\infty }`$ (with basis the cosets of $`\gamma _n𝒫_{\infty }`$ for all $`n`$). Then finite type invariants separate the unknot if and only if the set $`H^TH^B=\{tb|t\in H^T,b\in H^B\}`$ is closed in the nilpotent topology.
The proof of Theorem 3 follows closely the same arguments as in \[St\]. It is even simplified in some ways in our setting. For example, if $`x`$ and $`y`$ are two braids, then $`𝒮(x)\mathrm{\#}𝒮(y)=𝒮(xtyb)=𝒮(xty)=𝒮((t^{-1}xtx^{-1})xy)`$, which is equivalent to $`𝒮(xy)`$ modulo a commutator. Inductively, braid product and connected sum are equivalent, modulo commutators of higher order, which is the main idea behind the results in \[St\].
## 2. Bridge number for long knots.
Here we will see that the minimal number of maxima $`b_L`$ of the “height function” in the realization of a knot in $`𝐑^3`$ as a long knot is less by 1 than the minimal number of maxima of the height function in the compact realization $`S^1\to 𝐑^3`$ of the same knot, i.e. than the bridge number $`b`$.
For a long knot with $`b_L`$ maxima of the height function it is obvious that there exists a compact embedding of the same knot with $`b_L+1`$ maxima, see Figure 4.
Conversely, let $`k`$ be a compact knot $`S^1\to 𝐑^3`$ with $`b`$ maxima and $`b`$ minima which can be taken to be non-degenerate. We construct a long knot $`k^{\prime }`$ with $`b-1`$ maxima which is equivalent to $`k`$ as follows.
Choose a point on $`k`$ which is not critical for the height function to be the origin in $`𝐑^3`$. Let $`A`$ be the maximum and $`B`$ the minimum between which the chosen point lies; by $`AB`$ we denote the closed segment of $`k`$ which lies between $`A`$ and $`B`$ and passes through the origin.
Let $`F(t):𝐑\to 𝐑^3`$ be a curve which intersects each horizontal plane once and such that its intersection with the knot $`k`$ is exactly the segment $`AB`$. We can assume that the curve $`F`$ is parametrized by the $`z`$-coordinate in $`𝐑^3`$, i.e. $`F(t)=(F_x(t),F_y(t),t)`$, and that $`F`$ is a smooth function of $`t`$ everywhere apart from the points where $`F(t)=A`$ or $`F(t)=B`$.
Consider a map $`\mathrm{\Phi }:𝐑^3\to 𝐑^3`$ given by
$$\mathrm{\Phi }(x,y,z)=(x-F_x(z),y-F_y(z),z).$$
The transformation $`\mathrm{\Phi }`$ preserves the horizontal planes, so it does not change the number of maxima and minima of the height function on the knot $`k`$. It is clear that there exists an $`R>0`$ such that the intersection of the image of the embedding $`\mathrm{\Phi }(k)`$ with the cylinder $`x^2+y^2<R^2`$ is an interval, which is embedded with exactly one minimum and one maximum of the height function. Strictly speaking, the embedding $`\mathrm{\Phi }(k)`$ is only piecewise-smooth; however, we can smooth it out in such a way that its intersection with the cylindrical neighbourhood of the $`z`$-axis of radius $`R`$ is an interval, which intersects the $`z`$-axis in the origin only and which is embedded with exactly one minimum and one maximum of the height function, see Figure 5(a).
Thus in what follows we can assume that $`k`$ has the above form.
Now we compactify $`𝐑^3`$ to $`S^3`$ by an interval adding a point at infinity to each horizontal plane and two points $`z=\pm \infty `$. Denote by $`V\subset S^3`$ a copy of $`𝐑^3`$ obtained by throwing out the closure of the $`z`$-axis. The intersection of the knot $`k`$ with $`V`$ is a long knot, which is equivalent to $`k`$ if we choose the orientation of $`V`$ to be compatible with that of $`𝐑^3`$. In the coordinates centred at the point at infinity whose $`z`$-coordinate is zero, this long knot looks as on Figure 5(b). Obviously, it is equivalent to the knot $`k^{\prime }`$ that differs from $`k`$ only inside the cylindrical neighbourhood of the $`z`$-axis (which is pictured as the outside part of the cylinder on Figure 5(b)) and has exactly $`b-1`$ maxima and $`b-1`$ minima.
## 3. Short-circuit map as a two-sided quotient map.
We say that a smooth long knot $`k(t):𝐑\to 𝐑^3`$ is a Morse knot if the height function on it: (a) has only a finite number of critical points, all of which are non-degenerate; (b) tends to $`\pm \infty `$ as $`t\to \mp \infty `$; in other words, we assume that all knots “point downwards”. Two Morse knots are Morse equivalent if one can be deformed into the other through Morse knots.
Let $`k`$ be a Morse knot and $`x`$ be a point on $`k`$ which is non-critical for the height function. We will say that a knot $`k^{\prime }`$ is obtained from $`k`$ by insertion of a hump at $`x`$ if $`k`$ and $`k^{\prime }`$ coincide outside some small neighbourhood of $`x`$ and inside this neighbourhood they differ as on Figure 6.
###### Lemma 3.1.
Any two knots obtained from the same Morse knot by insertion of a hump are Morse equivalent.
###### Proof.
The lemma is clearly true if there are no critical points of the height function between the points $`x_1`$ and $`x_2`$ where we insert humps. In case there is one critical point between $`x_1`$ and $`x_2`$ the lemma follows
from the argument on Figure 7. This also proves the lemma in the general case. ∎
Let $`b_1\in 𝒫_{2n+1}`$ and $`b_2\in 𝒫_{2m+1}`$ and, as before, denote by $`i(b_k)`$ the image of the standard inclusion of $`b_k`$ into $`𝒫_{2N+1}`$, $`N\ge n,m`$.
###### Lemma 3.2.
If $`𝒮_n(b_1)`$ and $`𝒮_m(b_2)`$ are in the same isotopy class in $`𝒦`$ there exists $`N\ge n,m`$ such that $`𝒮_N(i(b_1))`$ and $`𝒮_N(i(b_2))`$ are Morse equivalent.
###### Proof.
Let
$$f^T(t)=(f_x^T(t),f_y^T(t),f_z^T(t))$$
where $`T\in [0,1]`$ and $`t\in 𝐑`$ be a homotopy between $`𝒮_n(b_1)`$ and $`𝒮_m(b_2)`$, that is, for each $`T`$ the map $`f^T(t):𝐑\to 𝐑^3`$ defines a long knot and $`f^0(t)=𝒮_n(b_1)`$ and $`f^1(t)=𝒮_m(b_2)`$.
In $`[0,1]\times 𝐑`$ consider the subset $`W`$ of pairs $`(T,t)`$ such that $`\frac{\partial }{\partial t}f_z^T(t)=0.`$ Without loss of generality we can assume that $`W`$ is a union of smooth compact non-singular curves whose boundary is either empty or belongs to $`(\{0\}\cup \{1\})\times 𝐑`$ and that there are only a finite number of tangencies of $`W`$ with horizontal lines of the form $`\{T\}\times 𝐑`$. In addition we require these tangencies to take place at different values of the parameter $`T`$; see Figure 8. These assumptions imply, in particular, that for all but a finite number of values of $`T`$ the knot $`f^T(t)`$ is Morse and that the perestroikas at the bifurcation values of $`T`$ are generic, i.e. are insertions (or removals) of humps.
If there are no points of tangency of $`W`$ with horizontal lines the knots $`𝒮_n(b_1)`$ and $`𝒮_m(b_2)`$ are Morse equivalent and $`n=m=N`$.
Otherwise, choose the point of tangency of $`W`$ with a horizontal line which corresponds to the insertion of a hump with the smallest value of $`T`$. It is clear that we can connect it with the lower boundary line $`\{0\}\times 𝐑`$ by a segment $`s`$ of a curve which is disjoint from $`W`$ and whose tangent is nowhere horizontal, see Figure 8(a).
In the neighbourhood of each point of $`s`$ we can modify the knots $`f^T(t)`$ by inserting humps, this changes $`W`$ as shown on Figure 8(b). Notice that the number of points where $`W`$ has a horizontal tangent has decreased by one and the knot $`f^0(t)=𝒮_n(b_1)`$ was changed by an insertion of a hump.
Thus, proceeding inductively, we eliminate all insertions of humps. In the same way we eliminate the removals of humps with the only difference that we connect them to the upper boundary line and proceed from the bifurcation with the largest value of $`T`$ downwards.
The result is that we construct a Morse equivalence between $`𝒮_n(b_1)`$, possibly with several humps inserted, and $`𝒮_m(b_2)`$, also with some extra humps. However, from Lemma 3.1 we know that $`𝒮_n(b_1)`$ and $`𝒮_m(b_2)`$ with humps inserted are Morse equivalent to $`𝒮_N(i(b_1))`$ and $`𝒮_N(i(b_2))`$ respectively (here $`N`$ is the number of maxima of the modified knots) and this proves the lemma.
Let $`b_1\in 𝒫_{2N+1}`$ and $`b_2\in 𝒫_{2N+1}`$ represent the same knot. Lemma 3.2 allows us to assume that the knots $`𝒮_N(b_1)`$ and $`𝒮_N(b_2)`$ are Morse equivalent.
Given a deformation of $`𝒮_N(b_1)`$ to $`𝒮_N(b_2)`$ through Morse knots we are going to construct a one-dimensional family of braids $`f^T:[0,1]\to 𝒫_{2N+1}`$ such that $`f^0=b_1`$, $`f^1=b_2`$ and which is discontinuous only at a finite number of values of the parameter, where the “jump” can be expressed as the multiplication by some element of $`H^T`$ or $`H^B`$.
The braid $`f^0`$ is obtained by “suspending” the knot $`𝒮_N(b_1)`$ by maxima and minima, see Figure 9. Here we choose the points $`\alpha _i`$ and $`\beta _i`$ in such a way that the deformation of $`𝒮_N(b_1)`$ into $`𝒮_N(b_2)`$ takes place entirely between the horizontal planes in which $`\alpha _i`$ and $`\beta _i`$ are situated. Of course, $`f^0`$ is the same braid as $`b_1`$. Think of the double lines which connect maxima and minima with the points $`\alpha _i`$ and $`\beta _i`$ respectively as very narrow rubber strips. Then, if we deform the knot keeping the points $`\alpha _i`$ and $`\beta _i`$ fixed, the suspended knot also deforms and gives the braid $`f^T`$.
It may happen in the process of deformation that some rubber strips intersect the knot or intersect each other. Without loss of generality we can assume that these events take place near a finite number of distinct values of $`T`$.
Suppose that the rubber strip which connects a maximum with points $`\alpha _i`$ and $`\alpha _{i+1}`$ intersects the knot between $`T=T_0`$ and $`T=T_0+ϵ`$. Then one can find $`x,y\in 𝒫_{2N+1}`$ such that:
(a) $`f^{T_0}=xy`$ and $`f^{T_0+ϵ}=x\varphi _i^N(A_{i,j}^{\pm 1})y`$ for some $`j`$;
(b) $`x=\varphi _i^N(x^{\prime })`$ for some $`x^{\prime }\in 𝒫_{2N}`$.
Thus
$$f^{T_0+ϵ}=x\varphi _i^N(A_{i,j}^{\pm 1})x^{-1}f^{T_0}=\varphi _i^N(x^{\prime }A_{i,j}^{\pm 1}x^{\prime -1})f^{T_0}.$$
Notice that conjugation by $`x^{\prime }`$ maps $`A_{i,j}`$ to a product of $`A_{i,j_m}`$ for some set of $`j_m`$, so $`\varphi _i^N(x^{\prime }A_{i,j}^{\pm 1}x^{\prime -1})`$ lies in $`H^T`$.
Similarly, if the rubber strip is attached to the minimum, $`f^{T_0}`$ is multiplied on the right by some braid from $`H^B`$. In case two rubber strips intersect each other we have to multiply by a product of two braids of such form; as above, the product will lie in $`H^T`$ or $`H^B`$. (If one rubber strip is attached to a minimum and the other one to a maximum this product will automatically lie in the intersection $`H^TH^B`$.)
Finally, when the isotopy is finished and all minima and maxima have arrived back to their places what may happen is that some rubber strips may be twisted. This corresponds to multiplications by some $`A_{i,i+1}`$ on the left for $`i`$ even and on the right for $`i`$ odd.
Acknowledgments. We would like to thank Mario Eudave for finding an important reference, Sofia Lambropoulou and other organizers of the conference “Knots in Hellas ’98” who gave us a chance to meet, and Natig Atakishiev with whose pen a part of this paper was written. The second author was partially supported by the Naval Academy Research Council.
# Exact critical exponent for the shortest-path scaling function in percolation
## Abstract
It is shown that the critical exponent $`g_1`$ related to pair-connectiveness and shortest-path (or chemical distance) scaling, recently studied by Porto et al., Dokholyan et al., and Grassberger, can be found exactly in 2d by using a crossing-probability result of Cardy, with the outcome $`g_1=25/24`$. This prediction is consistent with existing simulation results. \[Published as J. Phys. A. 32, L457-459 (1999)\]
An important quantity describing percolation clusters is the chemical distance or shortest path . There has been considerable effort studying its scaling properties for distances small compared with the size of the cluster (i.e., ) including recent work by Porto et al. and Dokholyan et al. . Very recently, Grassberger has shown that these scaling properties can be analyzed efficiently by studying the growth of two nearby clusters, a method first suggested in the work of Dokholyan et al. . In this note I show that a scaling relation for the growth of two clusters can be combined with a previous result of Cardy to find an exponent of the shortest-path behavior exactly.
In , Grassberger considered the function $`N(t)`$ (which I call $`N_2(t)`$) giving the probability that two clusters grown from nearby seeds survive at least to time $`t`$, where clusters are grown by a Leath-type algorithm and $`t`$ is the number of generations or equivalently the chemical distance from the seeds to the growth sites. (To survive up to that time means that both clusters survive and remain distinct.) Another interpretation of $`N_2(t)`$ is that it gives the probability that two sites appear to belong to two different infinite clusters, when the environment is probed up to a chemical distance $`t`$ from the two sites. (As discussed in , two points a finite distance apart in fact belong to two different infinite clusters with probability zero.) At the critical point, $`N_2(t)`$ is presumed to behave as a power-law
$$N_2(t)\sim t^{-\mu }$$
(1)
as $`t\to \infty `$. Grassberger also considered the probability $`p(t)`$ that the two clusters coalesce exactly at time $`t`$; $`p(t)`$ is proportional to $`\rho (𝐱-𝐲,2t)`$, where $`𝐱`$ and $`𝐲`$ are the locations of the seed points and $`\rho (𝐱,t)`$ is the pair-connectiveness function, which behaves as
$$\rho (𝐱,t)\sim \frac{1}{t^{1+2\beta /\nu _t}}\varphi (r/t^z)$$
(2)
in the scaling limit. The scaling function $`\varphi (\zeta )`$ is presumed to behave as $`\zeta ^{g_1}`$ for $`\zeta \to 0`$, so $`\rho (𝐱,t)\sim r^{g_1}`$ for constant $`t>>r^{1/z}`$. Grassberger showed that these arguments imply $`p(t)\sim t^{-\lambda }`$ with
$$\lambda =1+\frac{2\beta }{\nu _t}+zg_1$$
(3)
and furthermore argued that $`\mu =\lambda 1`$. The relation (3) was first given (in a slightly different notation) by Dokholyan et al. .
Based upon an analogy to self-avoiding random walks, Porto et al. conjectured that $`g_1`$ is related to $`d_{\mathrm{min}}=1/z=\nu _t/\nu `$ by
$$g_1=d_{\mathrm{min}}-\beta /\nu (\text{conjecture})$$
(4)
which they found to be supported, within the $`5`$% error bars, by numerical measurements. This conjecture also appears in . Inserting eq. (4) into eq. (3) implies
$$\lambda =2+\frac{\beta }{\nu _t}(\text{conjecture}).$$
(5)
However, from precise simulations of $`p(t)`$ and $`N_2(t)`$, Grassberger found strong numerical evidence against the above conjecture (and provided theoretical arguments against it as well). He found, in 2d,
$$\mu =1.1055(10),\lambda =2.1055(10),g_1=1.041(1)$$
(6)
which are numerically inconsistent with the predictions $`\mu =1.09213(5)`$ and $`g_1=1.0264(3)`$ that follow from eqs. (4) and (5) and the known values $`\beta =5/36`$, $`\nu =4/3`$, and $`d_{\mathrm{min}}=1.1306(3)`$ , where numbers in parentheses following numerical data represent statistical errors in the last digit(s).
Here I show that $`g_1`$ can be found exactly by relating $`N_2(t)`$ to a crossing problem solved recently by Cardy . Cardy has shown that for a rectangular system of dimensions $`L_v\times L_h`$, with periodic boundary conditions in the vertical direction, the probability of having at least $`k`$ clusters cross in the horizontal direction behaves, for large aspect ratio $`R=L_h/L_v`$, as
$$P_k(R)\sim e^{-a_kR}$$
(7)
with $`a_1=5\pi /24`$ and $`a_k=(2\pi /3)(k^2-1/4)`$ for $`k>1`$. (The formula for $`k=1`$ is different because for one cluster it is not necessary to also have a crossing cluster on the dual lattice, while for $`k>1`$ crossing clusters there must be $`k`$ crossing dual-lattice clusters.) The probability that at least two clusters (or, to the same order, exactly two clusters) cross the rectangle is given by $`P_2\sim \mathrm{exp}(-5\pi R/2)`$.
Crossing problems in critical percolation are believed to be conformally invariant, because under a conformal transformation, in which all elements only expand or contract, the crossing properties of each element should remain unchanged . One can transform the rectangle into an annulus by putting the four corners of the rectangle at $`z=0,2\pi R,2\pi R+2\pi i`$ and $`2\pi i`$ on the complex-$`z`$ plane, and letting $`z^{}=e^z`$. The result on the $`z^{}`$-plane is an annulus with an inner radius of $`1`$ and an outer radius of $`r=e^{2\pi R}`$. The top and bottom edges of the rectangle close together, exactly matching the periodic boundary conditions. Assuming conformal invariance of the crossing probability, it follows from (7) that the probability $`p_k`$ that at least $`k`$ clusters cross between the inner and outer boundaries of the annulus is given by
$$p_k(r)\sim r^{-a_k/(2\pi )}$$
(8)
or $`p_1(r)\sim r^{-5/48}`$, $`p_2(r)\sim r^{-5/4}`$, $`p_3(r)\sim r^{-35/12}`$, etc. Now, one can associate $`p_2(r)`$ with the quantity $`N_2(t)`$ of eq. (1) by transforming from chemical distance $`t`$ to the radial distance $`r`$ using $`r\sim t^z`$. This yields
$$N_2(t)\sim t^{-5z/4}$$
(9)
which implies by (1)
$$\mu =\frac{5z}{4}=\frac{5\nu }{4\nu _t}=1.1056(3)$$
(10)
and by (3) gives
$$g_1=\frac{5}{4}-\frac{2\beta }{\nu }=\frac{25}{24}=1.041666\dots $$
(11)
These predictions are consistent with Grassberger’s measurements (6) as well as Porto et al.’s determination $`g_1=1.04(5)`$. However, (11) is apparently inconsistent with the conjecture (4), since it would imply
$$d_{\mathrm{min}}=\frac{5}{4}-\frac{\beta }{\nu }=\frac{55}{48}=1.1458333\dots (\text{conjecture})$$
(12)
which differs from the measured values of $`d_{\mathrm{min}}`$, 1.1306(3) and 1.130(4) .
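As a quick numerical cross-check (a short sketch we add here; it is not part of the original paper), the exact prediction of eqs. (10)–(11) and the conjectured values of eqs. (4)–(5) can be evaluated from the known 2d exponents $`\beta =5/36`$, $`\nu =4/3`$ and the measured $`d_{\mathrm{min}}=1.1306(3)`$, and compared with the measured exponents quoted in (6):

```python
from fractions import Fraction

# exact 2d percolation exponents
beta = Fraction(5, 36)
nu = Fraction(4, 3)

# shortest-path dimension (measured, not known exactly); z = 1/d_min
d_min = 1.1306
z = 1.0 / d_min

# exact prediction of this paper, eqs. (10)-(11)
g1_exact = Fraction(5, 4) - 2 * beta / nu      # = 25/24
mu_exact = 5 * z / 4

# conjecture of Porto et al., eqs. (4)-(5)
g1_conj = d_min - float(beta / nu)
mu_conj = 1.0 + float(beta) / (d_min * float(nu))   # mu = lambda - 1 = 1 + beta/nu_t

print(f"g1 (exact)      = {g1_exact} = {float(g1_exact):.6f}   [measured 1.041(1)]")
print(f"mu (exact)      = {mu_exact:.4f}               [measured 1.1055(10)]")
print(f"g1 (conjecture) = {g1_conj:.4f}")
print(f"mu (conjecture) = {mu_conj:.5f}")
```

The output reproduces the numbers quoted above: the exact values agree with Grassberger’s measurements, while the conjectured ones do not.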
Another way of looking at Grassberger’s numerical results is that they serve to confirm the ideas of conformal invariance and Cardy’s formula for $`k=2`$ to high precision. Note that Cardy’s formula for $`k=1,2`$ and 3 has also been verified numerically by Shchur and Kosyakov . Indeed, eq. (9) can be generalized for the probability $`N_k(t)`$ that $`k`$ clusters remain alive and distinct up to time $`t`$,
$$N_k(t)\sim p_k(t^z)\sim t^{-za_k/(2\pi )}$$
(13)
so that $`N_3(t)\sim t^{-35z/12}\sim t^{-2.580}`$ etc. Grassberger has also measured this quantity for $`k=3`$ and $`k=4`$, and the behavior he finds is consistent with the above predictions.
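For completeness, a small script (again our own illustrative sketch, with the function name chosen here) tabulates Cardy’s constants $`a_k`$ of eq. (7) and the corresponding predicted decay exponents $`za_k/(2\pi )`$ of eq. (13):

```python
import math

d_min = 1.1306           # measured shortest-path dimension
z = 1.0 / d_min

def a(k):
    # decay constants of the crossing probabilities, eq. (7)
    return 5 * math.pi / 24 if k == 1 else (2 * math.pi / 3) * (k * k - 0.25)

for k in range(1, 5):
    # N_k(t) ~ t^(-z a_k / 2 pi), eq. (13)
    exponent = z * a(k) / (2 * math.pi)
    print(f"k={k}:  a_k/(2 pi) = {a(k) / (2 * math.pi):.4f}   predicted exponent = {exponent:.3f}")
```

For $`k=2`$ this returns the value 1.106 of eq. (10), and for $`k=3`$ the value 2.580 quoted above.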
I note finally that the relation $`p_1(r)\sim r^{-5/48}`$ following (8) is just the statement that the probability a cluster grows to a radius greater than or equal to $`r`$ is $`p_1(r)\sim r^{D-d}`$. The latter formula follows from $`P_s=\int _s^{\infty }sn_s\,ds\sim s^{2-\tau }`$ with $`s=r^D`$ and $`D=91/48`$, and the hyperscaling relation $`\tau -1=d/D`$. Transforming from the annulus to a rectangle yields Cardy’s result (7) for $`k=1`$, $`P_1(R)\sim e^{-2\pi (d-D)R}`$ (valid for $`d=2`$ only).
In conclusion, I have shown that the density of growth sites on 2-d percolation clusters behaves as $`r^{25/24}`$ for large time and small $`r`$ .
The author thanks P. Grassberger, J. Cardy and S. Havlin for comments. This material is based upon work supported by the U. S. National Science Foundation Grant No. DMR-9520700.
# On the Evolution of Cosmological Type Ia Supernovae and the Gravitational Constant
## I Introduction
Type Ia supernovae (SNeIa) are supposed to be one of the best examples of standard candles. This is because, although the nature of their progenitors and the detailed mechanism of explosion are still the subject of a strong debate, their observational light curves are relatively well understood and, consequently, their individual intrinsic differences can be easily accounted for. Therefore, thermonuclear supernovae are well suited objects to study the Universe at large, especially at high redshifts $`(z0.5)`$, where the rest of standard candles fail in deriving reliable distances, thus providing an unique tool for determining cosmological parameters or discriminating among different alternative cosmological theories.
Using the observations of 42 high redshift Type Ia supernovae and 18 low redshift supernovae (Riess et al. 1998; Perlmutter et al. 1999), both the Supernova Cosmology Project and the High-$`z`$ Supernova Search Team found that the peak luminosities of distant supernovae appear to be $`\simeq 0.2`$ magnitude fainter than predicted by a standard decelerating universe $`(q_0>0)`$. Based on this, the Supernova Cosmology Project derived $`\mathrm{\Omega }_\mathrm{M}=0.28_{-0.12}^{+0.14}`$ at $`1\sigma `$, for a flat universe, thus forcing a non-vanishing cosmological constant. However this conclusion relies on the assumption that there is no mechanism likely to produce an evolution of the observed light curves over cosmological distances. In other words: both teams assumed that the intrinsic peak luminosity and the time scales of the light curve were exactly the same for both the low-$`z`$ and the high-$`z`$ supernovae.
More recently Riess et al. (1999a,b) have found evidences of evolution between the samples of nearby supernovae and those observed at high redshifts by comparing their respective risetimes, thus casting some doubts about the derived cosmological parameters. In particular Riess et al. (1999a,b) find that the sample of low-$`z`$ supernovae has an average risetime of $`19.98\pm 0.15`$ days whereas the sample of high-$`z`$ supernovae has an average risetime of $`17.50\pm 0.40`$ days. The statistical likelihood that the two samples are different is high $`(5.8\sigma )`$. Riess et al. (1999b) also analyze several potential alternatives to produce, within a familiy of theoretical models, an evolution with the observed properties: distant supernovae should be intrinsically fainter and at the same time should have smaller risetimes. All the families of models studied so far have the inverse trend: decreasing peak luminosities correspond to longer risetimes.
On the other hand, and from the theoretical point of view, it is easy to show that a time variation of the gravitational constant, in the framework of a Scalar-Tensor cosmological theory, can reconcile the observational Hubble diagram of SNeIa with an open $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ universe. (While we were writing this paper we became aware of a similar idea independently proposed by Amendola et al. 1999.) The starting point is simple: assume that all thermonuclear supernovae release the same amount of energy $`(E)`$. In a simple model of the light curve (Arnett 1982) the peak luminosity is proportional to the mass of nickel synthesized, which in turn, to a good approximation, is a fixed fraction of the Chandrasekhar mass $`(M_{\mathrm{Ni}}\propto M_{\mathrm{Ch}})`$, which depends on the value of the gravitational constant: $`M_{\mathrm{Ch}}\propto G^{-3/2}`$. Thus we have $`E\propto G^{-3/2}`$, and if one assumes a slow decrease of $`G`$ with time, distant supernovae should be dimmer. Moreover, the time scales of supernovae also depend on the Chandrasekhar mass. Let us elaborate on this last point. According to the analytic model of the light curve of Arnett (1982), the width of the peak of the light curve of SNeIa is given by:
$$\tau \propto \left(\frac{M_{\mathrm{ej}}^3}{M_{\mathrm{inc}}}\right)^{1/4}$$
(1)
where $`M_{\mathrm{ej}}`$ is the ejected mass and $`M_{\mathrm{inc}}`$ is the incinerated mass. Within our current knowledge of the mechanisms of explosion of SNeIa both masses can be considered proportional to the Chandrasekhar mass, and therefore we have $`\tau \propto M_{\mathrm{Ch}}^{1/2}`$ or, equivalently, $`\tau \propto G^{-3/4}`$. Since the risetime for distant supernovae is obtained from semi-empirical models, that is a template light curve which takes into account the decline rate and the width of the peak, one can then also assume this dependence on $`G`$ for the risetime. This expression has the right properties since distant supernovae have smaller peak luminosities and, at the same time, smaller risetimes, as required by observations.
## II The effects of a varying $`G`$
Despite the beauty and successes of the simplest version of General Relativity (GR), the possibility that $`G`$ could vary in space and/or time is well motivated. Its study can shed new light into fundamental physics and cosmology and it seems natural in Scalar-Tensor theories of gravity (STTs) such as Jordan-Brans-Dicke (JBD) theory or its extensions.
To make quantitative predictions we will consider cosmic evolution in STTs, where $`G`$ is derived from a scalar field $`\varphi `$ which is characterized by a function $`\omega =\omega (\varphi )`$ determining the strength of the coupling between the scalar field and gravity. In the simplest JBD models, $`\omega `$ is just a constant and $`G\varphi ^1`$, however if $`\omega `$ varies then it can increase with cosmic time so that $`\omega =\omega (z)`$. The Hubble rate $`H`$ in these models is given by:
$$H^2\equiv \left(\frac{\dot{a}}{a}\right)^2=\frac{8\pi \rho }{3\varphi }+\frac{1}{a^2R^2}+\frac{\mathrm{\Lambda }}{3}+\frac{\omega }{6}\frac{\dot{\varphi }^2}{\varphi ^2}-H\frac{\dot{\varphi }}{\varphi },$$
(2)
this equation has to be complemented with the acceleration equations for $`a`$ and $`\varphi `$, and with the equation of state for a perfect fluid: $`p=(\gamma 1)\rho `$ and $`\dot{\rho }+3\gamma H\rho =0`$. The structure of the solutions to this set of equations is quite rich and depends crucially on the coupling function $`\omega (\varphi )`$ (see Barrow & Parsons 1996). Here we are only interested in the matter dominated regime: $`\gamma =1`$. In the weak field limit and a flat universe the exact solution is given by:
$$G=\frac{4+2\omega }{3+2\omega }\varphi ^{-1}=G_0(1+z)^{1/(1+\omega )}.$$
(3)
In this case we also have that $`a=(t/t_0)^{(2\omega +2)/(3\omega +4)}`$. This solution for the flat universe is recovered in a general case in the limit $`t\to \infty `$ and also arises as an exact solution of Newtonian gravity with a power law $`G\propto t^n`$ (Barrow 1996). For non-flat models, $`a(t)`$ is not a simple power-law and the solutions get far more complicated. To illustrate the effects of a non-flat cosmology we will consider general solutions that can be parametrized as Eq. but which are not simple power-laws in $`a(t)`$. In this case, it is easy to check that the new Hubble law given by Eq. becomes:
$$H^2(z)=H_0^2\left[\widehat{\mathrm{\Omega }}_M(1+z)^{3+1/(1+\omega )}+\widehat{\mathrm{\Omega }}_R(1+z)^2+\widehat{\mathrm{\Omega }}_\mathrm{\Lambda }\right]$$
(4)
where $`\widehat{\mathrm{\Omega }}_M`$,$`\widehat{\mathrm{\Omega }}_R`$ and $`\widehat{\mathrm{\Omega }}_\mathrm{\Lambda }`$ follow the usual relation: $`\widehat{\mathrm{\Omega }}_M+\widehat{\mathrm{\Omega }}_R+\widehat{\mathrm{\Omega }}_\mathrm{\Lambda }=1`$ and are related to the familiar local ratios ($`z0`$): $`\mathrm{\Omega }_M8\pi G_0\rho _0/(3H_0^2)`$, $`\mathrm{\Omega }_R=1/(a_0RH_0)^2`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=\mathrm{\Lambda }/(3H_0^2)`$ by:
$`\widehat{\mathrm{\Omega }}_M`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Omega }_M}{g}}\left({\displaystyle \frac{4+2\omega }{3+2\omega }}\right);\widehat{\mathrm{\Omega }}_\mathrm{\Lambda }={\displaystyle \frac{\mathrm{\Omega }_\mathrm{\Lambda }}{g}};\widehat{\mathrm{\Omega }}_R={\displaystyle \frac{\mathrm{\Omega }_R}{g}}`$ (5)
$`g`$ $`\equiv `$ $`1+{\displaystyle \frac{1}{(1+\omega )}}-{\displaystyle \frac{1}{6}}{\displaystyle \frac{\omega }{(1+\omega )^2}}`$ (6)
Thus the GR limit is recovered as $`\omega \to \infty `$. The luminosity distance $`d_L=d_L(z,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda },\omega )`$ is obtained as usual from the (line-of-sight) comoving coordinate distance: $`r(z)=\int dz^{\prime }/H(z^{\prime })`$, with the trigonometric or the hyperbolic sinus to account for curvature (Peebles 1993). In the limit of small $`z`$ we recover the usual Hubble relation: $`y=H_0r=z-(1+\widehat{q}_0)z^2/2`$ where a new deceleration parameter $`\widehat{q}_0`$ is related to the standard one by:
$$\widehat{q}_0=\frac{q_0}{g}+\frac{\widehat{\mathrm{\Omega }}_M}{2(1+\omega )}.$$
(7)
One can see from this equation that even for relatively small values of $`\omega `$ the cosmological effect is small. For example, for $`\mathrm{\Omega }_M\simeq 0.2`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 0.8`$ we have $`q_0\simeq -0.7`$ while $`\widehat{q}_0`$ is around $`\widehat{q}_0\simeq -0.4`$ for $`\omega \simeq 1`$. Note nevertheless that this effect, although small, tends to decrease the acceleration and therefore it partially decreases the effect in the peak luminosity of SNeIa caused by an increasing $`G`$. In summary, Eq. parametrizes the change in $`G`$ as a function of $`\omega `$ while Eqs.\[4-6\] parametrize the corresponding cosmic evolution.
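The following short Python sketch (ours, not part of the original paper; function names and the fiducial $`H_0`$ are our own choices) implements the parametrization of Eqs. (4)–(7) and reproduces the order of magnitude of the example quoted above:

```python
import numpy as np

def hatted_params(omega_m, omega_l, omega_r, w):
    """Effective density parameters of Eqs. (5)-(6) for coupling w (omega)."""
    g = 1.0 + 1.0 / (1.0 + w) - (1.0 / 6.0) * w / (1.0 + w) ** 2
    om = omega_m / g * (4.0 + 2.0 * w) / (3.0 + 2.0 * w)
    ol = omega_l / g
    orad = omega_r / g                      # curvature-like term of Eq. (4)
    return g, om, ol, orad

def hubble(z, omega_m, omega_l, omega_r, w, H0=70.0):
    """Modified Hubble rate of Eq. (4), in the same units as H0."""
    g, om, ol, orad = hatted_params(omega_m, omega_l, omega_r, w)
    return H0 * np.sqrt(om * (1 + z) ** (3 + 1.0 / (1 + w)) + orad * (1 + z) ** 2 + ol)

def q0_hat(omega_m, omega_l, omega_r, w):
    """Modified deceleration parameter, Eq. (7)."""
    g, om, _, _ = hatted_params(omega_m, omega_l, omega_r, w)
    q0 = omega_m / 2.0 - omega_l            # standard GR deceleration parameter
    return q0 / g + om / (2.0 * (1.0 + w))

# the example quoted in the text: Omega_M ~ 0.2, Omega_Lambda ~ 0.8, omega ~ 1
print(q0_hat(0.2, 0.8, 0.0, 1.0))           # about -0.44, i.e. "around -0.4"
print(hubble(0.5, 0.2, 0.8, 0.0, 1.0))      # modified H(z=0.5) for the same model
```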
As mentioned in the introduction, we are assuming that thermonuclear supernovae release a similar amount of energy $`E\propto G^{-3/2}`$. Thus using Eq., we have:
$$\frac{E}{E_0}=\left(\frac{G}{G_0}\right)^{-3/2};M-M_0=\frac{15}{4}\mathrm{log}\left(\frac{G}{G_0}\right)=\frac{15}{4(1+\omega )}\mathrm{log}\left(1+z\right),$$
(8)
where $`M`$ is the absolute magnitude and the subscript 0 denotes the local value. Therefore we have the following Hubble relation:
$$m(z)=M_0+5\mathrm{log}d_L(z,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda },\omega )+\frac{15}{4(1+\omega )}\mathrm{log}\left(1+z\right)$$
(9)
which reduces to the standard relation as $`\omega \to \infty `$. From the last term alone we can see that $`\omega \simeq 5`$ can reduce the apparent luminosity by $`\mathrm{\Delta }m\simeq 0.2`$, which is roughly what is needed to explain the SNeIa results without a cosmological constant. For illustrative purposes figure 1 shows the above relation for two representative cosmological models, including the effects of $`\omega `$ in $`d_L`$, for $`\omega =\pm 5`$ (dotted lines) and the standard $`(\omega =\infty )`$ case (solid line).
The effect of a varying $`G`$ on the time scales of SNeIa can be obtained from Eq.. Since $`\tau \propto G^{-3/4}`$, the ratio of the faraway time scale, $`\tau `$, to the local one, $`\tau _0`$, is:
$$\frac{\tau }{\tau _0}=\left(\frac{G}{G_0}\right)^{-3/4}=\left(1+z\right)^{-\frac{3}{4(1+\omega )}}.$$
(10)
and, to make some quantitative estimates, we can use the mean evolution found by Riess et al. (1999a,b). From their figure 1 we obtain the following widths of the light curve when the supernova is 2.5 magnitudes fainter than the peak luminosity: $`\tau _0=45.0\pm 0.15`$ (at $`z0`$) and $`\tau =43.8\pm 0.40`$ (at $`z0.5`$), were the errors in the widths have been ascribed solely to the errors in the risetimes. Thus, from Eq. we obtain $`\omega 10.25_{3.65}^{+9.25}`$ ($`2\sigma `$ errors). Therefore, a very small variation of the gravitational constant can account for the reported differences in the SNeIa time scales. However these limits on $`\omega `$ should be considered as weak, in the sense that since most SNeIa are discovered close to its peak luminosity the width of the light curve is poorly determined. These values are shown as horizontal dashed ($`1\sigma `$) and continuous ($`2\sigma `$) lines in Fig. 2 where the confidence contours (at the 99%, 90%, 68% — solid lines — 5% and 1% confidence level — dotted lines) in the $`(\omega ,\mathrm{\Omega }_\mathrm{\Lambda })`$ plane for a flat $`\mathrm{\Omega }_R=0`$ universe (left panel) and in the $`(\omega ,\mathrm{\Omega }_M)`$ plane for the case $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ (right panel) are shown.
## III Discussion and Conclusions
In astrophysics and cosmology the laws of physics (and in particular the simplest version of GR) are extrapolated outside their observational range of validity. It is therefore important to test for deviations of these laws at increasing cosmological scales and times (redshifts). SNeIa provide us with a new tool to test how the laws of gravity and cosmology were in faraway galaxies ($`z\simeq 0.5`$). In particular, current limits on the (parametrized) Post Newtonian formalism are mostly restricted to our very local Universe (see Will 1993). The observational limits on $`\dot{G}/G`$ come from quite different times and scales (see Barrow & Parsons 1996 for a review), but mostly in the local and nearby environments at $`z\simeq 0`$ (solar system, binary pulsars, white dwarf cooling, neutron stars); typical bounds give $`\dot{G}/G\lesssim 10^{-11}`$–$`10^{-12}`$ yr<sup>-1</sup>, or $`\omega \gtrsim 10`$–$`100`$. However, STTs predict $`\omega =\omega (\varphi )`$. That is, $`\omega `$ is not required to be a constant, so that $`\omega `$ can increase with cosmic time, $`\omega =\omega (z)`$, in such a way that it could approach the GR predictions ($`\omega \to \infty `$) at present time and still give significant deviations at earlier cosmological times. In this sense bounds from primordial nucleosynthesis could provide an important test. Current bounds on $`\omega `$ from nucleosynthesis are comparable to the local values but these bounds are model dependent and also involve very large extrapolations.
Our analysis indicates that if we adopt the constraints derived from the width of the light curves of SNeIa then our best fit to the data requires $`\omega \simeq 10`$ (or equivalently $`\dot{G}/G\simeq 10^{-11}`$ yr<sup>-1</sup>, or a $`\simeq 10\%`$ variation in $`G`$). This value is slightly smaller than some of the current constraints at $`z\simeq 0`$, but it corresponds to higher redshifts $`z\simeq 0.5`$ and could be accommodated in STTs with $`\omega =\omega (\varphi )=\omega (z)`$. If this is the case, at the $`2\sigma `$ confidence level we obtain $`0.0\lesssim \mathrm{\Omega }_\mathrm{\Lambda }\lesssim 1.0`$ and the Hubble diagram of SNeIa poorly constrains $`\mathrm{\Omega }_\mathrm{M}\lesssim 1`$. At the $`1\sigma `$ confidence level we obtain $`0.2\lesssim \mathrm{\Omega }_\mathrm{\Lambda }\lesssim 0.8`$ and $`\mathrm{\Omega }_\mathrm{M}\lesssim 0.7`$. If we do not take into account the restrictions derived from the width of the light curves then our conclusions are much weaker: the observational data and the theory can be reconciled in the framework of a cosmological theory with a varying $`G`$ with no cosmological constant $`(\mathrm{\Omega }_\mathrm{\Lambda }=0)`$ only if $`\omega \gtrsim 1.5`$. If we further require a flat $`\mathrm{\Omega }_\mathrm{R}=0`$ universe then $`1.5\lesssim \omega \lesssim 3.0`$ is needed.
Obviously more work is needed both regarding other observational consequences of STTs and on the physics of supernovae. In particular, an improvement of our knowledge of the physics of thermonuclear supernovae would provide us with a unique tool to test fundamental laws of physics over cosmological distances. In addition it should be stressed that new observations of distant supernovae, or other standard candles, at higher redshifts ($`z>1`$) could further tighten the current limits on the variation of the fundamental constants.
# Detection of the Entropy of the Intergalactic Medium: Accretion Shocks in Clusters, Adiabatic Cores in Groups
## 1 Introduction
The complex thermodynamic evolution of the hot, X-ray emitting, gas in clusters of galaxies is at the forefront of current efforts to understand these largest virialized systems. X-ray observations of cluster number counts, luminosity functions and temperature distributions indicate little apparent evolution in clusters back to redshifts as high as $`0.7`$ (e.g., Henry 1997, 2000; Rosati et al. 1998). These results provide one of the strongest challenges to high density cosmological models in which cluster evolution is expected to be occuring rapidly at low redshifts.
However, these tests are strongly dependent on the thermodynamic evolution of the intracluster medium (ICM, see Borgani et al. 1999 and references therein). In particular, the X-ray properties of X–ray halos depend on the entropy profile in the ICM. A ubiquitous minimum entropy, or entropy floor, in the external pre-infall gas, would break the self–similar behaviour of purely gravitational models, in agreement with the X–ray data. In terms of the global X-ray observables luminosity ($`L`$) and temperature ($`T`$), self-similar models predict $`L\propto T^2`$, while $`L\propto T^\alpha `$, with $`\alpha \simeq 3`$, is observed (David et al. 1993, Mushotzky & Scharf 1997, Allen & Fabian 1998, Arnaud & Evrard 1998, Markevitch 1998), with evidence for a further steepening at group scales (Ponman et al. 1996; Helsdon & Ponman 2000, Xue & Wu 2000). Not only can the inclusion of an entropy minimum successfully reproduce the observed $`L\propto T^3`$ relationship, but it can also explain the flat density distribution observed in the cores of clusters and the low evolution of the $`L`$–$`T`$ relation at high redshifts (see Tozzi & Norman 2000, hereafter TN). Recently an entropy floor has been detected in the core of groups (Ponman, Cannon & Navarro 1999, hereafter PCN; Lloyd–Davies, Ponman & Cannon 2000) providing direct evidence for this entropy excess in objects with temperatures between $`1`$ and $`3`$ keV. Evidence for a breaking of self–similarity also comes from the observation of a dramatic change in the chemical and spatial distribution properties of the gas at the scale of groups, below the observed temperature of 1 keV (Renzini 1997, 1999; Helsdon & Ponman 2000).
In hierarchical models of structure formation the local ICM is simply the high redshift IGM accreted into cluster and group scale potential wells. An examination of the equation of state of the IGM based on observations of the high redshift Ly$`\alpha `$ forest (Schaye et al. 1999; Ricotti, Gnedin & Shull 2000) yields an average entropy level which is at least an order of magnitude lower than that observed in the core of low temperature clusters, and that needed to explain the local properties of X–ray clusters and groups. Physical processes which could raise the entropy of the early IGM are SNe feedback, linked to the history of star formation, or radiative and mechanical processes driven by quasars (see Valageas & Silk 1999; Wu, Fabian & Nulsen 1999; Menci & Cavaliere 2000). It is, however, very difficult to model such processes a priori.
Our approach is instead to start from the properties of the ICM as observed in groups and clusters, and trace back the fundamental processes that drive the thermodynamic evolution of the gas. In order to simplify the possible scenarios, we consider two forms of entropy injection. First, the excess entropy may be interpreted as the fossil residual of an initial entropy floor imprinted in the external, pre–collapse IGM before the epoch of accretion. In this case, the heating occurs when only the small, sub–galactic scales have collapsed, and the gas is heated at about the background density. Processes like starbursts can be very efficient in transporting large amount of energy out of the host galaxies (Strickland & Stevens 2000). In the second scenario, the entropy is the result of heating following the collapse of the baryons in group–sized potential wells ($`M>10^{13}M_{}`$). In this case the gas is heated at the average density reached in the virialized halos, which can be two orders of magnitude larger than the ambient density. These two scenarios imply a different energetic budget, since for a given entropy level, a larger energy per particle is required at higher densities. We hereafter refer to these two extreme situations as the external and internal heating cases, respectively.
In either scenario the enhanced gas density expected around actively accreting, massive clusters of galaxies (typically a factor of $`10`$ with respect to the background value) could make it detectable in emission in the X–ray band even at distances larger than the shock radius, depending on its temperature and density. This gas has not yet been accreted or shocked, hence its entropy is indicative of the initial value present in the external IGM. The entropy level inside the shock radius will be much higher than the external level, as the result of strong shock heating driven by the accretion process. The entropy profile should then decrease towards the cluster center with a power law which results from the combination of shock heating and previous or ongoing non–gravitational heating.
At the low end of the mass scale, an external entropy floor gives rise to a flat, extended, low surface brightness emission, without any shocked accretion. In this case the gas has been accreted adiabatically, and the entropy remains at its initial value everywhere, even in the inner regions. With internal non-gravitational heating the profile can be more complex. Thus the investigation of lower luminosity, lower mass systems will be a useful complement to that of the accreting gas surrounding rich clusters.
There are therefore several potentially observable phenomena which allow us to probe the entropy histories of clusters: accretion shocks, external warm gas around clusters, highly extended isentropic emission in groups, and variations in the interior gas entropy profile for both clusters and groups. The combined observations of these features over a range of mass scales can test the internal versus external scenarios.
In the present work we investigate the predictions of the external and, to a lesser extent, the internal heating scenarios, and present observational strategies and feasibilities for studying their physical consequences. In §2 we discuss entropy based models and accretion processes for X-ray galaxy clusters. In the external heating scenario (§2.1) we demonstrate that the regions around rich clusters are particularly important. In the internal heating scenario (§2.2) the slope of the inner entropy profile can provide an indication of internal energy injection which followed the accretion. In §3 we present observational strategies to investigate the described scenarios. We consider the detection of accretion shock at the ICM/IGM interface; the expected properties of the accretion shock regions are described for a range of entropy levels in the external gas. We then assess the feasibility of observing such accretion shock regions in real systems, using simulated XMM (see Dahlem et al. 1999) observations scaled to a nearby cluster (Abell 2029). We consider also strategies and feasibilities for observing groups, where an insentropic distribution of gas is expected. In §4 we briefly discuss stellar processes as a source of entropy, showing the impact of the proposed observations on the study of the nature of the heating sources. In §5 we present our conclusions.
## 2 Entropy-based models of X-ray clusters and groups
The evolution of the ICM is governed by both dynamics (and the underlying cosmology) and gas thermodynamics. A complete treatment of the physics of the gas necessarily includes shock heating and adiabatic compression (see the 1D models of Bertschinger 1985, Knight & Ponman 1997, Takizawa & Mineshige 1998, and the 3D numerical simulations of Evrard 1990, Roettiger et al. 1993, Metzler & Evrard 1994, Bryan & Norman 1998), and radiative cooling (see Lewis et al. 1999). An expanding accretion shock at the interface of the inner virialized gas with a cooler, adiabatically–compressed, external medium, located approximately at the virial radius of the cluster, is a longstanding prediction from such gravitationally–driven models. However, as discussed in the Introduction, gravitationally–driven models predict X-ray properties which scale self-similarly with mass and fail to reproduce X-ray observations of clusters.
The presence of a minimum entropy in the pre-collapse IGM has been advocated for some time as a way to naturally break the purely self-similar behaviour (Kaiser 1991, Evrard & Henry 1991). More recently, a minimum entropy has been detected (PCN) interior to clusters and all of its consequences have been re–visited. Models based on minimum entropy are able to explain the detailed shape of the $`L`$$`T`$ relation, predict its evolution, and help in explaining the cores and temperature profiles observed in clusters. In particular, we will use the model presented in TN that although having the limitation of being one–dimensional, does allow a semi–analytic treatment of shock heating, adiabatic compression and radiative cooling. With all of these processes being modulated by the cosmology and dark matter properties. The only free parameter in the model is, in the case of external heating, the initial entropy value. In the case of internal heating, the free parameter space is necessarily larger, depending on the epoch and distribution of the heating sources within the halo. Here we will limit the study of the internal scenario to a simple reference case, outlining only the most prominent features, without giving an exhaustive investigation of the many possible heating models.
In the following we describe the two scenarios, referring the reader to the work of TN and discuss in greater detail the resulting entropy profiles.
### 2.1 External heating
In the external heating scenario, a non–negligible initial entropy in the IGM introduces a mass scale where strong accretion shocks no longer form. Below this mass (at the scale of groups) is an effectively adiabatic regime, where gas is just compressed into the potential wells at constant entropy. The observed $`LT^3`$ relationship is essentially produced by the resulting flattening of the density distribution in cluster cores, when shocks turn off completely (see Balogh, Babul & Patton 1999, Tozzi & Norman 1999 (TN)).
After the accretion, the entropy of each accreted shell of gas is kept constant, as long as the density is low and the cooling time (defined as $`t_{cool}\rho ^1T^{1/2}`$ for $`kT>2`$ keV) is correspondingly large. Cooling can be important in the central regions, depending also on the initial entropy level. Indeed, if the initial entropy is too low, cooling is the dominant process, leading to an excessive amount of cooled baryons. In this strong cooling regime (actually disfavoured by many observations and not considered further here), the semianalytical model breaks down. As stated above, further (internal) non–gravitational heating processes are not included in our external scenario.
The mass scale of shock formation is governed by the value of the IGM entropy $`S\propto \mathrm{log}(K)`$, where $`K\equiv kT/\mu m_H\rho ^{2/3}`$ (here we assume that $`\mu =0.59`$ for a primordial IGM). The value of $`K=K_*`$ that produces a good fit to the local $`L`$–$`T`$ relation is $`K_*=(0.2\pm 0.1)\times 10^{34}`$ erg cm<sup>2</sup> g<sup>-5/3</sup> (see TN), which needs to be in place by at least the turn–around epoch for each shell. For a given entropy level, the temperature of the gas at the universal background density is derived as $`kT\simeq 3.2\times 10^{-2}(K_*/10^{34})(1+z)^2`$ keV (assuming the standard nucleosynthesis baryon density value). In terms of energetic budget, this value corresponds to a lower limit of $`kT_{min}\simeq 0.1(K_{34}/0.4)`$ keV per particle in a $`\mathrm{\Lambda }`$CDM universe ($`kT_{min}\simeq 0.04(K_{34}/0.4)`$ keV in a tilted CDM with $`\mathrm{\Omega }_0=1`$). Note that this value is within the energy budget expected from SNe heating (see Loewenstein 2000).
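As an illustrative cross-check (our own sketch, not part of the original paper; the physical constants and the standard-nucleosynthesis baryon density are assumptions stated in the code), the conversion between the entropy constant and the gas temperature at a given overdensity follows directly from the definition of $`K`$:

```python
# kT of gas with entropy K at overdensity delta, using K = kT / (mu m_H rho^(2/3)), cgs units.
M_H   = 1.6726e-24        # hydrogen mass [g]
MU    = 0.59              # mean molecular weight assumed in the text
KEV   = 1.602e-9          # erg per keV
RHO_C = 1.878e-29         # critical density in units of h^2 g cm^-3

def kT_keV(K34, delta=1.0, z=0.0, omega_b_h2=0.02):
    """kT (keV) for K = K34 * 1e34 erg cm^2 g^-5/3 at overdensity delta
    with respect to the mean baryon density (h-independent)."""
    rho_b = omega_b_h2 * RHO_C * (1.0 + z) ** 3
    return MU * M_H * K34 * 1e34 * (delta * rho_b) ** (2.0 / 3.0) / KEV

print(kT_keV(1.0))               # ~0.032 keV: the 3.2e-2 K_34 (1+z)^2 keV quoted above
print(kT_keV(1.0, delta=10.0))   # ~0.15 keV: the pre-shock temperature used in Sec. 2.2
```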
The observed entropy floor in the core of groups is about $`K_{cl}\simeq 0.08h^{1/3}\times 10^{34}`$ erg cm<sup>2</sup> g<sup>-5/3</sup> (corresponding to $`S\equiv kT/n_e^{2/3}\simeq 80h^{1/3}`$ keV cm<sup>2</sup> using the definition of PCN, within an uncertainty of a factor of 2; see PCN, Lloyd–Davies, Ponman & Cannon 2000). We stress that in small mass halos, the central entropy excess is always expected to be smaller than, but close to, the initial value at the epoch of accretion since the accretion proceeded adiabatically, and the cooling has been inhibited by the initial entropy level itself. Possibly, the observed value of $`K_{cl}`$ may be different from that in the external IGM being accreted at $`z=0`$ (the one we would want to detect) if substantial evolution in the entropy occurred since the accretion epoch. Indeed, since groups form typically at early lookback times, their central entropy could actually be lower than the present–day level in the external IGM if $`K_*`$ grows with the cosmic epoch. We will consider this when discussing possible scenarios for actual observations.
Interestingly, the simplest assumption $`K_*=K_{cl}=\mathrm{constant}`$ actually reproduces the observed break in scale invariance rather well. At low masses ($`10^{13}`$–$`10^{14}h^{-1}M_{\odot }`$) the emitting gas will be very extended, without any discontinuity in the entropy since there are no shocks which would otherwise separate the accreting IGM from the accreted ICM. Indeed, data suggest that the total luminosity of a group can vary by a factor of 2 or 3 after the inclusion of the undetected, low surface brightness emission extending up to the virial radius (see Helsdon & Ponman 2000). This is consistent with the prediction from the analytic model in TN, where, for a given temperature, the total luminosity (i.e., including the emission from all the gas within the shock radius) can exceed by a factor larger than 3 the luminosity included within $`100h^{-1}`$ kpc, a radius often used to define the luminosity of poor groups. As we will discuss in §3, detecting the missed emission from loose groups is one of the scientific aims of our proposed observational scheme.
Moving to higher mass scales, the initial entropy level has less effect on the outer cluster regions. The average entropy is dominated by the entropy produced in strong shocks. However, the initial entropy still contributes to building a central density core, even if the effect of cooling starts to be important in eroding the entropy plateau. An important point is that, in contrast to low mass systems, the gas density is still significant at radii larger than the virial radius. Specifically, at distances larger than the shock radius itself, the overdensity of the accreting gas will be $`\simeq 10`$ with respect to the background density. The entropy level of this gas is by definition the value $`K_*`$. We expect a considerable amount of such diffuse gas around massive halos. In particular, the most massive clusters are likely to be still accreting significant amounts of matter in most cosmologies, since they are the last objects to form in any hierarchical universe. The total accretion rate of matter for a cluster of $`\simeq 8`$ keV (roughly corresponding to a mass of $`10^{15}h^{-1}M_{\odot }`$) is expected to be on average quite high at $`z=0`$. The predicted average mass growth in baryons, computed in the extended PS framework (see, e.g., Lacey & Cole 1993), is about $`f_b\,0.24\times 10^{15}M_{\odot }`$/Gyr for $`\mathrm{\Omega }_0=1`$ with a tilted, cluster normalized CDM spectrum (tCDM) and $`f_b\,0.08\times 10^{15}M_{\odot }`$/Gyr for $`\mathrm{\Omega }_0=0.3`$ and $`\mathrm{\Lambda }=0.7`$ ($`\mathrm{\Lambda }`$CDM). Here $`f_b`$ is the universal baryonic fraction. In the following we will use the standard nucleosynthesis value $`f_B=0.02h^{-2}/\mathrm{\Omega }_0`$ for $`\mathrm{\Lambda }`$CDM. This value gives similar baryonic accretion rates in $`\mathrm{\Lambda }`$CDM and tCDM. However, we are forced to use a value at least two times larger in tCDM, in order to have the average baryonic fraction in halos as high as $`15`$ %, as observed (White et al. 1993). As a general feature, the accretion rates are correspondingly higher at higher redshifts for the same masses. However, in this current work we will focus on $`z\simeq 0`$.
The break scale between adiabatic (low masses) and shock (large masses) regimes can be investigated by studying the dependence of the infall velocity $`v_i`$ of the accreting IGM on the total mass of the system. Approximating the infall as an adiabatic flow, we calculate the infall velocity at the shock radius, where the gas is expected to be shocked and reach hydrostatic equilibrium. The effect of a constant entropy minimum $`K_*`$ on the infalling IGM is to introduce a compression term proportional to the square of the sound speed $`c_s^2=\gamma K_*\rho _e^{2/3}`$, where $`\rho _e`$ is the external baryonic density and $`\gamma =5/3`$ is the adiabatic index for a monoatomic gas. In fact, part of the gravitational energy goes into compression, to give for the infall velocity:
$$\frac{v_i^2}{2}=\frac{v_{ff}^2}{2}+\mathrm{\Delta }W-\frac{c_s^2}{\gamma -1}+\frac{c_s^2}{\gamma -1}\left(\frac{\rho _{ta}}{\rho _e}\right)^{\gamma -1},$$
(1)
where $`v_{ff}`$ is the free fall velocity, and $`\mathrm{\Delta }W`$ is the contribution added to $`v_{ff}^2/2`$ to have the total work done by the gravitational potential (see TN). The fourth term on the right hand side results from the initial condition $`v_i=0`$ for a gas shell at the turnaround radius, when the gas had a density $`\rho _{ta}\rho _{back}`$. The compression term carries an increasing fraction of the total gravitational energy when the system mass is lower, or, since the sound speed is proportional to $`K_{}^{1/2}`$, when the entropy is higher.
The infall velocity can then be compared with the sound speed in the infalling gas, to test whether $`v_i>c_s`$ and shocks can develop. In Figure 1 the infall velocity computed at the shock radius is plotted as a function of the virialized mass, which is in turn a function of the redshift (here we have assumed an average mass growth for the dark halo commensurate with a $`\mathrm{\Lambda }`$CDM universe). In the above picture only the external entropy level is needed to determine the transition between the adiabatic and the shock regime. The external density $`\rho _e`$, which determines the sound speed in the IGM, is obtained by imposing mass conservation at the accretion radius, plus the assumption that the accreted baryons are a constant fraction of the total virialized mass (see TN).
At early epochs, when the virialized mass is still low, the compression term is important and the infall velocity is lower than the sound speed. In this case, the accretion of the IGM proceeds entirely adiabatically, giving rise to an adiabatic core (see the insets of Figure 1). As the virialized mass grows, the infall velocity eventually becomes larger than the sound speed, marking the epoch when shocks dominate (here we have neglected the small velocity of the shock front in the cluster rest frame). The infall velocity then asymptotically approaches the free fall velocity of the system. In Figure 1 an external constant entropy of $`K_{34}=0.3`$ (where $`K_{34}`$ is the entropy in units of $`10^{34}`$ erg g<sup>-5/3</sup> cm<sup>2</sup>) has been assumed for a low density ($`\mathrm{\Omega }_0=0.3`$) flat cosmology, for objects of mass $`10^{14}h^{-1}M_{\odot }`$ and $`10^{15}h^{-1}M_{\odot }`$. At lower masses, the transition from the adiabatic to the shock regime occurs later, giving rise to a relatively larger adiabatic core. From Figure 1 we can also see that for low mass systems ($`M<10^{14}M_{\odot }`$), a growing fraction of the accreted baryons retains the pre-collapse entropy level. This fraction approaches unity at the scale of poor systems and groups, providing self-consistency with our expectations of isentropic gas in groups. Note that the transition between the adiabatic and the shock regime is marked by a transition radius $`r_t`$ within which an approximately constant baryonic mass is contained. This can provide a meaningful observable as further discussed in §3.
As a consequence of the above picture, the entropy profiles change dramatically along the mass sequence: flat in low mass halos, steep and discontinuous for large masses. The steep part of the profile corresponds to strongly shocked gas, while the flat part is the adiabatically accreted gas. External to the shock, the accreting gas is simply adiabatically compressed. In the following sections we will describe in greater detail the resulting entropy profiles.
### 2.2 The entropy profile with external heating
We focus first on large mass scales, where a strong accretion shock is expected irrespective of the initial entropy level. Such an accretion shock is likely to occur at approximately the virial radius, where the gas density can typically be a factor $`1000`$ lower with respect to that at the cluster center. A simple relation exists between the density jump and the temperatures of the hot internal, and colder external, gas (Landau & Lifshitz 1959, see Cavaliere, Menci & Tozzi 1997, 1999):
$$\rho _i/\rho _e=2\left(1-T_e/T_i\right)+\sqrt{4\left(1-T_e/T_i\right)^2+T_e/T_i},$$
(2)
where $`\rho _i`$ and $`T_i`$ are the internal gas density and temperature. The external density $`\rho _e`$ and temperature $`T_e`$ refer to the infalling gas just prior to being shocked. Note that $`T_e`$ is not simply the temperature of the field IGM. The accreted IGM will experience adiabatic compression prior to reaching the accretion shock, thus $`kT_e=\mu m_pK_*\rho _e^{2/3}`$. The overdensity of the baryons with respect to the background value is expected to be $`\delta \simeq 10`$ for rich clusters both in $`\mathrm{\Lambda }`$CDM and in tCDM. This would correspond to temperatures of $`kT_e\simeq 3.2\times 10^{-2}K_{34}\delta ^{2/3}\simeq 0.15K_{34}`$ keV at $`z=0`$. However, since in tCDM we are forced to assume a baryonic density larger by a factor of $`\simeq 2`$ (with respect to the standard nucleosynthesis value), the external temperature $`T_e`$ for a given $`K_*`$ and $`\delta `$ will be about $`60`$ % larger.
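A one-line numerical sketch (ours, added for illustration) of the jump condition in Eq. (2); the second call combines the representative pre-shock temperature quoted above with the $`\simeq 8`$ keV cluster discussed in §2.1, and the first call recovers the familiar strong-shock compression factor of 4 for $`\gamma =5/3`$:

```python
import math

def density_jump(Te_over_Ti):
    """Shock compression factor rho_i/rho_e of Eq. (2) as a function of T_e/T_i."""
    x = Te_over_Ti
    return 2.0 * (1.0 - x) + math.sqrt(4.0 * (1.0 - x) ** 2 + x)

print(density_jump(1e-3))         # strong-shock limit: -> 4
print(density_jump(0.15 / 8.0))   # kT_e ~ 0.15 keV accreting onto an ~8 keV cluster
```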
To compute the internal density at the shock boundary, we could use the detailed and self–consistent density and temperature profile of the ICM resulting from a minimum entropy model, as derived in TN. However, the density profiles can be fitted to a good approximation with a $`\beta `$ model (Cavaliere & Fusco Femiano 1976), at least at large radii where the cooling time is large, as it is shown in Figure 2. Therefore for simplicity we use it to approximate the gas density internal to the shock:
$$\rho =\rho _c\left(1+(r/r_c)^2\right)^{-\frac{3}{2}\beta },$$
(3)
where $`r_c`$ is the core radius. For a flat temperature profile, the observed X-ray surface brightness at the shock radius $`r_s`$ can be written as:
$$\mathrm{\Sigma }=\mathrm{\Sigma }_c\left(1+(r_s/r_c)^2\right)^{-3\beta +1/2}.$$
(4)
In this case a simple inversion from surface brightness to density within the shock radius is given by:
$$\frac{\rho }{\rho _c}=\left(\frac{\mathrm{\Sigma }}{\mathrm{\Sigma }_c}\right)^{1/(2-1/3\beta )}.$$
(5)
For a $`\beta `$ model, the discontinuity in the surface brightness expected at the shock is approximately $`(\rho _i/\rho _e)^{2-1/3\beta }(T_i/T_e)^{1/2}`$ (for $`T_e\gtrsim 1`$ keV) with respect to the extension of the pure beta model to the shock radius. In the case in which there is no shock ($`\rho _i/\rho _e=1`$) and the temperature profile decreases adiabatically as $`T\propto \rho ^{2/3}`$ (expected in small groups), we can use the same functional form, replacing $`\beta `$ with an effective $`\beta ^{\prime }`$ which accounts for the mild dependence of the emissivity $`ϵ`$ on temperature. In the case of pure bremsstrahlung and temperature $`kT>2`$ keV, $`ϵ\propto T^{1/2}`$ and $`\beta ^{\prime }=\frac{7}{6}\beta `$ (see Ettori 1999). At lower temperatures we must include the contribution from line emission, which is significant in the wide energy band of XMM ($`0.1`$–$`12`$ keV). For $`0.1<kT<2`$ keV, $`ϵ\simeq \mathrm{constant}`$ to a good approximation if the metallicity is about one third solar, virtually removing the temperature dependence and giving $`\beta ^{\prime }=\beta `$ again. The entropy jump can therefore be detected when both the X-ray surface brightness (from which density is determined) and the temperature are measured at the shock radius. In Figure 2 the surface brightness and the emission weighted temperature profiles are shown for three relevant cases of the external scenario, the same that will be discussed in detail in §3.
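A short sketch (ours, for illustration; the value of $`\beta `$ is an arbitrary, typical choice) verifying that the inversion of Eq. (5) is consistent with the $`\beta `$-model profiles of Eqs. (3)–(4):

```python
import numpy as np

def beta_model_density(r, rho_c, r_c, beta):
    """Gas density of Eq. (3)."""
    return rho_c * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

def beta_model_sb(r, sigma_c, r_c, beta):
    """Surface brightness of Eq. (4), flat temperature profile."""
    return sigma_c * (1.0 + (r / r_c) ** 2) ** (-3.0 * beta + 0.5)

def density_from_sb(sigma_over_sigma_c, beta):
    """Inversion of Eq. (5): rho/rho_c from Sigma/Sigma_c."""
    return sigma_over_sigma_c ** (1.0 / (2.0 - 1.0 / (3.0 * beta)))

# consistency check of the inversion for a typical beta = 2/3
beta, r_c = 2.0 / 3.0, 1.0
r = np.linspace(0.0, 10.0, 5)
rho = beta_model_density(r, 1.0, r_c, beta)
sb = beta_model_sb(r, 1.0, r_c, beta)
print(np.allclose(rho, density_from_sb(sb, beta)))   # True
```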
The effect of the accretion shocks is to raise the entropy over its initial (external) level. Since the transition from the adiabatic accretion to the shocked accretion is very fast (with the shock radius rapidly approaching the virial one, see Figure 4, third panel), the transition between the two regimes is recorded in the entropy profile as a sudden change of slope at the transition radius $`r_t`$. In the inner part the adiabatic core is visible (though eventually it will be partially erased by cooling), while in the outer, shocked regions a featureless power law profile is expected. A reference slope for this power law can be derived to a first approximation with the usual assumption of an isothermal profile: $`\rho \propto r^{-2}`$ gives $`K\propto r^{4/3}`$. This value is close to the $`K\propto r^{1.1}`$ predicted from the model (see TN), where a temperature gradient is present due to further adiabatic compression after accretion, and the density distribution is somewhat steeper than $`r^{-2}`$. In other words, the gas in clusters is well described by a polytropic distribution with a polytropic index $`\gamma _p\simeq 1.2`$ (see Loewenstein 2000; TN). The expected entropy profiles in the external heating scenario are shown in Figure 3 for two objects of $`M=10^{15}h^{-1}M_{\odot }`$ and $`M=10^{14}h^{-1}M_{\odot }`$. We can see the entropy core partially erased by cooling within $`r<0.1R_{vir}`$, and the shocked gas with the characteristic slope $`K\propto r^{1.1}`$ at $`r\simeq 0.1`$–$`0.3h^{-1}`$ Mpc.
### 2.3 Internal heating
How do these predictions change in the internal heating scenario? A first major difference is that the gas is heated when it is at much higher densities, and the change of entropy for a given energy input is consequently lower. In other words, to reproduce the breaking of the scale invariance in X–ray halos, a larger energy budget is needed in the internal scenario with respect to the external one. In the internal heating case, the IGM is essentially cold when it is accreted, and the gas always experiences strong shocks, even in low mass objects. In this case, the gas may be detected via emission in the UV band (and may be related to the UV excess detected around nearby clusters, see Lieu, Bonamente & Mittaz 2000). Alternatively, such gas may be seen in absorption against bright background sources, in the X–ray band if $`kT0.1`$ keV (see Hellstein, Gnedin & Miralda–Escude’ 1998), or in the UV band if $`kT0.01`$ keV. In the last case OVI, which peaks at $`0.03`$ keV in collisional equilibrium, is the best diagnostic (K. Sembach, private communication).
It is worth recalling that in the absence of any heating, in the central regions of halos the entropy gained by shock heating alone is not enough to prevent the gas from cooling. In fact, the absence of an initial extra entropy would result in a cooling catastrophe (see White & Rees 1978, Blanchard, Vall Gabaud & Mamon 1996). Furthermore, the combination of shock heating and central cooling (without additional heating) is, in fact, not able to generate an entropy floor by the selective removal of the lowest entropy gas in the very center (see TN), a mechanism sometimes advocated to explain the entropy plateau (see PCN).
In the internal heating scenario, the number of free parameters is larger than one and depends on the model used to describe the heating sources. Here we assume a simple phenomenological model, with a distribution of sources of equal mass (or output) given by a King profile with a large core (about $`1/2`$ of the virial radius). The number of the heating sources is then normalized to the total mass of the given halo. The absolute number density of the sources is clearly degenerate with the average heating rate associated with each source, since only the global heating (as a function of the radius $`r`$) is relevant to the final entropy profile. Therefore, for each heating model we quote only the average heating per particle released up to the present epoch. Note that the heating is defined as the amount of energy dumped into the ICM, which can clearly be different from the total energy budget of the sources, depending on the gas heating efficiency.
We note that the difference in the energy budget between the internal and the external heating, is further exacerbated by cooling. Indeed, if the internal heating is not rapid enough to keep the density low and prevent further cooling, the densities in cluster centers will be always very large, and the same final entropy profile will require a very large energy budget. A direct consequence is that if the heating is not large enough, the energy input will be rapidly re-emitted by the high density gas, no matter how much energy has been released (see, e.g., Lewis et al. 1999).
The heating rate must have a dependence on $`z`$; indeed, if the heating starts in the halo but at very early epochs, before a non–negligible amount of mass of the final halos has been accreted, the gas never reaches high overdensities and the properties of the external scenario are reproduced. Here we assume (motivated by the need to reproduce the observed $`L`$-$`T`$ relation) that the heating rate peaks at $`z1`$, with an exponential decline at higher redshifts, and a mild power law decline $`(1+z)^2`$ for $`z<1`$. The final energy budget is, in this case, mass dependent, since the number of sources is larger in higher mass halos. Moreover, enhanced heating is expected in the center, where the density of the heating sources is higher.
For the assumed peak redshift $`z=1`$, we calibrate the energy budget by requiring consistency with local properties (e.g., fitting the $`L`$$`T`$ relation). We find that a total budget of about $`12`$ keV per particle in clusters ($`M=10^{15}h^1M_{}`$) and $`0.51`$ keV in small clusters ($`M=10^{14}h^1M_{}`$) can reproduce approximately the scaling relation for X–ray halos (see also Figure 4). Despite the uncertainties in the internal heating model, we always find that the energetic budget is more than an order of magnitude larger with respect to the external scenario in order to reproduce approximately the $`L`$$`T`$ relation. This estimate is robust and it is expected also on the basis of a simple analytical calculation, since the typical overdensities within virialized halos are of the order of few hundred, while the typical overdensities in the external accreting gas are $`\delta 10`$ (see TN). The energetic budget in the internal scenario may be too high to be provided by SNe heating only. In this perspective, the comparison of the external and internal scenario may put constraint on the nature of the heating sources.
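The origin of this order–of–magnitude difference can be made explicit with a simple scaling: for heat injected at (approximately) constant density, the energy per particle needed to produce a fixed entropy change scales as $`\rho ^{2/3}`$. The sketch below is our own illustration, with overdensities of a few hundred and $`10`$ taken as representative values for the internal and the external gas respectively.

```python
# Energy per particle needed to raise the entropy by a fixed Delta K, for heat
# injected at (roughly) constant density:
#   Q = (3/2) * Delta(kT) = (3/2) * mu * m_p * rho**(2/3) * Delta K,
# so Q scales as rho**(2/3).  Compare internal and external heating:
delta_internal = 300.0   # assumed typical overdensity inside virialized halos
delta_external = 10.0    # assumed typical overdensity of the accreting gas

ratio = (delta_internal / delta_external) ** (2.0 / 3.0)
print(f"Q_internal / Q_external ~ {ratio:.0f}")   # ~10: about an order of magnitude
```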
In Figure 3 we show the resulting entropy profiles, in comparison to those of the external scenario. In a massive halo, the relatively flat distribution of heating sources results in a flatter entropy profile in the center $`Kr^{0.5}`$, while in the external regions the same profile of the external case $`Kr^{1.1}`$ is recovered. The slope of the inner profile depends on the total amount of energy injected, as shown by the labels in Figure 3. The same amount of heating in a smaller halo ($`M10^{14}h^1M_{}`$) results in a larger entropy core that emerges in the central regions. In the regions where a negative entropy gradient develops, we expect instabilities and mixing. We do not show the case of very small halos, since large negative entropy gradients develop with consequent instabilities that cannot be included in the present treatment. The effect may be real, in the sense that in small halos the effect of the internal heating may disrupt the profile resulting from adiabatic/shocked accretion, and can eject virtually all the gas from the potential well, giving a patchy and irregular surface brightness. This possibility should be investigated with fully 3D numerical simulations.
In principle, measuring the entropy profile at a radius $`0.1R_{vir}`$ in high and medium mass halos, can reveal a signature of the internal heating scenario as a departure from the profile expected in the external scenario. However, we emphasise that this is just an example based on a particular choice of the internal heating distribution, and in some cases the internal heating may result in an entropy profile very similar to that of the external scenario. A more comprehensive investigation of the parameter space in the case of internal heating will be presented in a future work.
It is worth noting, nevertheless, that observations at a radius $`r0.1R_{vir}`$ will be less difficult with respect to those at the shock radius, due to the higher surface brightness. The signal will be much higher and the entropy profile can be reconstructed in greater detail (see PCN). In principle, it will also be possible to produce entropy maps inside clusters. In the assumption that the ICM has not been stirred due to massive merger events (an assumption which is implicit for the spherical model used here and described in TN), the entropy maps would trace the patches of major heating internal to the cluster.
In Figure 4 we show the time evolution of a cluster, a small cluster and a group in a $`\mathrm{\Lambda }`$CDM cosmology for the internal (dashed lines) and external heating (continuous line) scenarios. The external scenario assumes $`K_{34}=0.3`$, while the internal scenario assumes an energy budget of $`0.9`$ keV per particle. The lowest mass ($`M=10^{13}h^1M_{}`$) is not shown for the latter model. At $`z=0`$ the total luminosities and the emission weighted temperatures are quite similar, and it is not possible to distinguish the two scenarios from the statistical properties of the X–ray halo population only; both scenarios fit the $`L`$$`T`$ relation (as can be confirmed from the final luminosities and temperature at the different scales; see however TN). The shock radius is much closer to the virial one in the external scenario, since the external gas is cold and its infall is not slowed by the pressure support while it is accreted. The largest difference in the shock position are predicted at low masses, where, unfortunately, the shock feature is currently hard to detect due to the low surface brightness.
Another way to break the degeneracies between the internal and the external scenarios is to look at the global properties of high redshift halos, for which the expected differences are larger. Indeed, the entropy level is the major driver of the evolution of the global properties of X–ray halos (see also Bower 1997). At large redshifts ($`z1`$) the luminosity and temperature evolution depends on the intensity and the timescale of the non–gravitational heating. In the internal case, the luminosity and temperature have a flatter time dependence. The shock radius is always close to the virial radius (third panel) in the internal heating scenario, because the entropy of the external gas is always negligibly small. Thus the epoch and distribution of the heating affects global quantities, such as the contribution of X–ray halos to the X–ray background (see Wu, Fabian & Nulsen 1999, where the gas is heated along the merger tree of halos and requires an average extra energy of $`13`$ keV per particle). However, we recall again that the many parameters produce a large degeneracy in the internal scenario. The external scenario, instead, provides better defined predictions, since it depends only on the initial value of the entropy. This further strengthens our claim that the best way to probe the thermodynamic history of the ICM is by looking at the entropy profile of nearby halos rather than the global properties of unresolved distant halos.
## 3 Simulated Observations and Feasibility
Here we focus on the observation of nearby halos. This is the strategy that we propose as the best way to investigate the thermodynamic history of the baryons, taking advantage of the spatial and spectral resolution of present day X–ray missions. In particular the external heating scenario can be tested by the detection of the external entropy level around present day clusters, allowing the accretion shock itself to be located. A power law entropy profile, $`Kr^{1.1}`$, is expected between the shock radius and the transition radius $`r_t`$. Within this last radius, a flatter entropy profile will mark the original entropy plateau, partially eroded by cooling.
As described in §2, the two dominant, gravitationally–driven, mechanisms for changing the thermodynamic state of cluster gas are shock heating and adiabatic compression. While shock heating occurs principally at the accretion radius, adiabatic compression will occur both interior and exterior to this radius. As shown above, adiabatic compression of gas during accretion (prior to being shocked) will raise the external gas temperature to values dependent on the initial entropy. The average entropy in the external IGM at $`z=0`$ can span an order of magnitude and still give a good fit to the $`L`$–$`T`$ relation.
In the case of a constant entropy, the range is $`K_{}=0.20.4\times 10^{34}`$ erg g<sup>-5/3</sup> cm<sup>2</sup>, which corresponds approximately to pre–shock (adiabatically–raised) temperatures of $`kT_e0.1`$ keV maximum. Another interesting possibility is to assume a strong evolution in the entropy, of the form $`K_{}(1+z)^2`$. This case gives a good fit to the local $`L`$–$`T`$, and at the same time has a value as high as $`K_{}(0)=3\times 10^{34}`$ erg g<sup>-5/3</sup> cm<sup>2</sup> in the external gas, corresponding to a temperature of $`1`$ keV. Despite the very high final value of the entropy, the total (average) energetic budget is less than $`0.1`$ keV per particle, even lower than in the case with constant $`K_{}0.3`$. The reason is that in the constant entropy case, most of the energy is released at high redshift, when the density is higher, while if $`K_{}(1+z)^2`$, most of the energetic budget is released only at small $`z`$. We also consider this case since from the observational point of view it is one of the most tractable; an external temperature of $`1`$ keV makes the gas (at an overdensity of $`10`$) detectable in emission. We note that such high temperatures in the infalling gas can also be achieved in the case in which the gas is previously gravitationally shocked in filaments (see Cen & Ostriker 1999 and references therein). As we will discuss later, this gravitational contribution to the external entropy may help in attaining a large value of $`K_{}`$ in the outskirts of clusters.
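For orientation, the correspondence between $`K_{}`$ and the pre–shock temperature can be checked with the simple conversion below. We assume $`K=kT/(\mu m_p\rho ^{2/3})`$ with $`\mu =0.6`$ and a mean baryon density set by $`\mathrm{\Omega }_bh^20.02`$; these inputs are illustrative choices of ours, not values quoted in the text.

```python
# Rough conversion from the entropy parameter K (in units of 1e34 erg g^-5/3 cm^2)
# to the gas temperature at a given baryonic overdensity, assuming
# K = kT / (mu * m_p * rho**(2/3)) with mu = 0.6 and Omega_b h^2 ~ 0.02 (assumed).
M_P   = 1.67e-24           # proton mass, g
MU    = 0.6                # mean molecular weight (assumed)
RHO_B = 0.02 * 1.88e-29    # mean baryon density today, g cm^-3 (assumed)
KEV   = 1.602e-9           # erg per keV

def kT_keV(K34, overdensity):
    """Temperature (keV) of gas with entropy K34 * 1e34 at the given overdensity."""
    rho = overdensity * RHO_B
    return K34 * 1e34 * MU * M_P * rho ** (2.0 / 3.0) / KEV

for K34 in (0.3, 3.0):
    for delta in (10, 30):
        print(f"K_34 = {K34:3.1f}, overdensity {delta:2d} -> kT ~ {kT_keV(K34, delta):.2f} keV")
```

With these inputs, values of $`K_{34}`$ in the range $`0.2`$–$`0.4`$ indeed correspond to pre–shock temperatures below $`0.1`$ keV, while $`K_{34}=3`$ gives several tenths of a keV up to $`1`$ keV at the overdensities typical of the accretion region.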
Once the external level of the entropy is assumed, the other relevant piece of information is that the gas density immediately exterior to the shock will be no more than a factor of $`4`$ lower than that at the inner shock boundary, following equation 2.
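For reference, the factor of $`4`$ is the strong–shock limit of the standard Rankine–Hugoniot density jump: for a monatomic gas with $`\gamma =5/3`$, $`\rho _i/\rho _e=(\gamma +1)/(\gamma -1)=4`$ when the Mach number of the accreting flow is large, and lower for weaker shocks.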
It is also a consequence of the increasing shock strength at larger radii and of the adiabatic compression that a mildly negative (radially decreasing) temperature gradient is expected, in good agreement with current observations of clusters (Markevitch 1998). The temperature gradients are expected to be stronger when the entropy distribution gets flatter, until the adiabatic limit $`T\rho ^{2/3}`$. For simplicity, in rich clusters we will consider an isothermal distribution of gas within the shock radius, since the predicted temperature profiles can be well approximated as constant (the predicted polytropic index is $`\gamma _p1.1`$, where $`\gamma _p=1`$ is the isothermal case, see Figure 2).
What is the best strategy to investigate the described scenarios? Recent X-ray data lack the necessary combination of both spatial and spectral resolution to have routinely detected the accretion shock and detailed entropy profiles of clusters. The same situation applies for low luminosity groups (for $`kT<1`$ keV, the luminosity is often defined within a fixed radius of $`100h^1`$ kpc, see Ponman et al. 1996). ROSAT for example, while its limiting surface brightness was quite low (a typical background level $`1\times 10^{15}`$ erg s<sup>-1</sup> cm<sup>-2</sup> arcmin<sup>-2</sup> in the 0.5-2 keV band), had a point spread function width from $`2060`$ arcsec and insufficient spectral sensitivity to constrain temperatures to the precisions required. Attempts to push the capabilities of the ASCA X–ray satellite to their limits and observe this accretion shock in archival data of nearby clusters failed, mainly due to the poor ASCA point-spread function (Gendreau & Scharf, private communication).
Current missions should however be well suited to detecting cluster accretion shocks and entropy profiles. Chandra’s high spatial resolution ($`110`$ arcsec) may allow details of the spatial structure of a shock region to be investigated. With an effective area of $`4600`$ cm<sup>2</sup> at 1 keV, XMM has approximately 10 times higher throughput than ROSAT, and combined with a $`615`$ arcsec PSF and excellent spectral resolution is ideally suited to this task. XMM can, for example, detect an accretion shock in the nearby Perseus cluster ($`z=0.018`$, $`L=2.8\times 10^{44}`$ erg s<sup>-1</sup> in the 2-10 keV band) with an exposure of the order of 20 ksec. However, in this case, the area to be searched is extremely large compared to the field of view of XMM.
For the lower mass groups the detection, or non-detection, of the accretion shock is much more difficult. However, determining the emission profile beyond the regime currently studied ($`100h^1`$kpc) and the nature of the entropy profile will be quite feasible. In both cases (rich and poor systems) a key observational criterion will be the ability to detect emission at a level of $`10^{16}`$ erg s<sup>-1</sup> cm<sup>-2</sup> arcmin<sup>-2</sup> (see Figure 2). Additionally, the ability to accumulate sufficient counts to constrain gas temperatures to precisions of 10-20% will be necessary. In general, with a typical background, we find that to measure the gas temperature with a precision of $`1020`$% requires at least $`10002000`$ source photons respectively for low ($`1`$ keV) and high ($`8`$ keV) temperatures. This is a consequence of lower temperature spectra having more photons on the exponential cutoff, where the impact of temperature is strongest.
As an observational baseline for rich clusters we have chosen the Abell 2029 system. At a redshift of $`z=0.0767`$ and with $`L_{210keV}=2.07\times 10^{45}`$ erg s<sup>-1</sup> ($`h=0.5`$) and $`kT=7.8`$ keV (David et al. 1993) this cluster presents an optimal angular scale ($`30`$ arcmin) and surface brightness. In addition Abell 2029 is a strong cooling flow cluster, and, at least in the inner regions, appears to be in equilibrium with no sign of merging of cluster subunits (Sarazin et al 1998). The core radius of 0.164h<sup>-1</sup>Mpc, corresponds to $`2.5`$ arcmin. Assuming $`\beta =2/3`$, we obtain an estimated surface brightness at the shock of $`1\times 10^{16}`$ erg s<sup>-1</sup> cm<sup>-2</sup> arcmin<sup>-2</sup>, in close agreement with our model (see Figure 2).
We can now ask what would be required to measure the entropy profile to an emission level of $`10^{16}`$ erg s<sup>-1</sup> cm<sup>-2</sup> arcmin<sup>-2</sup>, i.e., to the shock radius. Combining the expected count rates (from XSPEC) for the PN +2MOS for a 7.8 keV plasma and the expected background counts we estimate that to achieve an emission detection of $`6\sigma `$, and at least 2000 cluster photons, requires $`70`$ ksec and counts accumulated from $`270`$ arcmin<sup>-2</sup>. Given the angular dimensions of A2029 we could meet these criteria with 4 XMM pointings of 70 ksec, equally spaced around the expected $`20`$ arcmin shock radius. The detection of the accreting gas beyond the shock radius is more difficult, since the surface brightness of the external gas can be an order of magnitude lower. We therefore perform a more realistic simulation in order to assess the feasibility of its detection.
Using QUICKSIM (Snowden 1998) and XSPEC we have simulated a range of XMM observations of cluster gas emission. The simulated cluster (group) is orientated such that the XMM field of view is centred on the shock radius. We simulate both the PN and two MOS EPIC cameras. The internal and cosmic background count rates in the $`0.112`$ keV band at the coordinates of Abell 2029 are estimated to be $`3.67\times 10^3`$ ct s<sup>-1</sup> arcmin<sup>-2</sup> (PN) and $`1.11\times 10^3`$ (MOS) and are included.
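As a sanity check of the exposure estimate given above, the quoted background rates can be combined with the requirement of $`2000`$ cluster photons into a simple Poisson significance. The sketch below is deliberately crude (it ignores vignetting, the point spread function and any spectral weighting), but it recovers a detection at about the $`6\sigma `$ level for $`70`$ ksec accumulated over $`270`$ arcmin<sup>2</sup>.

```python
import math

# Back-of-the-envelope check of the exposure estimate quoted in the text:
# ~2000 cluster photons over ~270 arcmin^2 in ~70 ksec, against the PN + 2xMOS
# background rates given above.  S/sqrt(S+B) is a simplification (no vignetting,
# PSF or spectral weighting), so only the order of magnitude is meaningful.
t_exp   = 70e3          # s
area    = 270.0         # arcmin^2
bkg_pn  = 3.67e-3       # ct s^-1 arcmin^-2
bkg_mos = 1.11e-3       # ct s^-1 arcmin^-2, per MOS camera

S = 2000.0                                    # required cluster photons
B = (bkg_pn + 2 * bkg_mos) * area * t_exp     # background counts in the same region
print(f"background counts       : {B:.0f}")
print(f"significance S/sqrt(S+B): {S / math.sqrt(S + B):.1f} sigma")
```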
We began by simulating a 200 ksec XMM observation of A2029 with a pointing offset $`20`$ arcmin from the cluster center (i.e. still within the expected shock radius). The external entropy level is set to $`K_{34}=3(1+z)^2`$. The assumed surface brightness profile is the one shown in Figure 2 as a long–dashed line, which is flatter than the average expectation but better resembles the realistic case of A2029. We choose tCDM as the background cosmology. A critical universe has larger accretion rates at $`z=0`$, and then smaller shock radius, with respect to the $`\mathrm{\Lambda }`$CDM case (in a $`\mathrm{\Lambda }`$CDM universe, a shock radius $`20`$ % larger than the virial one is expected, see TN, and Figure 4). The resulting outputs are spatially and spectrally analyzed with XSELECT and XSPEC, assuming an absorbed Raymond-Smith spectrum. Background counts are subtracted for all spatial data bins using a simulated observation of blank sky, to account for vignetting effects. The spectral fits are performed in concentric annuli, centred on the cluster core. The neutral hydrogen column density is fixed ($`N_H=3\times 10^{20}`$ cm<sup>-2</sup> to match the value at A2029) and the redshift is fixed, while the temperature, normalization and metallicity are allowed to vary. The metallicity is always poorly constrained in these simulations.
The simulated cluster observations are shown in Figure 5, where only the data of the PN detector have been used. In a more realistic observation, the use of the two MOS detectors significantly aids obtaining a stronger signal, or can decrease the required exposure time (see Figure 6). The errors in the figure correspond to 1-sigma. In the first case, the external gas is detected and its temperature measured with about $`20`$% uncertainty (1 sigma). The reason for the small error in the external temperature, despite the low emissivity of the external gas, is due to its value $`kT_e0.7`$; for this value the exponential cutoff, from which the temperature is measured, falls in the spectral region of maximum sensitivity for XMM. In the third panel the resulting entropy profile is shown. Despite the large uncertainty in the external value, the discontinuity in the entropy is visible. We point out that we assumed an external density profile $`\rho _{ext}r^2`$. The presence of substructures, such as small clumps being accreted by the cluster, can make the gas much more visible in emission, due to the enhanced density. Moreover, the entropy of the gas is not changed by the presence of substructure, which contributes only with adiabatic compression. Therefore we believe that the case shown in Figure 5 is to be considered realistic, if not pessimistic.
The second cluster case (Figure 5) envisages a strong shock front with a cold, low-emission external plasma ($`kT_e<0.1`$ keV), corresponding to the case with $`K_{34}0.3`$ constant (here in a $`\mathrm{\Lambda }`$CDM; c.f. 2nd row in Figure 2). In this case only the gas internal to the shock radius is detectable, due to the low entropy of the external gas. The non-detection of an external gas halo would not provide any direct constraint on the external entropy level. Note that the region of the cluster being observed here is at a radius approximately twice that of the last significant point of Sarazin et al. (1998): the surface brightness profile in clusters has never been tested to such large radii.
Finally, the third case (Figure 6) is expected to represent lower mass groups: pure adiabatically compressed ICM with a flat entropy profile, and a corresponding steeper temperature gradient following $`T\rho ^{2/3}`$. The emission from the halo smoothly fades into the external IGM, without any discontinuity. In this case, with $`K_{34}=0.3`$ (in $`\mathrm{\Lambda }`$CDM), the surface brightness is characterized by a relatively large core. A very interesting aspect of this observation is the detection of emission from small groups at an unprecedented distance from the center. At the same redshift as A2029, 6 arcmin correspond to 0.4 $`h^1`$ Mpc. At such a radius, the surface brightness and the temperature gradient are clearly detected. The entropy profile is flatter than the shocked power law in the center, but the errors are too big at $`0.4h^1`$ Mpc to discriminate between a shocked and an adiabatic profile in the very external regions.
As already mentioned, such low surface brightness emission is now emerging from ROSAT data (Helsdon & Ponman 2000) and predicted by TN. We recall that for practical reasons the current luminosities of loose groups are estimated only within a fixed radius of $`100h^1`$ kpc (see Ponman et al. 1996), while the total luminosities can be higher by a factor larger than 3 when including all the gas accreted from the halo. Note however, that here and in TN we define an accreted mass of baryons $`M_B=f_BM_{tot}`$. In the case with $`K_{}0.1`$, the most external part of this gas is compressed (but not shocked) by the presence of the potential well and can be located at radii as large as three times the virial radius, thus it can hardly be said “to be accreted”. This reflects the difficulty in defining X–ray emission in small mass halos, in contrast to large mass halos where the X–ray emission is dominated by the central regions. These considerations add further interest to tracing the X–ray emission of small mass objects to the largest radii.
In Figure 6 we show, as a function of the total X-ray luminosity, the exposure times needed with XMM to detect the emission interior and exterior to the accretion shock with a signal to noise of $`5`$, and enough photons to derive the temperature to within 20% uncertainty. Here we use the $`210`$ keV luminosity, and map to $`T`$ using the 2-10 keV EXOSAT $`L`$-$`T`$ relation (David et al. 1993). We have assumed the same redshift as Abell 2029 ($`z=0.0767`$), $`\beta 0.7`$, and an external temperature of about $`1`$ keV. As we already discussed, such an external temperature is expected in the case with $`K_{34}=3(1+z)^2`$ , while a constant $`K_{34}0.20.4`$ would give an almost undetectable external gas. Thus we use Figure 6 as a guide to the needed exposure time in the cases when the shock is detectable (i.e., $`kT>0.2`$ keV); we recall that temperatures lower than 1 keV (but still larger than 0.2 keV) are more easily constrained due to their stronger exponential cutoff. The limits in Figure 6 are derived using the signal in the PN + 2MOS detectors, for different choices of the ratio $`R_S/R_V`$ of the shock to the virial radius (as long as $`\rho _i/\rho _e>1`$). We have always assumed the shock fronts to be at the center of the XMM field of view. The small circle represents our simulated observation of Abell 2029. The constraints on the observation times are dominated by the requirement to have $`4000`$ and $`2000`$ photons respectively inside and outside the shock (continuous lines) while the requirement of the 5-sigma emission detection becomes dominant at lower luminosities (dashed lines). It is clear that, with sufficent exposure at lower luminosities, the shock/adiabatic transition can be mapped to a considerable extent, allowing a direct test of the general picture summarized in §2.
Looking further ahead, two possible missions would make the cluster accretion shock a routine observation in studies of clusters. ESA’s X-ray evolving Universe Spectroscopy (XEUS) mission, has design goals for a $`3\times 10^5`$ cm<sup>2</sup> effective area ($`70`$ times larger than XMM) and sub 2-arcsec imaging, with high spectral resolution. NASA’s CONSTELLATION-X with a factor 20-100 times larger area than current missions, plus the ability to perform high-resolution spectroscopy of extended objects, could potentially see line emission from pre-shock gas superimposed on the continuum emission of the shocked gas.
## 4 Discussion
From empirical evidence it seems clear that non–gravitational heating plays a key role in the thermodynamics of the ICM, but the physical mechanism responsible for this heating is not known. A debate exists over whether the sources of heating are mostly stars or AGN (see Wu & Fabian 1999; Valageas & Silk 1999; Menci & Cavaliere 2000; Loewenstein 2000). Numerical simulations that try to include stellar feedback and cooling still have difficulties in linking the small scale physics to the large scale dynamics of the ICM. Our approach starts simply from the analysis of the thermodynamics of the ICM as it is seen in local, and possibly distant clusters, and attempts to answer the most direct questions. We do not build a heating scenario a priori, rather we want to investigate possible scenarios by directly studying the thermodynamic state of the ICM. A key question that we address here is what if the gas has been heated before or after the collapse of the X–ray halo, or, in other terms, at low (with a relatively small energy input) or at large densities (with a larger energy budget)? Here we briefly review the consequences for a scenario where stars provide most of the heating energy.
As described in §1, the entropy level measured in high–$`z`$ Ly$`\alpha `$ clouds is low with respect to the level observed in clusters of galaxies. From Figure 10b of Ricotti et al. 2000, we can interpolate $`K_{Ly_\alpha }\simeq 1.6\times 10^{-2}(1+z)^{-1}\times 10^{34}`$ erg g<sup>-5/3</sup> cm<sup>2</sup>; higher values of this entropy would make the IGM invisible in absorption. Thus, we estimate that the ratio of the entropy $`K_{cl}`$ observed in the clusters to that observed in Ly$`\alpha `$ is $`K_{cl}/K_{Ly_\alpha }\simeq 10(1+z)`$. This difference is even larger for the higher values of $`K_{}`$ that enable a good fit to the local $`L`$–$`T`$ relation and are still allowed by current data.
If we assume that the gas seen in the Ly$`\alpha `$ clouds is representative of the majority of the IGM, we are witnessing a clear evolution in the equation of state of the diffuse baryons going from the low densities of the Ly$`\alpha `$ clouds to the higher densities of the X–ray halos. In the framework of the entropy model, this indicates that the IGM undergoes substantial heating just before or after being accreted into the potential wells of groups and clusters. Additionally, the chemical properties of the IGM seen in the Ly$`\alpha `$ forest are different from those of the ICM in clusters, indicating that the ICM is affected by star formation processes and chemical enrichment, with a commensurate amount of entropy production. The subsequent questions are: when does this heating occur? Can the star formation processes do the job? To determine if star formation processes can be solely responsible for the excess entropy or just provide a minor contribution, a clear correlation between the epoch of star formation and that of the excess entropy production must be established.
From the entropy profiles of local clusters we can extract temporal information. We predict a major feature of these profiles to be the transition between a shock induced power law and a central (adiabatic) entropy core. When the entropy plateau is eroded by cooling, especially in larger halos, the central entropy level can no longer be directly related to the initial $`K_{}`$ value. However, the transition between the shock and the adiabatic regime is still a relevant and robust feature, marked by a change of slope in the entropy profile. Meaningful quantities are this transition radius $`r_t`$, and the baryonic mass enclosed within the radius itself, $`M_{ad}`$. These two quantities, in the external heating scenario, are almost constant between clusters and groups, with a clear dependence on the parameter $`K_{}`$ (see Figure 7). The baryonic fraction, of course, refers only to the diffuse, hot gas, and not to gas that may have cooled and sunk to the center. So, as a first approximation, the measure of $`M_{ad}`$ or $`r_t`$ at several mass scales will provide a test of the simplest external scenario with a single value for $`K_{}`$, and at the same time an indication on the level of $`K_{}`$ itself. Moreover, in larger halos, the internal adiabatic core has been accreted at higher redshift (third panel of Figure 7). Detecting the presence of the adiabatic transition in large halos, will put a lower limit to the redshift when the entropy $`K_{}`$ must be already in place.
In the internal case, a transition radius is not defined. Rather, the non–gravitational entropy contribution is simply superimposed on the shocked profile. We do not consider a physical model for the internal heating, and thus we do not make specific predictions for the corresponding entropy profile. Nevertheless, the detection of a break in the entropy profile, together with a constant baryonic mass enclosed within $`r_t`$, would favour the external scenario. In this way the measure of $`M_{ad}`$ and $`r_t`$ can significantly constrain this model and probe whether the stellar populations provide the bulk of the excess entropy. These pieces of information can be combined with other measures from different wavelengths to better evaluate the contribution of star formation processes to the global heating.
The final part of this discussion is devoted to the effect of substructures in the infalling medium. Some of the baryons can be shocked in small sub–halos before being accreted by the main progenitor. Such a gravitationally–produced entropy would raise the average entropy level around large clusters of galaxies, with respect to the average value $`K_{}`$ in the non–shocked gas. However, the entropy which is generated in these gravitational processes does not break the self similarity, since it always scales with the mass of the accreting halo. In other words, shock heating processes such as these are not able to generate an entropy plateau in the center of X–ray halos. Moreover, in the presence of a minimum entropy $`K_{}`$, such external gravitational contribution rapidly vanishes at small scales, since the satellites of smaller halos are correspondingly smaller and unable to shock the baryons. Thus, the net effect of a moderate amount of substructure around halos is to enhance the detectability of the gas without changing its entropy.
In the present treatment we are not including the bow shocks developed through the merging of cluster subunits of comparable mass (indicated by ASCA and ROSAT observations, cf. Henriksen & Markevitch 1996, Donnelly et al. 1998). In this case strong non-equilibrium features appear (hot spots) and the plasma is vigorously stirred. However, the occurrence of large, violent mergers is expected to be relatively rare in (for example) Cold Dark Matter dominated cosmologies within the framework of the extended Press-Schechter theory. Most of the mass growth of a typical cluster occurs by accretion of small clumps and diffuse matter onto a main progenitor, the relative amounts of which depend on the details of the cosmology and mass power spectrum. In most CDM models, dynamically quiet clusters always constitute a significant fraction of the total population, especially at $`z=0`$. It is these systems that are most likely to exhibit well defined accretion shocks. The picture is different at redshift $`z1`$, where the accretion rate and then the rate of massive merger are about an order of magnitude larger with respect to $`z=0`$. We estimate (from the extended PS theory) that the average number of major mergers (i.e., with a mass ratio larger than $`0.3`$) occurring within $`1`$ Gyr, is $`<0.1`$ at $`z=0`$ and $`0.30.6`$ at $`z=1`$ in a $`\mathrm{\Lambda }`$CDM cosmology. Thus the fraction of X–ray halos possibly affected by massive merger increases dramatically at high redshifts. Such a population of halos would ideally be modelled with hydrodynamical simulations, which can capture the full three dimensional complexity of the processes.
## 5 Conclusions
In this work we have described how to investigate the thermodynamics of the intra-cluster medium by resolving the entropy distribution within X–ray halos. The ability to spatially and spectrally resolve nearby groups and clusters ($`z0.1`$) with current X–ray satellites, can provide many crucial observations: the measure of the entropy level of the non–shocked gas at $`z0`$ around clusters of galaxies; the detection of the extended, low surface brightness emission at large radii in groups, and the transition from the adiabatic to the shock regime imprinted in the inner entropy profile of X–ray halos (expected at about $`r_t0.20.4h^1`$ Mpc in the external scenario). Such observations will help in probing the two basic scenarios adopted here to describe the non–gravitational heating of the ICM: external and internal.
In the simpler case of the external scenario, the only free parameter is the entropy excess $`K_{}`$ initially present in the diffuse IGM. In this case, entropy profiles will statistically constrain the value of $`K_{}`$ and can be used to put a lower limit on the redshift of the heating. This will allow an estimation of the epoch and energetics of the heating process itself, and will help to answer the question of whether or not the star formation processes are responsible for the bulk of the entropy excess.
One of the most exciting possibilities is the direct measurement of the external entropy from the emission of the accreting IGM just outside the shock radius of very massive clusters.
In detail, the detection of the external entropy of the pre–shocked gas requires a measurement of both the surface brightness and temperature of cluster gas around the shock radius. With such data, the entropy profile across the shock can be derived, and hence the thermodynamic state of both the ICM and the IGM. The detection of accretion shock signatures in rich clusters, together with the observation of constant entropy profiles in groups, would be consistent with the hypothesis of an excess entropy in the external IGM, accreted by dark matter halos. In particular, if the measured entropy level in the gas around clusters and in groups is similar, the simple scenario of homogeneous entropy production in the IGM at high redshifts will be strongly supported. This would simultaneously help constrain physical models for the generation of the entropy.
We describe simulated observations of clusters and groups with XMM to assess feasibility. For two representative values assumed in the external entropy, we show how to detect the low–surface brightness gas at large radii both in large and small halos. In particular, in large halos a discontinuity in the entropy may be visible, corresponding to the shock radius, while in small halos, a continuous isentropic distribution is expected, possibly extending to very large radii.
A failure to detect the excess entropy in outer, non–shocked gas in massive clusters, would favour the internal scenario, in which the excess entropy is produced within the X–ray halo after the accretion, thus when the gas has already reached higher densities. In this case the energy budget required to attain the same entropy excess at $`z=0`$ is much higher (more than $`1`$ keV, with respect to the $`0.1`$ keV required in the external scenario). Moreover, the internal scenario may leave an imprint in the internal entropy profile of X–ray halos which is at variance with the profiles $`Kr^{1.1}`$ expected in the external scenario. Indeed, the capabilities of current X-ray satellites may be sufficient to image the structure of enhanced internal entropy production.
Such observations would therefore provide crucial information at the confluence of many different physical processes involving both baryons and dark matter, that put in a common perspective an enormous amount of data, both in the optical and the X-ray band. At present there are no other viable observations which can connect the entropy of the IGM detected, e.g., in the Ly$`\alpha `$ forest with the entropy level required to explain X–ray constraints from galaxy clusters and groups. We show how an instrument such as XMM can relatively easily perform the necessary measurements and hope this work encourages future observations which will directly test the cluster physics described here.
###### Acknowledgements.
We thank Megan Donahue and David Strickland for their help in the use of XSELECT and XSPEC. We acknowledge interesting discussion with the participants in the Milano 1999 workshop on “Evolution of Galaxies in Clusters”, especially A. Babul, R. Bower, and N. Menci. We thank also the anonymous referee for stimulating comments. This work has been supported by NASA grants NAG 8-1133. CAS acknowledges the support of NASA grant NAG 5-3257.
# New Dwarf Galaxies in the IC342/Maffei Group
## 1 Introduction
Due to its position within the zone of avoidance, between the Andromeda region of the Local Group and the M81 group, the IC342/Maffei group has been recognized only lately as a group. Since 1994 a great number of dwarf galaxies have been discovered in this area with the growing interest in galaxies in the zone of avoidance. There have been blind HI surveys and optical searches for galaxies in the area. Recent discoveries of Dwingeloo 1 (Kraan-Korteweg et al. 1994, Huchtmeier et al. 1995), Dwingeloo 2 (Burton et al. 1996), Cas 1 (Huchtmeier et al. 1995), MB1 and 2 (McCall and Buta 1995, McCall et al. 1995), Cam B (Huchtmeier et al. 1997), MB3 (McCall and Buta 1997) have increased the number of known galaxies in this group considerably. Here we report HI-detection of the dwarf galaxies Camelopardalis D, Perseus A and B, and of Draco A in the area of the M81 group.<sup>1</sup> Possibly more detections have been reported by Rivers 1998. The IC342/Maffei group is the nearest group outside the Local Group with a photometric distance of 2.2 Mpc.
A new list of candidates of nearby dwarf galaxies from the sky survey of surface brightness dwarf galaxies based on the POSS II and ESO/SERC films has been searched for HI emission with the 100-m radiotelescope at Effelsberg. So far two lists of the Karachentsev survey have been published (Karachentseva and Karachentsev 1998, Karachentseva et al. 1999), HI observations of some of these new dwarf galaxies have been reported (Huchtmeier et al. 1997, Huchtmeier et al. 1999). Among those newly discovered galaxies three are situated in the IC342/Maffei group according to their position and radial velocity.
## 2 Observations
Observations were performed with the 100-m radio telescope at Effelsberg which has a half power beam width (HPBW) of 9.3’ at the wavelength of 21-cm. Observations have been obtained in the total power mode combining the on-source position with a reference field. A bandwidth of 3.125 MHz was split into four channels, yielding a channel spacing of 12.2 kHz and a resolution of 3.1 km s<sup>-1</sup> (or 5.1 km s<sup>-1</sup> after Hanning smoothing). For all galaxies four additional positions, one beam width off the central position in R.A. and Dec., have been observed to check for the extent of the HI. The HI emission was centered on the optical positions and barely extended compared to the HPBW. The profiles are shown in Fig. 1, the observed HI parameters in Table 1. Apart from Cam D all profiles seem partially confused by local HI, which is seen best for Perseus A and B.
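For reference, the quoted velocity resolution follows from the channel spacing via $`\mathrm{\Delta }v=c\mathrm{\Delta }\nu /\nu _{\mathrm{HI}}`$. The small sketch below assumes the standard autocorrelator resolution factors ($`1.21`$ times the channel spacing for uniform weighting and $`2.0`$ times after Hanning smoothing); these factors are our assumptions and are not stated in the text.

```python
# Consistency check of the spectrometer parameters quoted above.  The resolution
# factors (1.21x for uniform weighting, 2.0x after Hanning smoothing) are assumed
# standard autocorrelator values, not numbers taken from the text.
C_KMS = 299792.458      # speed of light, km/s
NU_HI = 1420.406e6      # HI rest frequency, Hz
d_nu  = 12.2e3          # channel spacing, Hz

dv_chan = C_KMS * d_nu / NU_HI
print(f"channel spacing      : {dv_chan:.2f} km/s")
print(f"resolution (uniform) : {1.21 * dv_chan:.1f} km/s")   # ~3.1 km/s, as quoted
print(f"resolution (Hanning) : {2.00 * dv_chan:.1f} km/s")   # ~5.1 km/s, as quoted
```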
## 3 Discussion
The clustering of galaxies around IC342 and Maffei 1 (Fig. 2) within the zone of avoidance along the supergalactic equator in addition to the similar corrected radial velocities (Table 1) suggests strongly a typical group of galaxies. In the past greatly different distances have been quoted for this group , see discussion by Krismer et al. (1995) and by McCall and Buta (1997). In recent years photometric distances have been derived for 10 galaxies in this group (Karachentsev and Tikhonov 1993, 1994, Karachentsev et al. 1997, and unpublished work). These distances agree quite well with each other and yield a distance of 2.2$`\pm 0.5`$ Mpc for the IC342/Maffei group. At such a close distance it might have played a significant role in the dynamical evolution of the Local Group (McCall 1986, 1989, Zheng et al. 1991, Valtonen et al. 1993, Peebles 1994).
###### Acknowledgements.
The NED database is supported at IPAC by NASA. This work has been partially supported by the Deutsche Forschungsgemeinschaft (DFG) under project no. 436 RUS 113/470/0.
# PHASE ORDERING IN CHAOTIC MAP LATTICES WITH CONSERVED DYNAMICS
## Acknowledgements
We thank G. Gonnella for valuable suggestions. We also thank D. Caroppo and G. Nardulli for useful discussions.
Figure Captions
Figure 1: Snapshots of the ordering system. Black (white) pixels correspond to $`\sigma =1`$ ($`-1`$). Lattice of $`100\times 100`$ sites, $`\beta =10`$ and iteration times $`t=0`$ (a), $`t=200`$ (b), $`t=25,000`$ (c), $`t=1,000,000`$ (d).
Figure 2: Time evolution of the domain size $`R(t)`$ (a) and persistence $`p(t)`$ (b) at zero temperature. Solid lines are best linear fits.
Figure 3: The estimated growth exponent $`z`$ versus the temperature $`T=1/\beta `$.
Figure 4: Scaling collapse of the correlation function for $`\beta =\mathrm{}`$ (circles), $`\beta =20`$ (squares) and $`\beta =10`$ (triangles). Correlations at eight times, equally spaced in the interval \[$`10^4`$, $`4\times 10^4`$\], are shown.
# Supernatural Supersymmetry: Phenomenological Implications of Anomaly-Mediated Supersymmetry Breaking
## I Introduction
Signals from supersymmetry (SUSY) are important targets for particle physics experiments. These signals range from the direct discovery of supersymmetric particles at high energy colliders to indirect signals at lower energy experiments through measurements of flavor-changing processes, magnetic and electric dipole moments, and so on. The set of possible signals and the promise of individual experiments for SUSY searches depend strongly on what model of SUSY breaking is assumed. It is therefore important to understand the characteristic features and predictions of well-motivated SUSY breaking scenarios.
Probably the most well-known scenario is that of SUSY breaking in the supergravity framework, i.e., “gravity-mediated” SUSY breaking. In this framework, SUSY breaking originates in a hidden sector and is transmitted to the observable sector through Planck scale-suppressed operators. In particular, soft masses for squarks, sleptons, and Higgs bosons are induced by direct Kähler interactions between hidden and observable sector fields. Unfortunately, these Kähler interactions are not, in general, flavor-diagonal. Squark and slepton mass matrices therefore typically have large flavor mixings, and these induce unacceptably large flavor-changing processes, such as $`K^0`$-$`\overline{K}^0`$ mixing and $`\mu \to e\gamma `$. These difficulties, together commonly referred to as the SUSY flavor problem, may be avoided if the Kähler potential is somehow constrained to be flavor-diagonal. Gauge-mediated SUSY breaking is one proposal for solving this problem.
Recently the mechanism of “anomaly-mediated” SUSY breaking has been proposed as a possibility for generating (approximately) flavor-diagonal squark and slepton mass matrices . In this scenario, SUSY is again broken in a hidden sector, but it is now transmitted to the observable sector dominantly via the super-Weyl anomaly . Gaugino and scalar masses are then related to the scale dependence of the gauge and matter kinetic functions. For first and second generation fields, whose Yukawa couplings are negligible, wavefunction renormalization is almost completely determined by gauge interactions. Their anomaly-mediated soft scalar masses are thus almost diagonal, and the SUSY flavor problem is solved. Note that this solution requires that the anomaly-mediated terms be the dominant contributions to the SUSY breaking parameters. This possibility may be realized, for example, if SUSY breaking takes place in a different world, i.e., on a brane different from the 3-brane of our world, and direct Kähler couplings are thereby suppressed .
As will be discussed below, the expressions for anomaly-mediated SUSY breaking terms are scale-invariant. Thus, they are completely determined by the known low energy gauge and Yukawa couplings and an overall mass scale $`M_{\text{aux}}`$. Anomaly-mediated SUSY breaking is therefore highly predictive, with fixed mass ratios motivating distinctive experimental signals, such as macroscopic tracks from highly degenerate Wino-like lightest supersymmetric particles (LSPs) . Unfortunately, one such prediction, assuming minimal particle content, is that sleptons are tachyons. Several possible solutions to this problem have already been proposed . We will adopt a phenomenological approach, first taken in Ref. , and assume that the anomaly-mediated scalar masses are supplemented by an additional universal contribution $`m_0^2`$. For large enough $`m_0`$, the slepton squared masses are positive. Along with the requirement of proper electroweak symmetry breaking, this defines the minimal anomaly-mediated model in terms of only 3+1 parameters: $`M_{\text{aux}}`$, $`m_0`$, $`\mathrm{tan}\beta `$, and $`\text{sign}(\mu )`$, where $`\mathrm{tan}\beta `$ is the ratio of Higgs vacuum expectation values (VEVs), and $`\mu `$ is the Higgsino mass parameter. The simplicity of this model allows one to thoroughly examine all of parameter space.
In this paper, we present a detailed study of the phenomenology of the minimal anomaly-mediated model. We begin in Sec. II with a brief discussion of the mechanism of anomaly-mediated SUSY breaking. In Sec. III we review the tachyonic slepton problem and the universal $`m_0`$ “solution,” and present in detail the minimal anomaly-mediated model described above. The universal scalar mass $`m_0`$ breaks the simple scale invariance of expressions for soft terms. However, this breaking is rather minimal, in a sense to be explained, and the minimal anomaly-mediated model inherits several simple properties from the pure anomaly-mediated case.
The naturalness of this model is examined in Sec. IV. We find that the minimal anomaly-mediated model exhibits a novel renormalization group (RG) “focus point” (as opposed to fixed point) behavior, which allows slepton and squark masses to be well above their usual naturalness bounds. The title “supernatural supersymmetry” derives from this feature and the envisioned other-worldly SUSY breaking.
We then turn in Sec. V to high-energy experimental implications. We explore the parameter space and find a variety of interesting features, including 3 possible LSP candidates: a degenerate triplet of Winos, the lighter stau $`\stackrel{~}{\tau }_1`$, and the tau sneutrino $`\stackrel{~}{\nu }_\tau `$. The Wino LSP scenario is realized in a large fraction of parameter space and has important new implications for both collider physics and cosmology . We find that naturalness and electroweak symmetry breaking favor light Winos with the smallest possible mass splittings, i.e., the ideal region of parameter space for Wino searches and within the discovery reach of Run II of the Tevatron.
While anomaly-mediated models have the virtue that they predict very little flavor-changing in the first and second generations, they are not therefore automatically safe from all low-energy probes. In Sec. VI we analyze several sensitive low-energy processes: $`bs\gamma `$, which probes flavor-changing in the third generation, and three important flavor-conserving observables, the anomalous magnetic dipole moment of the muon, and the electric dipole moments of the electron and neutron.
Our conclusions and final remarks are collected in Sec. VII. In the Appendix, we present expressions for anomaly-mediated SUSY breaking terms in a general supersymmetric theory and also the full flavor-dependent expressions for the specific case of the minimal anomaly-mediated model.
## II Anomaly-Mediated Supersymmetry Breaking
In supergravity, SUSY breaking parameters always receive anomaly-mediated contributions. However, in the usual gravity-mediated SUSY breaking scenario, SUSY breaking masses also arise from direct interactions of observable sector fields with hidden sector SUSY breaking fields. Such contributions are usually comparable to the gravitino mass, and so anomaly-mediated contributions, which are loop-suppressed relative to the gravitino mass, are sub-leading. However, in a model with no direct coupling between observable and hidden sectors, the anomaly-mediated terms can be the dominant contributions. In this paper, we assume that this is the case, and that the anomaly-mediated terms are (one of) the leading contributions to the SUSY breaking parameters. This is realized, for example, in the “sequestered sector” model of Ref. , where the SUSY breaking sector and the observable sector are assumed to lie on different branes, thereby suppressing direct observable sector-hidden sector couplings.
In global SUSY, the (loop-corrected) effective Lagrangian may be written as
$`_{\mathrm{global}}(\mathrm{},\mathrm{\Lambda }_{\mathrm{cut}}^{},\mathrm{\Lambda }_{\mathrm{cut}})`$ $`=`$ $`{\displaystyle \frac{1}{4}}{\displaystyle d^2\theta \left[\frac{1}{g^2}\frac{b}{8\pi ^2}\mathrm{log}(\mathrm{}^{1/2}/\mathrm{\Lambda }_{\mathrm{cut}})\right]W^\alpha W_\alpha }+\mathrm{h}.\mathrm{c}.`$ (3)
$`+{\displaystyle d^4\theta Z_\varphi (\mathrm{},\mathrm{\Lambda }_{\mathrm{cut}}^{}\mathrm{\Lambda }_{\mathrm{cut}})\varphi ^{}\varphi }`$
$`+{\displaystyle d^2\theta Y\varphi ^3}+\mathrm{h}.\mathrm{c}.+\mathrm{},`$
where $`W^\alpha `$ and $`\varphi `$ are the gauge field strength and chiral superfields, respectively. Here $`b`$ is the $`\beta `$-function coefficient for the gauge coupling constant $`g`$, $`Z_\varphi `$ is the wavefunction renormalization factor of $`\varphi `$, $`Y`$ is the Yukawa coupling constant, and $`\mathrm{\Lambda }_{\mathrm{cut}}`$ is the cut-off of the theory.
However, once we consider local SUSY, i.e., supergravity, this expression is modified. The most important modification for our argument results from the fact that, in global SUSY, $`\Box `$ is given by $`g^{\mu \nu }\partial _\mu \partial _\nu `$. In supergravity, $`g^{\mu \nu }`$ becomes a dynamical field and is part of the supergravity multiplet. $`\Box `$ must therefore be promoted to an object compatible with supergravity. The complete expression for $`\Box `$ is complicated. However, since we are interested only in the SUSY breaking terms, our task is simplified. Perhaps the easiest prescription for deriving the SUSY breaking terms is to introduce the compensator superfield $`\mathrm{\Phi }`$, whose VEV is given by
$`\mathrm{\Phi }=1M_{\text{aux}}\theta ^2.`$ (4)
Here $`M_{\mathrm{aux}}`$ is proportional to the VEV of an auxiliary field in the supergravity multiplet and is of order the gravitino mass after SUSY breaking. With this compensator field, all of the terms relevant for calculating the anomaly-mediated SUSY breaking parameters are contained in the Lagrangian<sup>*</sup> We assume there are no Planck scale VEVs.
$`_{\mathrm{SUGRA}}_{\mathrm{global}}(\mathrm{},\mathrm{\Lambda }_{\mathrm{cut}}^{}\mathrm{\Phi }^{},\mathrm{\Lambda }_{\mathrm{cut}}\mathrm{\Phi }).`$ (5)
Because $`\Box `$ appears in Eq. (3) only through terms $`\Box ^{1/2}/\mathrm{\Lambda }_{\mathrm{cut}}`$ and $`\Box ^{1/2}/\mathrm{\Lambda }_{\mathrm{cut}}^{}`$, the replacement of $`\Box `$ by its supergravity generalization is effectively carried out by the replacement $`\mathrm{\Lambda }_{\mathrm{cut}}\to \mathrm{\Lambda }_{\mathrm{cut}}\mathrm{\Phi }`$.
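To illustrate how the gaugino mass below arises from this replacement (schematically, and with signs fixed by the convention of Eq. (4)), note that $`\mathrm{log}\mathrm{\Phi }`$ contributes a $`\theta ^2`$ term of magnitude $`M_{\text{aux}}`$, so the gauge kinetic function in Eq. (3) acquires a $`\theta ^2`$ component of size $`(b/8\pi ^2)M_{\text{aux}}`$. A gauge kinetic function $`f`$ with lowest component $`1/g^2`$ and $`\theta ^2`$ component $`F`$ yields, after canonical normalization, a gaugino mass $`|F|/(2\mathrm{Re}f)`$; this gives $`|M_\lambda |=|b|g^2M_{\text{aux}}/16\pi ^2`$, reproducing Eq. (6) below.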
Expanding the above Lagrangian with the VEV of $`\mathrm{\Phi }`$ given in Eq. (4), and solving the equation of motion for the auxiliary component of $`\varphi `$, the anomaly-mediated contributions to the gaugino mass $`M_\lambda `$, scalar squared mass $`m^2`$, and trilinear scalar coupling $`A`$ are
$`M_\lambda |_{\mathrm{AM}}`$ $`=`$ $`{\displaystyle \frac{1}{16\pi ^2}}bg^2M_{\text{aux}}`$ (6)
$`m^2|_{\mathrm{AM}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}\dot{\gamma }M_{\text{aux}}^2`$ (7)
$`A|_{\mathrm{AM}}`$ $`=`$ $`{\displaystyle Y\gamma M_{\text{aux}}},`$ (8)
where
$$\gamma \equiv \frac{1}{2}\frac{dZ_\varphi }{d\mathrm{log}\Box ^{1/2}},\dot{\gamma }\equiv \frac{d\gamma }{d\mathrm{log}\Box ^{1/2}}.$$
(9)
Here $`b`$ and $`\gamma `$ are to be evaluated with the supersymmetric field content present at the appropriate scale. In the above formulae, indices have been suppressed. The full expressions for general chiral superfield content may be found in the Appendix.
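As a concrete illustration of Eq. (6), the sketch below evaluates the three MSSM gaugino masses from representative weak–scale gauge couplings (with GUT–normalized hypercharge) and the one–loop coefficients $`b=(33/5,1,-3)`$. The coupling values are rough inputs of our own, and two–loop and threshold corrections, which shift these ratios at the 10–25% level, are ignored; the qualitative point is that the Wino is the lightest gaugino, which underlies the Wino LSP phenomenology discussed in the Introduction.

```python
import numpy as np

# Anomaly-mediated gaugino masses, Eq. (6): M_lambda = b * g^2 / (16 pi^2) * M_aux.
# The couplings below are illustrative weak-scale values (alpha_1 GUT-normalized);
# higher-order corrections are neglected in this sketch.
ALPHA = {"U(1)": 0.017, "SU(2)": 0.034, "SU(3)": 0.118}
B     = {"U(1)": 33.0 / 5.0, "SU(2)": 1.0, "SU(3)": -3.0}

def gaugino_mass(group, M_aux=1.0):
    g2 = 4.0 * np.pi * ALPHA[group]                 # gauge coupling squared
    return B[group] * g2 / (16.0 * np.pi ** 2) * M_aux

M1, M2, M3 = (gaugino_mass(g) for g in ("U(1)", "SU(2)", "SU(3)"))
print(f"|M1| : |M2| : |M3| ~ {abs(M1) / abs(M2):.1f} : 1 : {abs(M3) / abs(M2):.1f}")
print(f"M2 / M_aux ~ {M2:.1e}")    # a few x 1e-3: the Wino is the lightest gaugino
```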
One important feature of this result is that the formulae for the anomaly-mediated SUSY breaking parameters are RG-invariant . The anomaly-induced masses are given as functions of the gauge and Yukawa coupling constants, as shown in Eqs. (6) – (9), and the $`\beta `$-functions for the individual SUSY breaking parameters agree with the $`\beta `$-functions of the right-hand sides whose scale dependences are determined through the gauge and Yukawa coupling RG equations.
## III The Minimal Anomaly-mediated Model
As described in the previous section, in pure anomaly-mediated SUSY breaking, soft terms are determined by RG-invariant expressions involving the gauge and Yukawa couplings. The soft terms are therefore completely fixed by the low energy values of these couplings and an overall scale $`M_{\text{aux}}`$. If a scalar has negligible Yukawa interactions, its squared mass is determined by gauge coupling contributions $`_ib_ig_i^4`$, where the sum is over all gauge groups under which the scalar is charged, and (positive) constants have been omitted (see Appendix). From this form, we see that sleptons, which interact only with non-asymptotically free groups ($`b_i>0`$), have negative squared masses. Tachyonic sleptons are the most glaring problem of the anomaly-mediated scenario.
Several mechanisms for solving the tachyonic slepton problem have been proposed. Additional positive contributions to slepton squared masses may arise from bulk contributions , gauge-mediated-like contributions , new Yukawa interactions , or non-decoupling higher order threshold effects . Here, we adopt a simple phenomenological approach : we assume an additional, universal, non-anomaly-mediated contribution $`m_0^2`$ to all scalars at the grand unified theory (GUT) scale $`M_{\text{GUT}}`$. The resulting boundary conditions,
$`M_\lambda (M_{\mathrm{GUT}})`$ $`=`$ $`M_\lambda |_{\mathrm{AM}}(M_{\mathrm{GUT}})`$ (10)
$`m^2(M_{\mathrm{GUT}})`$ $`=`$ $`m^2|_{\mathrm{AM}}(M_{\mathrm{GUT}})+m_0^2`$ (11)
$`A(M_{\mathrm{GUT}})`$ $`=`$ $`A|_{\mathrm{AM}}(M_{\mathrm{GUT}}),`$ (12)
define the minimal anomaly-mediated model. For large enough $`m_0^2`$, slepton squared masses are therefore positive, and the tachyonic slepton problem is averted. Such a universal term may be produced by bulk interactions , but is certainly not a feature common to all anomaly-mediated scenarios. The extent to which the following results depend on this assumption will be addressed in Sec. VII.
The addition of a non-anomaly-mediated term destroys the feature of RG invariance. However, the RG evolution of the resulting model nevertheless inherits some of the simplicity of the original pure anomaly-mediated relations. Schematically, scalar masses $`m_i`$ satisfy the one-loop RG equations
$$\frac{d}{dt}m_i^2\sim \frac{1}{16\pi ^2}\left[-g^2M_\lambda ^2+A^2+\sum _jY^2m_j^2\right],$$
(13)
where $`t\mathrm{ln}(\mu /M_{\text{GUT}})`$, positive numerical coefficients have been omitted, and the sum is over all chiral fields $`\varphi _j`$ interacting with $`\varphi _i`$ through the Yukawa coupling $`Y`$. Letting $`m_i^2m_i^2|_{\text{AM}}+\delta m_i^2`$, where $`m_i^2|_{\text{AM}}`$ is the pure anomaly-mediated value, the RG invariance of the anomaly-mediated masses implies
$$\frac{d}{dt}\delta m_i^2\sim \frac{1}{16\pi ^2}\sum _jY^2\delta m_j^2.$$
(14)
Thus, at one-loop, the deviations from pure anomaly-mediated relations satisfy simple evolution equations that depend only on the deviations themselves. For scalars with negligible Yukawa couplings, such as the first and second generation squarks and sleptons, the deviation $`\delta m_i^2`$ is a constant of RG evolution. For them, $`\delta m_i^2`$ is simply an additive constant, and the weak scale result for $`m_i^2`$ is independent of the scale at which $`\delta m_i^2`$ is generated. For fields interacting through large Yukawa couplings such as the top Yukawa coupling, the deviations $`\delta m_i^2`$ evolve; however, this evolution is simply analyzed. We will see an important consequence of this evolution for naturalness in Sec. IV.
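To make the top–sector evolution of Eq. (14) explicit, the sketch below integrates the one–loop deviations $`(\delta m_{H_u}^2,\delta m_{U_3}^2,\delta m_{Q_3}^2)`$ from a common value $`m_0^2`$ at $`M_{\text{GUT}}`$ down to the TeV scale. The gauge and top–Yukawa inputs are representative choices for moderate $`\mathrm{tan}\beta `$ (they are our assumptions, not fitted values), and $`A`$–term, bottom and tau Yukawa effects are neglected; the qualitative outcome is that $`\delta m_{H_u}^2`$ is driven to a small fraction of $`m_0^2`$ near the weak scale, previewing the focus point behavior discussed in Sec. IV.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-loop running of the deviations delta m^2 = m^2 - m^2|_AM in the top sector
# (Eq. (14)), starting from a universal value m_0^2 at M_GUT.  Only the top Yukawa
# couples the deviations; the gauge couplings enter only through the running of y_t.
PI2 = 16.0 * np.pi ** 2
M_GUT, M_WEAK = 2.0e16, 1.0e3        # GeV

def rges(t, y):                       # t = ln(mu)
    g1, g2, g3, yt, dHu, dU3, dQ3 = y
    dg1 = (33.0 / 5.0) * g1 ** 3 / PI2
    dg2 = 1.0 * g2 ** 3 / PI2
    dg3 = -3.0 * g3 ** 3 / PI2
    dyt = yt * (6.0 * yt ** 2 - (16.0 / 3.0) * g3 ** 2
                - 3.0 * g2 ** 2 - (13.0 / 15.0) * g1 ** 2) / PI2
    S = dHu + dU3 + dQ3               # the combination that drives Eq. (14)
    return [dg1, dg2, dg3, dyt,
            6 * yt ** 2 * S / PI2, 4 * yt ** 2 * S / PI2, 2 * yt ** 2 * S / PI2]

# Assumed GUT-scale inputs: unified gauge coupling ~0.71 and y_t(M_GUT) ~ 0.6;
# the deviations are measured in units of m_0^2.
y0 = [0.71, 0.71, 0.71, 0.6, 1.0, 1.0, 1.0]
sol = solve_ivp(rges, (np.log(M_GUT), np.log(M_WEAK)), y0, rtol=1e-8)
dHu, dU3, dQ3 = sol.y[4:, -1]
print(f"delta m_Hu^2 / m_0^2 ~ {dHu:+.2f}")   # close to zero: the 'focus point'
print(f"delta m_U3^2 / m_0^2 ~ {dU3:+.2f}")
print(f"delta m_Q3^2 / m_0^2 ~ {dQ3:+.2f}")
```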
We will assume that the boundary conditions of Eq. (11) are given at $`M_{\mathrm{GUT}}=2\times 10^{16}\mathrm{GeV}`$. The SUSY breaking parameters are then evolved with one-loop RG equations to the superparticle mass scale $`m_{\mathrm{SUSY}}`$, which we have approximated to be the squark mass scale. For the gaugino mass parameters, we also include the largest next-to-leading order corrections from $`\alpha _s`$ and $`\alpha _t\equiv y_t^2/4\pi `$ given in Ref. .
All parameters of the theory are then specified, except for $`\mu `$, the Higgsino mass parameter, and $`B_\mu `$, the soft bilinear Higgs coupling. We do not specify the mechanism for generating these parameters, but assume that they are constrained so that electroweak symmetry is properly broken. Given the other soft parameters at $`m_{\mathrm{SUSY}}`$, the Higgs potential is determined by $`\mu `$ and $`B_\mu `$, or alternatively, by the Fermi constant $`G_\mathrm{F}=[2\sqrt{2}(\langle H_u^0\rangle ^2+\langle H_d^0\rangle ^2)]^{-1}\simeq 1.17\times 10^{-5}\mathrm{GeV}^{-2}`$ (or, equivalently, the $`Z`$ mass) and $`\mathrm{tan}\beta =\langle H_u^0\rangle /\langle H_d^0\rangle `$. It is more convenient to use the latter two as inputs; $`\mu `$ and $`B_\mu `$ are then fixed so that the Higgs potential has a proper minimum with correct $`G_\mathrm{F}`$ and $`\mathrm{tan}\beta `$. We minimize the Higgs potential at one-loop, including radiative corrections from third generation quarks and squarks , but neglecting radiative corrections from other particles.
In fact, the constraint of proper electroweak symmetry breaking does not determine the sign of the $`\mu `$ parameter. (In general, $`\mu `$ is a complex parameter, and its phase cannot be determined from the radiative breaking condition. In Sec. VI C, we consider the implications of complex $`\mu `$. However, in the rest of the paper, we assume that $`\mu `$ is real.) In the anomaly-mediated framework, there are several models in which all CP-violating phases in SUSY parameters are absent . The entire parameter space of the minimal anomaly-mediated model is therefore specified by 3+1 parameters:
$$M_{\text{aux}},m_0,\mathrm{tan}\beta ,\text{and}\text{sign}(\mu ).$$
(15)
## IV Naturalness
Supersymmetric theories are considered natural from the point of view of the gauge hierarchy problem if the electroweak scale is not unusually sensitive to small variations in the underlying parameters. There are a variety of prescriptions for quantifying naturalness with varying degrees of sophistication . For the present purposes, we simply consider a set of parameters to be natural if no large cancellations occur in the determination of the electroweak scale. At tree-level, the relevant condition is
$$\frac{1}{2}m_Z^2=\frac{m_{H_d}^2-m_{H_u}^2\mathrm{tan}^2\beta }{\mathrm{tan}^2\beta -1}-\mu ^2,$$
(16)
where $`m_{H_u}^2`$ and $`m_{H_d}^2`$ are the soft SUSY breaking masses for up- and down-type scalar Higgses. Naturalness then requires that $`|\mu |`$ as determined from electroweak symmetry breaking not be too far above the electroweak scale. A typical requirement is $`|\mu |\lesssim 1\text{ TeV}`$.
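As a quick numerical illustration of how Eq. (16) determines $`|\mu |`$, the short Python sketch below solves the tree-level condition for $`\mu ^2`$ given assumed weak-scale soft Higgs masses and $`\mathrm{tan}\beta `$; the input numbers are invented for illustration and are not taken from Fig. 1.

```python
import math

m_Z = 91.19  # GeV

def mu_squared(m2_Hu, m2_Hd, tan_beta):
    """Tree-level EWSB condition, Eq. (16), solved for mu^2 (in GeV^2)."""
    tb2 = tan_beta ** 2
    return (m2_Hd - m2_Hu * tb2) / (tb2 - 1.0) - 0.5 * m_Z ** 2

# Illustrative weak-scale inputs (not from the paper's figures):
for m2_Hu, m2_Hd, tb in [(-(300.0) ** 2, (500.0) ** 2, 10.0),
                         (-(800.0) ** 2, (900.0) ** 2, 10.0)]:
    mu2 = mu_squared(m2_Hu, m2_Hd, tb)
    print(f"tan(beta) = {tb}: |mu| = {math.sqrt(mu2):.0f} GeV")
```

In this form it is also clear that, for $`\mathrm{tan}^2\beta \gg 1`$, $`\mu ^2`$ essentially tracks $`-m_{H_u}^2`$, which is the observation exploited below.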
In Fig. 1 we present values of $`\mu `$ in the $`(m_0,M_{\text{aux}})`$ plane for three representative values of $`\mathrm{tan}\beta `$: 3 (low), 10 (moderate), and 30 (high). We have chosen $`\mu <0`$ to avoid constraints from $`b\to s\gamma `$ at large $`\mathrm{tan}\beta `$ (see Sec. VI A), but similar $`|\mu |`$ are found for $`\mu >0`$. The parameter $`M_{\text{aux}}`$ is not phenomenologically transparent, and so on the right-hand axis, we also give approximate values of the Wino mass $`M_2`$, using $`M_2=\frac{g_2^2}{16\pi ^2}M_{\text{aux}}\simeq 2.9\times 10^{-3}M_{\text{aux}}`$.
The value of $`|\mu |`$ rises with increasing $`M_{\text{aux}}`$, as expected. Irrespective of $`m_0`$ and $`\mathrm{tan}\beta `$, $`|\mu |\lesssim 1\text{ TeV}`$ implies $`M_2\lesssim 200\text{ GeV}`$. Such a restriction is encouraging for searches for degenerate Winos at upcoming runs of the Tevatron, as will be discussed more fully in Sec. V A.
The special case of $`m_0=0`$ corresponds to pure anomaly-mediated SUSY breaking. In this case, the expressions for soft SUSY breaking terms are RG-invariant and the soft masses may be evaluated at any scale, including a low (TeV) scale. Based on this observation, it has been argued that, since the stop masses do not enter the determination of $`m_{H_{u,d}}^2`$ with large logarithms through RG evolution, stop masses of $`2\text{ TeV}`$ or even higher are consistent with naturalness . This is contradicted by Fig. 1: for $`m_0=0`$, as will be seen in Sec. V C, stop masses of $`2\text{ TeV}`$ require very large $`M_{\text{aux}}`$ corresponding to values of $`|\mu |`$ above 2 TeV. Stop masses of 2 TeV are therefore as unnatural in pure anomaly-mediated SUSY breaking as they are in more conventional gravity-mediated scenarios, such as minimal supergravity. This applies to all cases where the pure anomaly-mediated relations are approximately valid for squark and Higgs soft masses, and includes models in which a mechanism for avoiding tachyonic sleptons is invoked which does not disturb the squark and Higgs masses.
For the minimal anomaly-mediated model with $`m_0>0`$, however, the squark and Higgs masses are explicitly modified, and the argument above does not apply. It is exactly in this case, where the soft SUSY masses are not RG-invariant, that there is the possibility that heavy squarks can be consistent with naturalness, and we will see that, in fact, this is realized by a novel mechanism for large $`m_0`$.
In Fig. 1, for $`\mathrm{tan}\beta =3`$, an upper bound on $`|\mu |`$ implies an upper bound on $`m_0`$. However, for moderate and large $`\mathrm{tan}\beta `$, the contours of constant $`|\mu |`$ are extremely insensitive to $`m_0`$, and so large squark and slepton masses are consistent with naturalness in the large $`m_0`$ regime. (In Ref. , the insensitivity of $`|\mu |`$ to $`m_0`$ is implicit in Fig. 1; its implications for naturalness were not noted.) This behavior may be understood first by noting that, for moderate and large $`\mathrm{tan}\beta `$, Eq. (16) implies that $`\mu `$ depends sensitively on $`m_{H_u}^2`$ only. The RG evolution of $`m_{H_u}^2`$ is most easily understood by letting $`m_{H_u}^2\equiv m_{H_u}^2|_{\text{AM}}+\delta m_{H_u}^2`$, where $`m_{H_u}^2|_{\text{AM}}`$ is the pure anomaly-mediated value, and similarly for all other scalar masses. The deviations $`\delta m_i^2`$ satisfy simple RG equations, as discussed in Sec. III. For $`\mathrm{tan}\beta `$ not extremely large, the only large Yukawa is the top Yukawa $`Y_t`$, and $`m_{H_u}^2`$ is determined by the system of RG equations
$$\frac{d}{dt}\left(\begin{array}{c}\delta m_{H_u}^2\\ \delta m_{U_3}^2\\ \delta m_{Q_3}^2\end{array}\right)=\frac{Y_t^2}{8\pi ^2}\left(\begin{array}{ccc}3& 3& 3\\ 2& 2& 2\\ 1& 1& 1\end{array}\right)\left(\begin{array}{c}\delta m_{H_u}^2\\ \delta m_{U_3}^2\\ \delta m_{Q_3}^2\end{array}\right),$$
(17)
where $`Q_3`$ and $`U_3`$ denote the third generation squark SU(2) doublet and up-type singlet representations, respectively.
Such systems of RG equations are easily solved by decomposing arbitrary initial conditions into components parallel to the eigenvectors of the evolution matrix, which then evolve independently . In the present case, the solution with initial condition $`m_0^2(1,1,1)^T`$ is
$$\left(\begin{array}{c}\delta m_{H_u}^2\\ \delta m_{U_3}^2\\ \delta m_{Q_3}^2\end{array}\right)=\frac{m_0^2}{2}\left(\begin{array}{c}3\\ 2\\ 1\end{array}\right)\mathrm{exp}\left[6{\displaystyle \int _0^t}\frac{Y_t^2}{8\pi ^2}𝑑t^{\prime }\right]+\frac{m_0^2}{2}\left(\begin{array}{c}-1\\ 0\\ 1\end{array}\right).$$
(18)
For $`t`$ and $`Y_t`$ such that $`\mathrm{exp}\left[6\int _0^t\frac{Y_t^2}{8\pi ^2}𝑑t^{\prime }\right]=1/3`$, $`\delta m_{H_u}^2=0`$, i.e., $`m_{H_u}^2`$ assumes its pure anomaly-mediated value for any $`m_0`$.
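A minimal numerical sketch of this focusing, assuming a constant effective top Yukawa (in the full analysis $`Y_t`$ runs, which is what places the crossing at the weak scale for the physical top mass): integrating Eq. (17) with universal initial deviations shows $`\delta m_{H_u}^2`$ passing through zero at the scale where the exponential factor of Eq. (18) reaches 1/3, independently of $`m_0`$. The value $`Y_t=0.70`$ used here is chosen only so that the crossing lands near a TeV.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Deviations delta m^2 = (dHu, dU3, dQ3), all equal to m0^2 at the GUT scale,
# run down with a *constant* effective top Yukawa (an assumption).
M_GUT = 2.0e16                              # GeV
Yt = 0.70                                   # illustrative constant value
K = (Yt**2 / (8.0 * np.pi**2)) * np.array([[3., 3., 3.],
                                           [2., 2., 2.],
                                           [1., 1., 1.]])

def rhs(t, dm2):                            # Eq. (17): t = ln(mu / M_GUT) < 0
    return K @ dm2

sol = solve_ivp(rhs, (0.0, -32.0), np.ones(3), dense_output=True, rtol=1e-9)

ts = np.linspace(0.0, -32.0, 2000)
dHu = np.array([sol.sol(t)[0] for t in ts])
t_cross = ts[np.argmin(np.abs(dHu))]
print(f"delta m^2_Hu / m0^2 crosses zero near mu ~ {M_GUT*np.exp(t_cross):.2e} GeV")
print("analytic crossing (exp factor = 1/3): mu =",
      f"{M_GUT*np.exp(np.log(1.0/3.0)*8*np.pi**2/(6*Yt**2)):.2e} GeV")
```

Because the deviations scale linearly with $`m_0^2`$, the printed crossing scale does not move as $`m_0`$ is varied; only the assumed $`Y_t`$ shifts it.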
The RG evolution of $`m_{H_u}^2`$ is shown for several values of $`m_0`$ in Fig. 2. As expected, the RG curves intersect at a single point where $`m_{H_u}^2`$ is independent of $`m_0`$; we will call this a “focus point.” Remarkably, however, the focus point occurs near the weak scale for $`Y_t`$ corresponding to the physical top mass of $`m_{\text{top}}\approx 174\text{ GeV}`$. Thus the weak scale value of $`m_{H_u}^2`$ is nearly its pure anomaly-mediated value for all values of $`m_0`$. Note that this behavior applies only to $`m_{H_u}^2`$; no other scalar mass has a focus point behavior.
The focus point is not a fixed point; for example, below the focus point, the RG curves diverge again. The position of the focus point depends on $`Y_t`$, and we must check the sensitivity to variations in $`Y_t`$. In Fig. 2 we show also the behavior for $`Y_t`$ corresponding to $`m_{\text{top}}=184`$ GeV. The exact weak scale value of $`m_{H_u}^2`$ depends on $`Y_t`$ and, when the focus point is not exactly at the weak scale, also on $`m_0`$. However, for top quark masses near the physical one, the focus point remains within a couple of decades of the weak scale, and the sensitivity to variations in $`m_0`$ is always suppressed. This is demonstrated in Fig. 3, where values of $`\mu `$ are given in the $`(m_0,m_{\text{top}})`$ plane. Even for $`m_0^2=25\text{ TeV}^2`$ and $`m_{\text{top}}=174\pm 5\text{ GeV}`$, we find that $`\mu ^2`$ lies naturally below 2 $`\text{ TeV}^2`$.
An interesting question is whether $`m_0`$ can be bigger than the weak scale by a loop factor without compromising naturalness. If this were the case, there would be no need to appeal to a sequestered sector to eliminate tree-level scalar masses. However, $`m_0`$ cannot be arbitrarily large. In Fig. 3, we see that the requirement of proper electroweak symmetry breaking implies $`m_0\lesssim 5\text{ TeV}`$. In any case, a similar bound would follow from requiring that one-loop finite corrections to the Higgs squared mass parameter, which are proportional to $`m_{\stackrel{~}{f}}^2`$, not introduce large fine-tunings. The maximum allowed $`m_0^2`$ is thus roughly an order of magnitude below $`M_{\text{aux}}^2`$. Thus, while it is possible to eliminate the sequestered sector or mechanism for direct Kähler interaction suppression, it is still required that the tree-level scalar squared mass $`m_0^2`$ be suppressed by an order of magnitude relative to its “natural” value $`M_{\text{aux}}^2`$.
Nevertheless, given that we have no understanding of the source of $`m_0`$, it is at least somewhat reassuring that it may be far above the weak scale without incurring a fine-tuning penalty. A direct consequence of this is that the minimal anomaly-mediated model is a model that naturally accommodates multi-TeV sleptons and squarks. As we will see below, this has important phenomenological consequences both for high energy colliders and low energy probes.
## V Superpartner Spectra and Implications for High Energy Colliders
Having defined the minimal anomaly-mediated model in Sec. III and explored the natural range of its fundamental parameters in Sec. IV, we now consider the resulting masses and mixings of the superpartners. The lightest supersymmetric particles are either a degenerate triplet of charginos and neutralinos, the lighter stau $`\stackrel{~}{\tau }_1`$, or the tau sneutrino $`\stackrel{~}{\nu }_\tau `$. We begin by considering these, and conclude with a discussion of the squark spectrum. We do not discuss the gluino and heavy Higgses in detail. However, their masses are given in Eq. (19) and Figs. 13 and 14, respectively.
### A Charginos and Neutralinos
Charginos and neutralinos are mixtures of gauginos and Higgsinos. Their composition is determined by $`M_2`$, $`M_1`$, $`\mu `$, and $`\mathrm{tan}\beta `$ at tree-level. Inserting the values of the gauge coupling constants at $`m_Z`$ in Eq. (6), and including the largest next-to-leading corrections as described in Sec. III, we find
$$M_1:M_2:M_3\simeq 2.8:1:8.3.$$
(19)
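The ratios in Eq. (19) follow from the leading-order expression of Eq. (57) in the Appendix, $`M_i=b_ig_i^2M_{\mathrm{aux}}/16\pi ^2`$ with $`b_i=(\frac{33}{5},1,-3)`$. The sketch below evaluates this with assumed weak-scale couplings; it omits the next-to-leading corrections included in the text, so the ratios come out somewhat larger than 2.8 : 1 : 8.3.

```python
import math

# Leading-order gaugino masses from Eq. (57), with GUT-normalized b_i.
# Assumed weak-scale couplings (alpha_1 in GUT normalization); the NLO
# corrections used for Eq. (19) in the text are NOT included here.
alpha = {"U(1)": 0.0169, "SU(2)": 0.0338, "SU(3)": 0.118}
b     = {"U(1)": 33.0 / 5.0, "SU(2)": 1.0, "SU(3)": -3.0}

M_aux = 60_000.0   # GeV, illustrative
M = {k: b[k] * 4.0 * math.pi * alpha[k] / (16.0 * math.pi**2) * M_aux
     for k in alpha}

for k, v in M.items():
    print(f"M_{k:5s} = {v:8.1f} GeV")
print("ratios |M1| : M2 : |M3| =",
      f"{abs(M['U(1)'] / M['SU(2)']):.1f} : 1 : {abs(M['SU(3)'] / M['SU(2)']):.1f}")
```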
Typical values of $`(\mu ,M_2)`$ allowed by radiative electroweak symmetry breaking in the minimal anomaly-mediated model are given in Fig. 4. Combined with the anomaly-mediated relation $`M_1\simeq 2.8M_2`$, Fig. 4 implies $`M_2<M_1<|\mu |`$ with substantial hierarchies in these parameters throughout parameter space. The chargino and neutralino mass eigenstates are therefore well-approximated by pure gaugino and pure Higgsino states with masses
$`M_2:`$ $`\stackrel{~}{\chi }_1^0\approx \stackrel{~}{W}^0,\stackrel{~}{\chi }_1^\pm \approx \stackrel{~}{W}^\pm `$ (20)
$`M_1:`$ $`\stackrel{~}{\chi }_2^0\approx \stackrel{~}{B}`$ (21)
$`|\mu |:`$ $`\stackrel{~}{\chi }_{3,4}^0\approx \stackrel{~}{H}_u^0\pm \stackrel{~}{H}_d^0,\stackrel{~}{\chi }_2^\pm \approx \stackrel{~}{H}^\pm ,`$ (22)
and the lightest of these is always a highly degenerate triplet of Winos.
In much of parameter space, as we will see in Sec. V B, these Winos are the LSPs. The possibility of searching for supersymmetry in the Wino LSP scenario has been the subject of much recent attention . The detection of Wino LSPs poses novel experimental challenges. Neutral Winos pass through collider detectors without interacting. Charged Winos are detectable in principle, but are typically highly degenerate with neutral Winos, with $`\mathrm{\Delta }m=m_{\stackrel{~}{\chi }_1^\pm }-m_{\stackrel{~}{\chi }_1^0}\approx 150`$–$`300\text{ MeV}`$ and corresponding decay lengths $`c\tau =0.5`$–$`10`$ cm . They therefore decay to invisible neutral Winos and extremely soft pions before reaching the muon chambers, thereby escaping both conventional searches based on energetic decay products and searches for long-lived charged particles that produce hits in the muon chamber.
Fig. 4, however, has two important and encouraging implications for Wino LSP searches. First, as noted in Sec. IV, naturalness bounds on $`|\mu |`$ imply stringent bounds on $`M_2`$. From Fig. 4, for example, we find that $`|\mu |\lesssim 1\text{ TeV}`$ implies $`M_2\lesssim 200\text{ GeV}`$. Continuing searches at LEP , although limited kinematically to the region $`M_2\lesssim 100\text{ GeV}`$, will be able to probe a significant fraction of this parameter region. In addition, such limits on the Wino mass imply large cross sections at the Tevatron. For $`M_2=200\text{ GeV}`$ and $`\sqrt{s}=2\text{ TeV}`$, the Wino pair production rate is $`\sigma (p\overline{p}\to \stackrel{~}{W}^\pm \stackrel{~}{W}^0,\stackrel{~}{W}^\pm \stackrel{~}{W}^{\mp })\approx 100`$ fb, and if a jet with $`p_T>30\text{ GeV}`$ and $`|\eta |<2`$ is required for triggering, the associated production rate is $`\sigma (p\overline{p}\to \stackrel{~}{W}^\pm \stackrel{~}{W}^0+\text{ jet},\stackrel{~}{W}^\pm \stackrel{~}{W}^{\mp }+\text{ jet})\approx 10`$ fb . Such cross sections imply hundreds of Wino pairs produced at the upcoming Run II, and tens of Wino pairs produced in association with jets.
Second, the region of $`(\mu ,M_2)`$ space favored in Fig. 4 is the far gaugino region, where $`\mathrm{\Delta }m`$ is minimized. For the parameters of Fig. 4, $`\mathrm{\Delta }m<180\text{ MeV}`$, corresponding to decay lengths of $`c\tau >3.5`$ cm. (See Ref. .) Thus, a significant fraction of Winos will pass through several vertex detector layers. When produced in association with a jet for triggering, such Winos will be discovered off-line as high $`dE/dx`$ tracks with no associated calorimeter or muon chamber activity. Such a signal should be spectacular and background-free. This possibility is discussed in detail in Ref. , where an integrated luminosity of 2 $`\text{fb}^{-1}`$ is shown to probe the entire region discussed here with $`|\mu |<1\text{ TeV}`$. It is exciting that Run II of the Tevatron will either discover Wino LSPs or exclude most of the natural region of parameter space in this model.
### B Sleptons
Slepton masses and mixings are given by the mass matrix
$$𝑴_{\stackrel{\mathbf{~}}{𝒍}}^\mathrm{𝟐}=\left(\begin{array}{cc}m_{\stackrel{~}{L}}^2+m_l^2-m_Z^2(\frac{1}{2}-\mathrm{sin}^2\theta _W)\mathrm{cos}2\beta & m_l(A_l-\mu \mathrm{tan}\beta )\\ m_l(A_l-\mu \mathrm{tan}\beta )& m_{\stackrel{~}{E}}^2+m_l^2-m_Z^2\mathrm{sin}^2\theta _W\mathrm{cos}2\beta \end{array}\right)$$
(23)
in the basis $`(\stackrel{~}{l}_L,\stackrel{~}{l}_R)`$, and sneutrino masses are given by
$$m_{\stackrel{~}{\nu }}^2=m_{\stackrel{~}{L}}^2+\frac{1}{2}m_Z^2\mathrm{cos}2\beta ,$$
(24)
where $`m_{\stackrel{~}{L}}^2`$ and $`m_{\stackrel{~}{E}}^2`$ are the soft SUSY breaking masses.
In anomaly-mediated models, as discussed in Ref. , if both $`m_{\stackrel{~}{L}}^2`$ and $`m_{\stackrel{~}{E}}^2`$ receive the same $`m_0^2`$ contribution, the diagonal entries of the slepton mass matrix are accidentally highly degenerate. The anomaly-mediated boundary conditions imply (see the Appendix)
$$𝑴_{\stackrel{\mathbf{~}}{𝒍}}^{}{}_{LL}{}^{\mathrm{𝟐}}-𝑴_{\stackrel{\mathbf{~}}{𝒍}}^{}{}_{RR}{}^{\mathrm{𝟐}}=\frac{3}{2}\left(\frac{g_2^2M_{\text{aux}}}{16\pi ^2}\right)^2\left[11\mathrm{tan}^4\theta _W-1\right]+m_Z^2\left[2\mathrm{sin}^2\theta _W-\frac{1}{2}\right]\mathrm{cos}2\beta .$$
(25)
For $`\mathrm{sin}^2\theta _W=0.2312`$, $`\mathrm{tan}^4\theta _W=0.0904`$, and both bracketed expressions are extremely small. This accidental degeneracy implies that same-flavor sleptons may be highly degenerate. The physical mass splitting for staus is given in Fig. 5. For low $`\mathrm{tan}\beta `$ (and, by implication, for all $`\mathrm{tan}\beta `$ for selectrons and smuons), degeneracies of order 10 GeV or less are found throughout the parameter region. For large $`\mathrm{tan}\beta `$, however, large Yukawa effects dilute the degeneracy significantly.
Equation (25) also implies that even small off-diagonal entries may lead to large mixing. The left-right mixing for staus is given in Fig. 6. Throughout parameter space, and even for low $`\mathrm{tan}\beta `$, the stau mixing is nearly maximal. In fact, even smuon mixing may be significant — for large $`\mathrm{tan}\beta `$ and low $`M_{\text{aux}}`$, it too is almost maximal. Nearly degenerate and highly-mixed same flavor sleptons are a distinctive feature of the minimal anomaly-mediated model and distinguish it from other gravity- and gauge-mediated models, where, typically, $`m_{\stackrel{~}{l}_L}>m_{\stackrel{~}{l}_R}`$. These features may be precisely tested by measurements of slepton masses and mixings at future colliders.
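Two quick numerical checks of these statements, with made-up inputs: (i) the two bracketed combinations in Eq. (25) evaluated at $`\mathrm{sin}^2\theta _W=0.2312`$, and (ii) diagonalization of a 2$`\times `$2 mass matrix of the form of Eq. (23) with nearly degenerate diagonal entries, showing that even a modest off-diagonal term then produces near-maximal mixing. The matrix entries are illustrative numbers, not values from the figures.

```python
import numpy as np

# (i) The accidental cancellations in Eq. (25)
s2w = 0.2312
t2w = s2w / (1.0 - s2w)
print("11*tan^4(theta_W) - 1  =", 11.0 * t2w**2 - 1.0)
print("2*sin^2(theta_W) - 1/2 =", 2.0 * s2w - 0.5)

# (ii) Near-maximal mixing from nearly degenerate diagonal entries (Eq. (23)).
# Illustrative numbers only: diagonal entries split by 2 GeV^2 around (150 GeV)^2,
# off-diagonal m_tau*(A_tau - mu*tan(beta)) taken as (30 GeV)^2.
m2_LL, m2_RR, m2_LR = 150.0**2 + 1.0, 150.0**2 - 1.0, 30.0**2
M2 = np.array([[m2_LL, m2_LR],
               [m2_LR, m2_RR]])
vals, vecs = np.linalg.eigh(M2)
theta = np.arctan2(vecs[1, 0], vecs[0, 0])
print("mass eigenvalues [GeV]:", np.sqrt(vals))
print("|sin(2*theta_LR)| =", abs(np.sin(2.0 * theta)))
```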
The lighter stau $`\stackrel{~}{\tau }_1`$ is always the lightest charged slepton, and it therefore plays an important phenomenological role. The $`\stackrel{~}{\tau }_1`$ mass is displayed in Fig. 7.
For low $`m_0`$, $`\stackrel{~}{\tau }_1`$ is either tachyonic or excluded by experimental bounds. The current bounds are fairly complicated in this model, since the mass ordering and mass splittings between $`\stackrel{~}{\tau }_1`$, the Winos, and the sneutrinos vary throughout the parameter space. For staus decaying to neutralinos with a mass splitting greater than 15 GeV, combined LEP analyses of the $`\sqrt{s}=189\text{ GeV}`$ data yield the bound $`m_{\stackrel{~}{\tau }}>71\text{ GeV}`$ , but this drops to near the LEP I limit of 45 GeV as the mass splitting goes to zero. However, for stable staus, combined LEP analyses of data up to $`\sqrt{s}=183\text{ GeV}`$ imply $`m_{\stackrel{~}{\tau }}>87\text{ GeV}`$ . The light shaded region of Fig. 7 is excluded by $`m_{\stackrel{~}{\tau }}>70\text{ GeV}`$ and represents a rough summary of these bounds. In the remaining region, the bounds $`m_{\stackrel{~}{\nu }}>43\text{ GeV}`$ , $`m_{\stackrel{~}{e}}>89\text{ GeV}`$ , and $`m_{\stackrel{~}{\mu }}>84\text{ GeV}`$ are always satisfied. In the following, we will include the excluded shaded region in plots of observables that involve sleptons. For quantities such as squark masses or rates for $`bs\gamma `$, we omit this, as such quantities are well-defined even for small $`m_0`$, and in fact, the $`m_0=0`$ axis gives their values in anomaly-mediated models where the slepton mass problem is fixed without changing the squark and Higgs masses.
For large $`m_0`$, $`m_{\stackrel{~}{\tau }_1}\approx m_0`$, and the Wino is the LSP. This is the case in the unshaded region of Fig. 7. The experimental implications of the Wino LSP scenario have been discussed above in Sec. V A.
Finally, there exists an intermediate $`m_0`$ region, in which the LSP is either the $`\stackrel{~}{\tau }_1`$ or the $`\stackrel{~}{\nu }_\tau `$. In the $`\stackrel{~}{\tau }_1`$ LSP scenario (the dark shaded region of Fig. 7), the stau may be found at both LEP and the Tevatron through its spectacular anomalous $`dE/dx`$ and time-of-flight signatures . At the Tevatron, for example, for $`m_{\stackrel{~}{\tau }_1}\lesssim 150\text{ GeV}`$, $`\sigma (p\overline{p}\to \stackrel{~}{\tau }_1\stackrel{~}{\tau }_1^{*})\gtrsim 1`$ fb, and so a significant fraction of the stau LSP parameter space may be explored. Note that in this parameter region, the stau is absolutely stable, assuming R-parity conservation. (Recall that the gravitino mass is of order $`M_{\text{aux}}`$.) This scenario therefore requires some mechanism for diluting the stau density, such as late inflation with a low reheating temperature .
In the case of the sneutrino LSP (the blackened region of Fig. 7), there are many possible experimental signatures. While this region appears only for a limited range of SUSY parameters, superparticles tend to be relatively light in this region, with $`m_{\stackrel{~}{\tau }_1}\lesssim 100\text{ GeV}`$ and $`M_2\lesssim 110\text{ GeV}`$, and so it is amenable to study at LEP. In this region, the slepton mass ordering is always
$$\stackrel{~}{\nu }_\tau ,\stackrel{~}{\nu }_\mu ,\stackrel{~}{\nu }_e<\stackrel{~}{\tau }_1<\stackrel{~}{e}_R,\stackrel{~}{\mu }_1<\stackrel{~}{e}_L,\stackrel{~}{\mu }_2<\stackrel{~}{\tau }_2,$$
(26)
and the Wino triplet may appear anywhere between the sneutrinos and $`\stackrel{~}{\tau }_2`$. Typically, though not always, the only kinematically accessible superparticles at LEP are the sneutrinos, $`\stackrel{~}{\tau }_1`$ and the Winos. The two possible mass orderings and dominant decay modes in each scenario are then
$`\stackrel{~}{\tau }_1>\stackrel{~}{W}^{\pm ,0}>\stackrel{~}{\nu }:`$ $`\stackrel{~}{\tau }_1\to \tau \stackrel{~}{W}^0,\nu _\tau \stackrel{~}{W}^\pm `$ (28)
$`\stackrel{~}{W}^0\to \nu _l\stackrel{~}{\nu }_l,\stackrel{~}{W}^\pm \to l\stackrel{~}{\nu }_l`$
$`\stackrel{~}{W}^{\pm ,0}>\stackrel{~}{\tau }_1>\stackrel{~}{\nu }:`$ $`\stackrel{~}{W}^0\to \nu \stackrel{~}{\nu },\tau \stackrel{~}{\tau }_1,\stackrel{~}{W}^\pm \to l\stackrel{~}{\nu }_l,\nu _\tau \stackrel{~}{\tau }_1`$ (30)
$`\stackrel{~}{\tau }_1\to \pi ^\pm \stackrel{~}{\nu }_\tau .`$
### C Squarks
In anomaly-mediated SUSY breaking, squarks are universally very heavy, as their masses receive contributions from the strong coupling. The gauge coupling contribution to scalar squared masses is of the form $`b_ig_i^4`$, where $`b_i`$ is the one-loop $`\beta `$-function coefficient (see Appendix), and so the strong coupling contribution completely overwhelms those of the SU(2) and U(1) couplings. Squark masses for the first two generations are therefore both flavor- and chirality-blind; we find that the $`\stackrel{~}{u}_L`$, $`\stackrel{~}{u}_R`$, $`\stackrel{~}{d}_L`$, and $`\stackrel{~}{d}_R`$, and their second generation counterparts are all degenerate to within $`10`$ GeV throughout parameter space.
The first and second generation squark masses are given in Fig. 8. The squarks are hierarchically heavier than Winos and sleptons for low $`m_0`$, and their mass increases as $`m_0`$ increases. For $`m_0\gtrsim 2\text{ TeV}`$, the squark mass is above 2 TeV. Thus, the focus point naturalness behavior discussed in Sec. IV, which allows such large $`m_0`$, has important phenomenological consequences. Direct detection of 2 TeV squarks is likely to be impossible at the LHC or NLC, and must wait for even higher energy hadron or muon colliders. Note, however, that some superparticles, notably the gauginos, cannot evade detection at the LHC and NLC.
Unlike the squarks of the first two generations, the masses of third generation squarks $`\stackrel{~}{t}_L`$, $`\stackrel{~}{t}_R`$, $`\stackrel{~}{b}_L`$, and (for large $`\mathrm{tan}\beta `$) $`\stackrel{~}{b}_R`$ receive significant contributions from large Yukawa couplings. These are shown in Figs. 9 and 10 for small and large values of $`\mathrm{tan}\beta `$. Yukawa couplings always reduce the masses and their effect may be large. For example, $`m_{\stackrel{~}{t}_1}`$ may be reduced by as much as 40% relative to the first and second generation squark masses. At the LHC, therefore, stops and sbottoms may be produced in much larger numbers than the other squarks, adding to the importance of $`b`$-tagging.
As in the case of sleptons, third generation squarks may have large left-right mixing. For $`\mathrm{tan}\beta =30`$, left-right mixing in both the stops and sbottoms is large, and is nearly maximal for low $`m_0`$. For $`\mathrm{tan}\beta =3`$, sbottom mixing is negligible, but stop mixing may still be as large as $`\mathrm{sin}2\theta _{LR}^{\stackrel{~}{t}}\approx 0.2`$.
## VI Low Energy Probes
Anomaly-mediated supersymmetry breaking naturally suppresses flavor-violation in the first and second generations, but not all low energy constraints are therefore trivially satisfied. In particular, since anomaly-mediated soft terms depend on Yukawa couplings, non-trivial flavor mixing involving third generation squarks can be expected. We first study the flavor-changing process $`b\to s\gamma `$, which is well-known for being sensitive to third generation flavor violation. We then consider magnetic and electric dipole moments, observables that are flavor-conserving, but are nevertheless highly sensitive to SUSY effects.
### A $`𝒃\mathbf{\to }𝒔𝜸`$
In the standard model, the flavor-changing transition $`b\to s\gamma `$ is mediated by a $`W`$ boson at one-loop. In supersymmetric theories, $`b\to s\gamma `$ receives additional one-loop contributions from charged Higgs-, chargino-, gluino-, and neutralino-mediated processes. The charged Higgs contribution depends only on the charged Higgs mass and $`\mathrm{tan}\beta `$, interferes constructively with the standard model amplitude, and is known to be large even for charged Higgs masses beyond current direct experimental bounds. The supersymmetric contributions may also be large for some ranges of SUSY parameters. Thus, $`b\to s\gamma `$ provides an important probe of all supersymmetric models, including those that are typically safe from other flavor-violating constraints.
In the well-studied cases of minimal supergravity and gauge-mediated SUSY breaking , the chargino- and, to a lesser extent, gluino-mediated contributions may be significant for large $`\mathrm{tan}\beta `$. Neutralino contributions are always negligible. For $`\mu <0`$ (in our conventions), these contributions are constructive and so, for large $`\mathrm{tan}\beta `$, positive $`\mu `$ is favored.
In the present case of anomaly-mediated SUSY breaking, several new features arise. First, in contrast to the case of minimal supergravity and gauge-mediation where squark mixing arises only through RG evolution, flavor violation in the squark sector is present even in the boundary conditions (and receives additional contributions from RG evolution). More importantly, the signs of the parameter $`A_t`$ and the gluino mass $`M_3`$ are opposite to those of minimal supergravity and gauge-mediation. The leading contributions for large $`\mathrm{tan}\beta `$ in the mass insertion approximation from charginos and gluinos are given in Fig. 11. For large $`\mathrm{tan}\beta `$, the amplitudes $`𝒜_{\stackrel{~}{\chi }^\pm }\propto \text{sign}(\mu A_t)`$ and $`𝒜_{\stackrel{~}{g}}\propto \text{sign}(\mu M_3)`$ are both opposite in sign relative to their values in minimal supergravity and gauge-mediation.
$`B(B\to X_s\gamma )`$ may be calculated by first matching the full supersymmetric theory on to the effective Hamiltonian
$$\mathcal{H}_{\text{eff}}=-\frac{4G_F}{\sqrt{2}}V_{ts}^{*}V_{tb}\sum _{i=1}^8C_i𝒪_i$$
(31)
at the electroweak scale $`m_W`$. In the basis where the current and mass eigenstates are identified for $`d_L`$, $`d_R`$, and $`u_R`$, supersymmetry contributes dominantly to the Wilson coefficients $`C_7`$ and $`C_8`$ of the magnetic and chromomagnetic dipole operators
$`𝒪_7`$ $`=`$ $`{\displaystyle \frac{e}{16\pi ^2}}m_b(\overline{s}_L\sigma ^{\mu \nu }b_R)F_{\mu \nu }`$ (32)
$`𝒪_8`$ $`=`$ $`{\displaystyle \frac{g_s}{16\pi ^2}}m_b(\overline{s}_L\sigma ^{\mu \nu }T^ab_R)G_{\mu \nu }^a.`$ (33)
(Contributions to operators with chirality opposite to those above are suppressed by $`m_s/m_b`$ and are negligible.) We use next-to-leading order (NLO) matching conditions for the standard model and charged Higgs contributions. The remaining supersymmetric contributions are included at leading order . Some classes of NLO supersymmetric contributions have also been calculated ; however, a full NLO calculation is not yet available. For the present purposes, where we will be scanning over SUSY parameter space, the leading order results are sufficient. Note that the inclusion of some, but not all, NLO effects is formally inconsistent, but by doing so, we are effectively assuming that the NLO corrections in a given renormalization scheme are numerically small.
The Wilson coefficients $`C_i`$ at the weak scale are then evolved down to a low energy scale $`\mu _b`$ of order $`m_b`$, where matrix elements are evaluated using the resulting effective operators. The NLO anomalous dimension matrix is now known , as are the NLO matrix elements and the leading order QED and electroweak radiative corrections . These have been incorporated in the analysis of Ref. , where a simple form for $`B(B\to X_s\gamma )`$ in terms of weak scale Wilson coefficients is presented. The exact parametrization depends on the choice of $`\mu _b`$ and the photon energy cutoff $`E_\gamma ^{\text{min}}=\frac{1}{2}(1-\delta )m_B`$. We choose $`\mu _b=m_b`$ and $`\delta =0.9`$. The SUSY branching fraction is then given by
$$\frac{B(B\to X_s\gamma )}{B(B\to X_s\gamma )_{\text{SM}}}=1+0.681r_7+0.116r_7^2+0.0832r_8+0.00455r_8^2+0.0252r_7r_8,$$
(34)
where $`r_{7,8}`$ are the fractional deviations from standard model amplitudes:
$$r_{7,8}\equiv \frac{C_{7,8}(m_W)}{C_{7,8}^{\text{SM}}(m_W)}-1=\frac{𝒜_{H^\pm }+𝒜_{\stackrel{~}{\chi }^\pm }+𝒜_{\stackrel{~}{g}}+𝒜_{\stackrel{~}{\chi }^0}}{𝒜_{\text{SM}}}|_{7,8}.$$
(35)
For the standard model value, we take
$$B(B\to X_s\gamma )_{\text{SM}}=(3.29\pm 0.30)\times 10^{-4},$$
(36)
where the theoretical error includes uncertainties from scale dependence and standard model input parameters.
The most stringent experimental bounds are
CLEO: $`B(B\to X_s\gamma )=(3.15\pm 0.35_{\text{stat}}\pm 0.32_{\text{syst}}\pm 0.26_{\text{model}})\times 10^{-4}\text{[36]}`$ (37)
ALEPH: $`B(B\to X_s\gamma )=(3.11\pm 0.80_{\text{stat}}\pm 0.72_{\text{syst}})\times 10^{-4}\text{[37]},`$ (38)
which may be combined in a weighted average of
$$B(B\to X_s\gamma )_{\text{exp}}=(3.14\pm 0.48)\times 10^{-4}.$$
(39)
Bounds on SUSY parameter space are extremely sensitive to the treatment of errors. With this in mind, however, to guide the eye in the figures below, we also include bounds from Eq. (39) with $`2\sigma `$ experimental errors:
$$2.18\times 10^{-4}<B(B\to X_s\gamma )<4.10\times 10^{-4}.$$
(40)
Similar bounds would follow from combining 1$`\sigma `$ experimental and theoretical errors linearly.
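For orientation, the parametrization of Eq. (34) together with the standard model value of Eq. (36) and the window of Eq. (40) can be packaged into a small helper; the sample $`(r_7,r_8)`$ values below are invented purely to exercise it.

```python
def b_to_s_gamma(r7, r8, B_SM=3.29e-4):
    """Eq. (34): branching ratio relative to the SM, times the SM value of Eq. (36)."""
    ratio = (1.0 + 0.681 * r7 + 0.116 * r7**2
                 + 0.0832 * r8 + 0.00455 * r8**2 + 0.0252 * r7 * r8)
    return ratio * B_SM

LOW, HIGH = 2.18e-4, 4.10e-4          # 2-sigma window of Eq. (40)

for r7, r8 in [(0.0, 0.0), (0.4, 0.2), (-0.8, -0.3)]:   # illustrative only
    B = b_to_s_gamma(r7, r8)
    ok = "allowed" if LOW < B < HIGH else "excluded"
    print(f"r7={r7:+.2f}, r8={r8:+.2f}:  B(B->Xs gamma) = {B:.2e}  ({ok})")
```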
Given a set of parameters $`M_{\text{aux}}`$, $`m_0`$, $`\mathrm{tan}\beta `$, and $`\text{sign}(\mu )`$, we may now determine $`B(B\to X_s\gamma )`$, assuming the central value of Eq. (36). In Fig. 12 we plot $`B(B\to X_s\gamma )`$ as a function of $`m_{H^\pm }`$, for three representative values of $`\mathrm{tan}\beta `$, fixed choice of $`\text{sign}(\mu )`$, and scanning over the remaining parameters $`M_{\text{aux}}`$ and $`m_0`$. The solid lines show the value when only the charged Higgs diagram is included.
As in minimal supergravity and gauge-mediated models, the neutralino diagrams are negligible, but the chargino and, to a lesser extent, gluino diagrams may be substantial, especially for large $`\mathrm{tan}\beta `$. In contrast to these other SUSY models, however, as a result of the sign flips in $`A_t`$ and $`M_3`$ noted above, both chargino and gluino contributions enhance the standard model prediction for $`\mu >0`$. The parameter space with $`\mu >0`$ is thus highly constrained, and requires large charged Higgs masses, especially for large $`\mathrm{tan}\beta `$. For example, for $`\mathrm{tan}\beta =30`$, the upper bound of Eq. (40) implies $`m_{H^\pm }\gtrsim 700\text{ GeV}`$, significantly more stringent than the bound $`m_{H^\pm }\gtrsim 400\text{ GeV}`$ that would apply in the absence of chargino and gluino contributions. For $`\mu <0`$, the supersymmetric contributions may cancel the charged Higgs contribution, and the parameter space is constrained only for very low $`M_{\text{aux}}`$ and $`m_0`$, where the destructive SUSY contributions push $`B(B\to X_s\gamma )`$ below experimental bounds.
In Figs. 13 and 14 we plot $`B(B\to X_s\gamma )`$ in the $`(m_0,M_{\text{aux}})`$ plane for various values of $`\mathrm{tan}\beta `$ and $`\text{sign}(\mu )`$. Regions excluded by Eq. (40) are shaded; for $`\mu >0`$ and large $`\mathrm{tan}\beta `$, this includes much of the parameter space with light sleptons and light Winos.
### B Muon magnetic dipole moment
While anomaly-mediated SUSY breaking does not contribute substantially to flavor-violating observables involving the first and second generations, it may give significant contributions to flavor-conserving observables involving these generations. It is well-known that SUSY loops may give a sizable contribution to the muon magnetic dipole moment (MDM) . The SUSY contribution to the muon MDM is from smuon-neutralino and sneutrino-chargino loop diagrams. Since these superparticles may have masses comparable to the electroweak scale, these contributions may be comparable to, or even larger than, electroweak contributions from $`W`$- and $`Z`$-boson diagrams. The on-going Brookhaven E821 experiment is expected to measure the muon MDM with an accuracy of $`0.4\times 10^{-9}`$, which is a few times smaller than the electroweak contribution to the muon MDM. Therefore, the Brookhaven E821 experiment will provide an important constraint on SUSY models.
In general, the muon anomalous MDM is given by the coefficient of the “magnetic moment-type” operator
$`\mathcal{L}_{\mathrm{MDM}}={\displaystyle \frac{e}{4m_\mu }}a_\mu \overline{\mu }\sigma _{\mu \nu }\mu F_{\mu \nu },`$ (41)
where the anomalous magnetic moment $`a_\mu `$ is related to the muon $`g-2`$ by $`a_\mu =\frac{1}{2}(g-2)_\mu `$.
As suggested from the structure of the operator, diagrams for the muon anomalous MDM require a left-right muon transition. In SUSY diagrams, this transition may occur through a chirality flip along the external muon line, through left-right mixing in the smuon mass matrix, or through the interaction of a muon and smuon with a Higgsino. In the latter two cases, the diagrams are proportional to the muon Yukawa coupling constant and are therefore enhanced for large $`\mathrm{tan}\beta `$. These diagrams also include gaugino mass insertions. As a result, in the large $`\mathrm{tan}\beta `$ limit, the muon anomalous MDM is given by
$`a_\mu ^{\mathrm{SUSY}}`$ $`\simeq `$ $`{\displaystyle \frac{g_1^2}{16\pi ^2}}m_\mu ^2\mu M_1\mathrm{tan}\beta \times F_1(m_{\stackrel{~}{\mu }}^2,m_{\stackrel{~}{\chi }^0}^2)`$ (43)
$`+{\displaystyle \frac{g_2^2}{16\pi ^2}}m_\mu ^2\mu M_2\mathrm{tan}\beta \times F_2(m_{\stackrel{~}{\mu }}^2,m_{\stackrel{~}{\nu }}^2,m_{\stackrel{~}{\chi }^0}^2,m_{\stackrel{~}{\chi }^\pm }^2),`$
where the $`F`$ functions (see the last reference in Ref. ) are typically $`F\sim m_{\mathrm{SUSY}}^{-4}`$, with $`m_{\mathrm{SUSY}}`$ being the mass scale of the superparticles in the loop. For large $`\mathrm{tan}\beta `$, then, the SUSY contribution $`a_\mu ^{\mathrm{SUSY}}`$ is approximately proportional to $`\mathrm{tan}\beta `$ and may be much larger than the electroweak contribution.
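A rough order-of-magnitude sketch of the chargino piece of Eq. (43), with the crude assumptions $`F_2\sim 1/m_{\mathrm{SUSY}}^4`$ and $`\mu M_2\sim m_{\mathrm{SUSY}}^2`$ for a common superpartner mass scale; it is meant only to exhibit the $`\mathrm{tan}\beta `$ enhancement and the decoupling with $`m_{\mathrm{SUSY}}`$, not to reproduce Fig. 15.

```python
import math

# Order-of-magnitude estimate of the chargino piece of Eq. (43).
# Crude assumptions: F_2 ~ 1/m_SUSY^4 and mu*M_2 ~ m_SUSY^2; the actual
# loop functions and mass splittings are ignored.
m_mu = 0.1057                      # GeV
g2_sq = 0.425                      # assumed SU(2) coupling squared at the weak scale

def a_mu_susy(m_susy_GeV, tan_beta):
    return g2_sq / (16.0 * math.pi**2) * m_mu**2 / m_susy_GeV**2 * tan_beta

for m_susy in (200.0, 500.0, 2000.0):
    for tb in (3.0, 30.0):
        print(f"m_SUSY = {m_susy:6.0f} GeV, tan(beta) = {tb:4.0f}: "
              f"a_mu^SUSY ~ {a_mu_susy(m_susy, tb):.1e}")
```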
Results for the SUSY contribution to the muon MDM $`a_\mu ^{\mathrm{SUSY}}`$ in the minimal anomaly-mediated model are given in Fig. 15.
Both $`\mathrm{tan}\beta `$ enhanced and unenhanced contributions were included by using the mass eigenstate bases of squarks, sleptons, neutralinos, and charginos. The SUSY contribution to the muon MDM is typically $`10^{-8}`$–$`10^{-10}`$, and is enhanced for large $`\mathrm{tan}\beta `$. Furthermore, heavier superparticles suppress $`a_\mu ^{\mathrm{SUSY}}`$, as expected.
Experimentally, the muon anomalous MDM is currently constrained to be
$`a_\mu ^{\mathrm{exp}}=(1165923.0\pm 8.4)\times 10^{-9},`$ (44)
and hence the anomaly-mediated SUSY contribution is usually smaller than the present experimental accuracy, unless $`\mathrm{tan}\beta `$ is very large. However, as mentioned above, in the near future, the Brookhaven E821 experiment will improve the measurement, with a projected error of $`0.4\times 10^{-9}`$. If this is realized, some anomaly may be seen in the muon MDM in the anomaly-mediated SUSY breaking scenario, particularly for moderate or large values of $`\mathrm{tan}\beta `$.
### C Electric dipole moments of the electron and neutron
In general, parameters in SUSY models are complex, and (some combinations of) their phases are physical. In the anomaly-mediated SUSY breaking scenario, most of the SUSY breaking parameters are proportional to the single parameter $`M_{\text{aux}}`$, and so many of the phases can be rotated away. In particular, the gaugino mass parameters and the $`A`$ parameters can be made real simultaneously. However, even in anomaly-mediated SUSY breaking, a physical phase may exist in the $`\mu `$ and $`B_\mu `$ parameters since their origins are not well-understood. In our analysis, we have not assumed any relation between $`\mu `$ and $`B_\mu `$, and have simply constrained them so that electroweak symmetry is properly broken. In this approach, one physical phase remains, which is given by
$$\theta _{\mathrm{phys}}\equiv \mathrm{Arg}(\mu B_\mu ^{*}M_i).$$
(45)
If this phase is non-vanishing, electric dipole moments (EDMs) are generated. As is known from general analyses, the EDMs of the electron and neutron may be extremely large unless $`|\mathrm{sin}\theta _{\mathrm{phys}}|`$ is suppressed .
To determine the constraints on this phase in the anomaly-mediated framework, we calculate the electron and neutron EDMs with the minimal anomaly-mediated model mass spectrum. The EDM $`d_f`$ of a fermion $`f`$ is given by the effective electric dipole interaction
$$\mathcal{L}_{\mathrm{EDM}}=-\frac{i}{2}d_f\overline{f}\sigma _{\mu \nu }\gamma _5fF_{\mu \nu },$$
(46)
which becomes $`\mathcal{L}_{\mathrm{EDM}}\to d_f\vec{\sigma }\cdot \vec{E}`$ in the non-relativistic limit.
The calculation of the electron EDM is similar to that of the muon anomalous MDM, since the structure of the Feynman diagrams is almost identical. (In the calculation of the muon anomalous MDM, we neglected the effect of CP violation; if $`\mathrm{sin}\theta _{\mathrm{phys}}\ne 0`$, $`a_\mu `$ is proportional to $`\mathrm{cos}\theta _{\mathrm{phys}}`$ in the large $`\mathrm{tan}\beta `$ limit.) If the slepton masses are flavor universal, $`a_\mu `$ and $`d_e`$ are approximately related by
$`d_e\simeq {\displaystyle \frac{m_e}{2m_\mu ^2}}\mathrm{tan}\theta _{\mathrm{phys}}\times a_\mu ^{\mathrm{SUSY}}.`$ (47)
Therefore, the electron EDM is also proportional to $`\mathrm{tan}\beta `$.
The calculation of the up and down quark EDMs is also straightforward, given the SUSY model parameters. The only major difference from the electron EDM is the contribution from the squark-gluino diagram. However, in calculating the neutron EDM, we must adopt some model for the structure of the neutron. We use the simplest model, i.e., the non-relativistic quark model. The neutron EDM is then given by
$`d_n={\displaystyle \frac{1}{3}}(4d_d-d_u).`$ (48)
Since $`d_d`$ is also proportional to $`\mathrm{tan}\beta `$, the neutron EDM is also enhanced for large $`\mathrm{tan}\beta `$.
Figures 16 and 17 show the EDMs of the electron and neutron, respectively, in the minimal anomaly-mediated model. The EDMs are proportional to $`\mathrm{sin}\theta _{\mathrm{phys}}`$. In these plots, we assume maximal CP violation, i.e., $`\mathrm{sin}\theta _{\mathrm{phys}}=1`$.
Currently, there is no experimental result which suggests a non-vanishing EDM, and experimental constraints on the EDMs are very stringent. For the electron EDM, using $`d_e=(0.18\pm 0.12\pm 0.10)\times 10^{-26}e`$ cm , we obtain the constraint
$$|d_e|<0.44\times 10^{-26}e\mathrm{cm},$$
(49)
where the right-hand side is the upper bound on $`d_e`$ at 90% C.L. For the neutron, $`d_n`$ is constrained to be
$$|d_n|<0.97\times 10^{-25}e\mathrm{cm}.$$
(50)
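The chain from Eq. (47) to the bounds of Eqs. (49) and (50) is easy to automate; the sketch below converts an assumed $`a_\mu ^{\mathrm{SUSY}}`$ and phase into $`d_e`$ (using $`\mathrm{\hbar }c\approx 1.973\times 10^{-14}`$ GeV cm to translate GeV<sup>-1</sup> into cm), and combines illustrative quark EDMs into $`d_n`$ via Eq. (48). All numerical inputs are invented for illustration.

```python
HBARC_CM = 1.973e-14        # GeV * cm, to convert 1/GeV to cm

def d_e_from_amu(a_mu_susy, tan_theta_phys):
    """Eq. (47): electron EDM in units of e*cm (all inputs assumed/illustrative)."""
    m_e, m_mu = 0.000511, 0.1057            # GeV
    d_e_inv_GeV = m_e / (2.0 * m_mu**2) * tan_theta_phys * a_mu_susy
    return d_e_inv_GeV * HBARC_CM

def d_n_quark_model(d_d, d_u):
    """Eq. (48): neutron EDM from quark EDMs in the non-relativistic quark model."""
    return (4.0 * d_d - d_u) / 3.0

E_BOUND, N_BOUND = 0.44e-26, 0.97e-25       # e*cm, Eqs. (49)-(50)

for a_mu, tan_th in [(1.0e-9, 1.0), (1.0e-9, 0.005)]:    # illustrative values
    d_e = d_e_from_amu(a_mu, tan_th)
    ok = "allowed" if abs(d_e) < E_BOUND else "excluded"
    print(f"a_mu={a_mu:.0e}, tan(theta)={tan_th:5.3f}: d_e = {d_e:.2e} e cm ({ok})")

# illustrative quark EDMs (e*cm), not derived here:
d_n = d_n_quark_model(d_d=2.0e-26, d_u=-0.5e-26)
print(f"d_n = {d_n:.2e} e cm ({'allowed' if abs(d_n) < N_BOUND else 'excluded'})")
```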
The naturalness arguments of Sec. IV play an important part in evaluating the sensitivity of the EDMs. For $`d_e`$ and small $`\mathrm{tan}\beta `$, while very large effects are possible, $`d_e`$ may be within the experimental bounds even for $`|\mathrm{sin}\theta _{\mathrm{phys}}|`$ close to 1 without violating the condition $`|\mu |\lesssim 1\text{ TeV}`$. For moderate and large $`\mathrm{tan}\beta `$, $`d_e`$ becomes much larger, and the physical phase $`\theta _{\mathrm{phys}}`$ is constrained to be $`|\mathrm{sin}\theta _{\mathrm{phys}}|\lesssim 𝒪(10^{-2})`$ for $`m_0\lesssim 1\text{ TeV}`$. However, for such $`\mathrm{tan}\beta `$, the naturalness bound on $`m_0`$ is also relaxed, and reasonably large $`𝒪(0.1)`$ phases are possible in natural regions of parameter space where $`d_e`$ is suppressed by slepton masses of a few TeV. Thus, while large effects comparable to current bounds are predicted in much of parameter space, constraints from $`d_e`$ may also be satisfied by superpartner decoupling in the minimal anomaly-mediated model. For $`d_n`$, similar conclusions hold. In fact, the constraints from $`d_n`$ on the CP-violating phases are more easily satisfied, and $`d_e`$ appears to be the more stringent constraint at present.
In our discussion, as noted above, we have not assumed a specific model for the $`\mu `$ and $`B_\mu `$ parameters, and hence we regarded $`\theta _{\mathrm{phys}}`$ as a free parameter. However, several mechanisms have been proposed to generate $`\mu `$ and $`B_\mu `$ in which $`\mathrm{sin}\theta _{\mathrm{phys}}`$ vanishes . In those scenarios, of course, $`d_e`$ and $`d_n`$ vanish, and the EDM constraints are automatically satisfied.
## VII Conclusions
In this study we have analyzed a model of “supernatural supersymmetry,” in which squarks and sleptons may be much heavier than their typical naturalness limits, and SUSY is broken in another world. SUSY breaking is then communicated to our world dominantly via anomaly-mediation, and we have considered in detail a model in which tachyonic sleptons are avoided by a non-anomaly-mediated universal scalar mass $`m_0`$.
The novel naturalness properties of this model are a result of a “focus point” behavior in the RG evolution of $`m_{H_u}^2`$, such that its weak scale value is highly insensitive to $`m_0`$. Naturalness bounds on superparticle masses are therefore highly variable and differ from naive expectations. Naturalness places strong bounds on gaugino masses, and Wino masses $`M_2\lesssim 200\text{ GeV}`$ are preferred. On the other hand, for moderate and large values of $`\mathrm{tan}\beta `$, multi-TeV values of $`m_0`$, and therefore slepton and squark masses, are natural.
A number of spectacular collider signals are possible. The possibility of a highly degenerate triplet of Wino LSPs has recently attracted a great deal of attention . In the minimal anomaly-mediated scenario, we find that Winos are not only the LSPs in much of parameter space, but are typically light, with mass $`\lesssim 200\text{ GeV}`$, and extraordinarily degenerate, with charged Wino decay lengths of several centimeters. Such Wino characteristics are ideal for Tevatron searches, where Winos may appear as vertex detector track stubs in monojet events. The prospects for discovery at the Tevatron in Run II or III are highly promising .
In the remaining parameter space, the LSP is either the lighter stau, or the tau sneutrino. In the $`\stackrel{~}{\tau }_1`$ LSP scenario, the $`\stackrel{~}{\tau }_1`$ is typically lighter than 200 GeV and is stable. It may be found in searches for stable charged massive particles at both LEP and the Tevatron . In the $`\stackrel{~}{\nu }_\tau `$ LSP scenario, the Winos, $`\stackrel{~}{\tau }_1`$ and sneutrinos are all $`\lesssim 110\text{ GeV}`$. In both scenarios, ongoing searches at LEP and the Tevatron will be able to probe substantial portions of the relevant parameter space.
The minimal anomaly-mediated model also has a number of other features that distinguish it from other models. In addition to characteristic gaugino mass ratios, these include highly degenerate same-flavor sleptons, and large left-right mixing. If SUSY is discovered, measurements of slepton masses and mixings will provide strong evidence for or against the minimal model and its assumption of an additional universal slepton mass.
We have also considered a variety of low energy observables that are sensitive probes of anomaly-mediated parameter space. Effects on the flavor-changing process $`b\to s\gamma `$ may be large, and significant regions of parameter space for large $`\mathrm{tan}\beta `$ and $`\mu >0`$ are already excluded. The anomalous magnetic moment of the muon may also be affected at levels soon to be probed by experiment. Finally, the electron and neutron electric dipole moments provide rather strong constraints on the CP-violating phase $`\theta _{\text{phys}}`$ in much of parameter space, but even for large $`\mathrm{tan}\beta `$, $`𝒪(0.1)`$ phases are still allowed for multi-TeV $`m_0`$ at its focus point naturalness limit.
It is interesting to note that positive signals in these low energy experiments may not only provide evidence for SUSY, but may also exclude some supersymmetric interpretations and favor others. For example, the signs of the SUSY contributions to $`b\to s\gamma `$ and $`a_\mu ^{\mathrm{SUSY}}`$ are determined by $`\text{sign}(\mu M_3)`$ (here we assume that the signs of $`M_3`$ and $`A_t`$ are correlated, as they are in anomaly-mediation, and, through RG evolution, in gauge-mediated models and minimal supergravity) and $`\text{sign}(\mu M_2)`$, respectively. A large anomalous measurement of $`a_\mu ^{\mathrm{SUSY}}`$ would imply large $`\mathrm{tan}\beta `$, and, given the current bounds on $`b\to s\gamma `$, a preferred sign for $`\text{sign}(\mu M_3)`$. The sign of the $`a_\mu ^{\mathrm{SUSY}}`$ anomaly then determines $`\text{sign}(M_2M_3)`$. For example, assuming a SUSY interpretation, a large negative anomalous MDM measurement would imply $`M_2M_3<0`$, and would favor anomaly-mediated models over virtually all other well-motivated models.
Finally, as stated in Sec. III, the assumption of a universal scalar mass contribution, while possibly generated by bulk contributions , does not hold generally in anomaly-mediated scenarios. Several features presented above depend on various parts of this assumption, and we therefore close with a brief discussion of these dependences.
The naturalness properties described above, and, in particular, the focus point behavior, result from the fact that the non-anomaly-mediated piece is identical for $`m_{H_u}^2`$, $`m_{U_3}^2`$, and $`m_{Q_3}^2`$. While the focus point mechanism as implemented here relies on this subset of the universal boundary conditions, a variety of other boundary conditions also have similar properties (for example, the initial condition $`(m_{H_u}^2,m_{U_3}^2,m_{Q_3}^2)=m_0^2(1,1+x,1-x)`$, for any $`x`$, also leads to focus point behavior), and it would be interesting to explore applications of the focus point mechanism in other settings. The accidental degeneracy of left- and right-handed sleptons, and the possibility for large left-right mixings, holds only if both left- and right-handed sleptons receive the same non-anomaly-mediated contribution. Measurement of large left-right smuon mixing, along with confirmation of anomaly-mediated gaugino mass parameters, for example, would therefore be strong evidence for anomaly-mediation with a universal slepton mass contribution. Finally, the low energy observables discussed are sensitive quantitatively to either the hadronic or leptonic superpartner spectrum. However, qualitative results, such as the stringency of constraints for large $`\mathrm{tan}\beta `$, can be expected to remain valid for a variety of anomaly-mediated models, as long as the attractive flavor properties of anomaly-mediation are preserved in these models and they do not have new large sources of flavor violation.
###### Acknowledgements.
We are grateful to Greg Anderson, Jon Bagger, Toru Goto, Erich Poppitz, Lisa Randall, Yael Shadmi, Yuri Shirman, and Frank Wilczek for helpful correspondence and conversations. The work of JLF was supported by the Department of Energy under contract DE–FG02–90ER40542 and through the generosity of Frank and Peggy Taplin. The work of TM was supported by the National Science Foundation under grant PHY–9513835 and a Marvin L. Goldberger Membership.
## Anomaly-Mediated Boundary Conditions
In this appendix, we present the leading order soft supersymmetry breaking terms, first for a general anomaly-mediated supersymmetric theory, and then for the minimal anomaly-mediated model.
Consider a supersymmetric theory with simple gauge group $`G`$. The anomaly-mediated boundary conditions are completely specified in terms of the gauge coupling $`g`$, supersymmetric Yukawa couplings
$$W=\frac{1}{6}Y^{ijk}\varphi _i\varphi _j\varphi _k,$$
(51)
and the supersymmetry breaking parameter $`M_{\text{aux}}`$.
In the convention that the soft supersymmetry-breaking terms are
$$\mathcal{L}_{\text{SSB}}=-\frac{1}{2}M_\lambda (i\lambda )(i\lambda )-\frac{1}{2}(m^2)_i^j\stackrel{~}{\varphi }^i\stackrel{~}{\varphi }_j-\frac{1}{6}A^{ijk}\stackrel{~}{\varphi }_i\stackrel{~}{\varphi }_j\stackrel{~}{\varphi }_k,$$
(52)
the leading order anomaly-mediated soft supersymmetry breaking terms are
$`M_\lambda |_{\mathrm{AM}}`$ $`=`$ $`{\displaystyle \frac{1}{16\pi ^2}}bg^2M_{\mathrm{aux}}`$ (53)
$`(m^2)_i^j|_{\mathrm{AM}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\dot{\gamma }\right)_i^jM_{\mathrm{aux}}^2`$ (54)
$`A^{ijk}|_{\mathrm{AM}}`$ $`=`$ $`{\displaystyle \underset{m}{}}\left(Y^{mjk}\gamma _m^i+Y^{imk}\gamma _m^j+Y^{ijm}\gamma _m^k\right)M_{\mathrm{aux}},`$ (55)
where
$$\gamma _i^j=\frac{1}{2}Y_{imn}Y^{jmn}-2\delta _i^jg^2C(i).$$
(56)
Here $`\dot{()}\equiv d/d\mathrm{ln}\mu `$, $`Y_{ijk}=(Y^{ijk})^{*}`$, and the one-loop $`\beta `$-function coefficient is $`b=S(R)-3C(G)`$, where $`C(i)`$ is the quadratic Casimir invariant for representation $`i`$, and $`S(R)`$ is the total Dynkin index summed over all the chiral superfields. In terms of the matter field wavefunction $`Z`$, $`\gamma _i^j=\frac{1}{2}\dot{\left(\mathrm{ln}Z\right)}_i^j`$.
For minimal field content, anomaly-mediated gaugino masses are given as
$$M_i=\frac{1}{16\pi ^2}b_ig_i^2M_{\mathrm{aux}},$$
(57)
where $`b_i=(\frac{33}{5},1,-3)`$ in the GUT normalization. Furthermore, with the superpotential
$`W=U_i(𝒀_𝒖)_{ij}Q_jH_u+D_i(𝒀_𝒅)_{ij}Q_jH_d+E_i(𝒀_𝒆)_{ij}L_jH_d,`$ (58)
the flavor-dependent wavefunction factors are
$`16\pi ^2\gamma _{H_u}`$ $`=`$ $`3\text{Tr}(𝒀_𝒖^{\dagger }𝒀_𝒖)-{\displaystyle \frac{3}{2}}g_2^2-{\displaystyle \frac{3}{10}}g_1^2`$ (59)
$`16\pi ^2\gamma _{H_d}`$ $`=`$ $`3\text{Tr}(𝒀_𝒅^{\dagger }𝒀_𝒅)+\text{Tr}(𝒀_𝒆^{\dagger }𝒀_𝒆)-{\displaystyle \frac{3}{2}}g_2^2-{\displaystyle \frac{3}{10}}g_1^2`$ (60)
$`16\pi ^2𝜸_𝑸`$ $`=`$ $`𝒀_𝒖^{\dagger }𝒀_𝒖+𝒀_𝒅^{\dagger }𝒀_𝒅-{\displaystyle \frac{8}{3}}g_3^2-{\displaystyle \frac{3}{2}}g_2^2-{\displaystyle \frac{1}{30}}g_1^2`$ (61)
$`16\pi ^2𝜸_𝑼`$ $`=`$ $`2𝒀_𝒖^{*}𝒀_𝒖^T-{\displaystyle \frac{8}{3}}g_3^2-{\displaystyle \frac{8}{15}}g_1^2`$ (62)
$`16\pi ^2𝜸_𝑫`$ $`=`$ $`2𝒀_𝒅^{*}𝒀_𝒅^T-{\displaystyle \frac{8}{3}}g_3^2-{\displaystyle \frac{2}{15}}g_1^2`$ (63)
$`16\pi ^2𝜸_𝑳`$ $`=`$ $`𝒀_𝒆^{\dagger }𝒀_𝒆-{\displaystyle \frac{3}{2}}g_2^2-{\displaystyle \frac{3}{10}}g_1^2`$ (64)
$`16\pi ^2𝜸_𝑬`$ $`=`$ $`2𝒀_𝒆^{*}𝒀_𝒆^T-{\displaystyle \frac{6}{5}}g_1^2,`$ (65)
where the Yukawa couplings $`𝒀`$ are $`3\times 3`$ matrices in generation space.
The gauge and Yukawa coupling RG equations are as in Ref. , and are reproduced here for convenience and completeness:
$`16\pi ^2\dot{g}_i`$ $`=`$ $`b_ig_i^3`$ (66)
$`16\pi ^2\dot{𝒀_𝒖}`$ $`=`$ $`𝒀_𝒖\left[3\text{Tr}(𝒀_𝒖𝒀_𝒖^{\dagger })+3𝒀_𝒖^{\dagger }𝒀_𝒖+𝒀_𝒅^{\dagger }𝒀_𝒅-{\displaystyle \frac{16}{3}}g_3^2-3g_2^2-{\displaystyle \frac{13}{15}}g_1^2\right]`$ (67)
$`16\pi ^2\dot{𝒀_𝒅}`$ $`=`$ $`𝒀_𝒅\left[3\text{Tr}(𝒀_𝒅𝒀_𝒅^{\dagger })+\text{Tr}(𝒀_𝒆𝒀_𝒆^{\dagger })+3𝒀_𝒅^{\dagger }𝒀_𝒅+𝒀_𝒖^{\dagger }𝒀_𝒖-{\displaystyle \frac{16}{3}}g_3^2-3g_2^2-{\displaystyle \frac{7}{15}}g_1^2\right]`$ (68)
$`16\pi ^2\dot{𝒀_𝒆}`$ $`=`$ $`𝒀_𝒆\left[3\text{Tr}(𝒀_𝒅𝒀_𝒅^{\dagger })+\text{Tr}(𝒀_𝒆𝒀_𝒆^{\dagger })+3𝒀_𝒆^{\dagger }𝒀_𝒆-3g_2^2-{\displaystyle \frac{9}{5}}g_1^2\right].`$ (69)
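The one-loop gauge equations in Eq. (66) integrate in closed form, $`1/\alpha _i(\mu )=1/\alpha _i(m_Z)-\frac{b_i}{2\pi }\mathrm{ln}(\mu /m_Z)`$, which the sketch below evaluates with assumed weak-scale inputs (GUT-normalized $`\alpha _1`$) and no threshold corrections.

```python
import math

# One-loop running of Eq. (66) in closed form:
#   1/alpha_i(mu) = 1/alpha_i(mZ) - b_i/(2*pi) * ln(mu/mZ)
b = {"g1": 33.0 / 5.0, "g2": 1.0, "g3": -3.0}
alpha_mZ = {"g1": 0.0169, "g2": 0.0338, "g3": 0.118}   # assumed values at m_Z
mZ = 91.19

def alpha(name, mu):
    return 1.0 / (1.0 / alpha_mZ[name] - b[name] / (2.0 * math.pi) * math.log(mu / mZ))

for mu in (1e3, 1e8, 1e13, 2e16):
    vals = {k: alpha(k, mu) for k in b}
    print(f"mu = {mu:8.1e} GeV  " +
          "  ".join(f"alpha_{k[-1]} = {v:.4f}" for k, v in vals.items()))
```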
Our sign convention for the $`\mu `$ and $`A`$ parameters is such that, with soft terms as defined in Eq. (52), the chargino mass terms are $`(\psi ^{-})^T𝑴_{\stackrel{\mathbf{~}}{𝝌}^\mathbf{\pm }}\psi ^++\mathrm{h}.\mathrm{c}.`$, where $`(\psi ^\pm )^T=(-i\stackrel{~}{W}^\pm ,\stackrel{~}{H}^\pm )`$ and
$$𝑴_{\stackrel{\mathbf{~}}{𝝌}^\mathbf{\pm }}=\left(\begin{array}{cc}M_2& \sqrt{2}m_W\mathrm{sin}\beta \\ \sqrt{2}m_W\mathrm{cos}\beta & \mu \end{array}\right),$$
(70)
and the stop left-right mixing terms are $`m_t(A_t-\mu \mathrm{cot}\beta )`$.
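As a small usage example of Eq. (70), the chargino mass matrix can be diagonalized numerically by a singular value decomposition; the inputs below are illustrative values in the $`M_2\ll |\mu |`$ regime of Fig. 4, and the lighter eigenvalue comes out close to $`M_2`$, i.e., Wino-like, as used in Sec. V A.

```python
import numpy as np

# Chargino mass matrix of Eq. (70); illustrative inputs only.
mW, beta = 80.4, np.arctan(10.0)       # tan(beta) = 10, illustrative
M2, mu = 180.0, -700.0                 # GeV, illustrative

M_chargino = np.array([[M2,                        np.sqrt(2.0) * mW * np.sin(beta)],
                       [np.sqrt(2.0) * mW * np.cos(beta), mu                      ]])
masses = np.linalg.svd(M_chargino, compute_uv=False)
print("chargino masses [GeV]:", np.round(np.sort(masses), 1))
# For |mu| >> M2 the lighter state is Wino-like with mass close to M2.
```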
# Microwave-induced constant voltage steps in surface junctions of Bi2Sr2CaCu2O8+δ single crystals
## Abstract
We have observed the zero-crossing steps in a surface junction of a mesa structure micro-fabricated on the surface of a Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> single crystal. With the application of microwave of frequencies 76 and 94 GHz, the current-voltage characteristics show clear voltage steps satisfying the ac Josephson relation. Increasing the microwave power, the heights of the steps show the Bessel-function behavior up to step number $`n=4`$. We confirm that the intrinsic surface junction meets the criterion for the observation of zero-crossing steps.
Irradiated with a microwave of frequency $`f`$, Josephson tunnel junctions can spontaneously exhibit quantized dc voltages of $`V_n=nhf/2e`$ in the absence of a bias current, where $`n`$ is an integer and $`h`$ is the Planck’s constant. In current-voltage ($`I`$-$`V`$) characteristics, this effect manifests itself as constant voltage steps crossing the zero-current axis. The occurrence of these voltage steps is a direct consequence of the ac Josephson effect and the phase-coherent pair tunneling in response to an external electromagnetic excitation. Since no voltages other than the quantized values $`V_n`$ are present for zero current bias, Josephson tunnel junctions are ideal as voltage standards which require constant voltage output independent of environmental parameters such as temperature or humidity. Thus, most Josephson voltage standards currently in use consist of several thousands of Nb/AlO<sub>x</sub>/Nb tunnel junctions connected in series, with each junction exhibiting highly hysteretic $`I`$-$`V`$ characteristics.
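As a numerical aside (not part of the original discussion), the step spacing $`hf/2e`$ for the frequencies used here, and the ideal Bessel-function dependence of the step heights on the normalized microwave amplitude expected in the simplest voltage-driven junction picture, can be tabulated as follows; the drive amplitude chosen for the Bessel argument is an arbitrary illustration.

```python
from scipy.special import jv   # Bessel function of the first kind

h = 6.62607015e-34     # J*s
e = 1.602176634e-19    # C

def shapiro_step_voltage(n, f_Hz):
    """V_n = n*h*f/(2e), in volts."""
    return n * h * f_Hz / (2.0 * e)

for f_GHz in (76.0, 94.0):
    v1 = shapiro_step_voltage(1, f_GHz * 1e9)
    print(f"f = {f_GHz} GHz: step spacing h*f/2e = {v1 * 1e6:.1f} microvolts")

# Ideal Bessel dependence of the n-th step height on the normalized ac drive
# amplitude x = 2e*V_ac/(h*f); x = 2.5 is an arbitrary illustrative value.
x = 2.5
for n in range(5):
    print(f"|J_{n}({x})| = {abs(jv(n, x)):.3f}")
```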
The highly anisotropic high-$`T_c`$ superconductors (HTSCs) with layered structures, such as Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> (Bi-2212) and Tl<sub>2</sub>Ba<sub>2</sub>Ca<sub>2</sub>Cu<sub>3</sub>O<sub>10+δ</sub> (Tl-2223), can be considered as series arrays of Josephson tunnel junctions along the $`c`$-axis. For Bi-2212 single crystals the superconducting order parameter tends to be localized in $`3`$-Å-thick Cu-O bilayers and the transport along the $`c`$-axis occurs mainly via Josephson tunneling between the neighboring Cu-O bilayers, which are $`12`$ Å apart from each other. The $`c`$-axis dc $`I`$-$`V`$ characteristics of a Bi-2212 single crystal usually show multiple quasiparticle branches with very large hysteresis, where the number of branches corresponds to the number of intrinsic Josephson junctions (IJJs) in the crystal. In spite of the general consensus for dc $`I`$-$`V`$ characteristics, the microwave response of a stack of IJJs in HTSCs is unclear. Applied with a microwave, a stack of IJJs has been observed to exhibit constant voltage steps. However, their values do not satisfy the Josephson frequency-voltage relation and strongly depend on the microwave power. Thus, the authors of Ref. have attributed their results to the microwave-induced phase-locked fluxon motion in a series stack of IJJs, rather than the ac Josephson effect. Although Shapiro steps or zero-crossing steps are observed in the studies, the measured interval between the voltage steps, $`\mathrm{\Delta }V`$, shows a large difference from the expected value $`Nhf/2e`$ and depends sensitively on the frequency and power of the applied microwave. Here, $`N`$ is the total number of IJJs in a measuring stack. This discrepancy is ascribed to other coupling mechanisms which give rise to the phase locking of IJJs in addition to the ac Josephson effect.
In this paper, we report an observation of clear zero-crossing voltage steps in $`I`$-$`V`$ characteristics of stacks of IJJs in Bi-2212 single crystals, irradiated with a microwave of frequency $`f=76`$ and $`94`$ GHz. The voltage steps are believed to be from a single intrinsic Josephson junction formed on the top surface of the stack in contact with a metallic (Au) electrode. The voltage difference between the successive steps coincides with the expected value of $`hf/2e`$. The magnitude of the voltage steps follows the Bessel-function dependence on the applied microwave power up to the order of $`n=4`$. This implies that the observed voltage steps are genuine zero-crossing voltage steps. The critical current of the surface intrinsic Josephson junction is significantly suppressed due to the proximity contact to the normal-metal electrode. This severe reduction of the critical current allows one to isolate the microwave response of the “surface junction” from that of the rest of the “inner junctions” in a stack of Bi-2212 single crystals. From our experimental results we were able to confirm the necessary condition for observing zero-crossing steps in an intrinsic Josephson junction of highly anisotropic high-$`T_c`$ superconductors.
Stacks of IJJs were fabricated on the surface of a Bi-2212 single crystal using photolithography and Ar-ion etching. The Bi-2212 single crystal, grown by standard solid-state-reaction method, was glued onto a MgO substrate using negative photoresist and was cleaved using pieces of Scotch tape until an optically smooth surface appeared. A thin ($``$ 50 nm) Au film was evaporated on top of the crystal immediately after cleaving to protect the crystal surface from any contamination during further fabrication processes. Then a large base mesa ($`450\times 20\times 1.2`$ $`\mu `$m<sup>3</sup>) was formed by photolithography and ion-beam etching, using the beam voltage $`V_{\mathrm{beam}}=300`$ V and the beam current $`I_{\mathrm{beam}}=0.8`$ mA/cm<sup>2</sup>. The ash of photoresist was removed by oxygen plasma etching. To prevent the regions of the specimen other than the top surface of the base mesa from being shorted to the contact electrodes, an insulation layer of photoresist was placed around the base mesa. Then a 400-nm-thick Au film was further evaporated and patterned afterwards by photolithography and ion-beam etching to form electrical extension pads and small stacks ($`18\times 20\times 0.04`$ $`\mu `$m<sup>3</sup>) on top of the base mesa. The lateral dimensions of a small stack were determined by the narrow width of the base mesa and the breadth of the electrode, which also acted as a mask for the fabrication of the small stack. The thickness of the measuring stack, which corresponded to the number of junctions in it, was controlled by adjusting the ion-beam etching time. The fabrication procedure was completed by removing the remnant photoresist by oxygen plasma etching. The heat treatment of the specimen was limited to $`T<`$ 120 <sup>o</sup>C during the entire microfabrication process.
The microwave response of the specimen was measured at $`T=4.2`$ K. The microwave generated by a Gunn diode was transmitted through a waveguide and coupled inductively to the specimen placed at $`\lambda `$/4 distance from the end of the waveguide. The maximum available microwave power was 100 mW for $`f=76`$ GHz and 50 mW for $`f=94`$ GHz. The power coupled to the specimen was tuned by using a level set attenuator.
Transport measurements were carried out using a three-terminal measurement method (see the inset in Fig. 1). Shown in Fig. 1 is a typical $`c`$-axis resistance versus temperature curve, $`R_c(T)`$. The resistance shows a weak semiconducting behavior above $`T_c\simeq 87`$ K, indicating that the crystal is in a slightly overdoped regime. One also notices that the resistance remains finite below $`T_c`$ with a secondary peak appearing far below $`T_c`$. This finite resistance is attributed to a weak intrinsic Josephson junction formed at the surface of a measuring stack in contact with a Au normal-metal electrode. The superconductivity of the topmost Cu-O bilayer of a stack is suppressed by the proximity contact to a normal metal (Au) pad rather than by a degradation effect of the surface layer. Thus, the surface Cu-O bilayer has a superconducting transition temperature $`T_c^{}\simeq 31`$ K far below the bulk $`T_c`$. In the temperature range of $`T_c^{}<T<T_c`$, the surface junction can be considered as a normal metal/insulator/$`d_{x^2-y^2}`$-wave-superconductor (NID) junction consisting of the surface Cu-O bilayer in the normal state and the adjacent inner bilayer in the superconducting state. Thus, $`R_c(T)`$ corresponds to a quasiparticle tunneling resistance of the NID junction with a junction resistance $`R_n^{}=R_c(T_c)=3.9`$ $`\mathrm{\Omega }`$. As the surface junction becomes Josephson coupled below $`T_c^{}`$, $`R_c(T)`$, which is essentially the contact resistance between the Au pad and the topmost Cu-O bilayer, becomes less than 40 m$`\mathrm{\Omega }`$ in our specimen.
Figure 2 shows the $`I`$-$`V`$ characteristics of IJJs in a stack below $`T_c^{}`$ in the absence of an external rf field. With increasing bias current just above the critical current of each intrinsic Josephson junction in a stack, periodic voltage jumps occur in units of $`V_c\simeq 23`$ mV, and the $`I`$-$`V`$ curves show highly hysteretic behavior. Although not apparent in the figure, the number of quasiparticle branches in the $`I`$-$`V`$ characteristics indicates that 28 IJJs are contained in the measuring stack. The average critical current $`I_c`$ is about $`4.5`$ mA and the normal state junction resistance of the inner junctions, estimated from the linear portion of the $`I`$-$`V`$ curves, is $`R_n=0.7`$ $`\mathrm{\Omega }`$. The inset of Fig. 2 shows the enlarged view of the $`I`$-$`V`$ curves in the low bias region. One notices that the weak surface junction shows a much smaller critical current $`I_c^{}\simeq 130`$ $`\mu `$A with clear hysteresis. The reduced critical current of the surface junction, compared to the ones of the inner junctions, is due to the suppressed superconductivity of the surface layer. This result is consistent with the finite-resistance behavior of the $`R_c(T)`$ curve in Fig. 1.
Figure 3(a) shows the $`I`$-$`V`$ characteristics of the specimen with the application of a microwave of frequency $`f=94`$ GHz. Clearly seen are the two steps of height $`\mathrm{\Delta }I_1=120`$ $`\mu `$A. The voltage difference between the two steps is about $`400\pm 20`$ $`\mu `$V in agreement with the expected value of $`\mathrm{\Delta }V=2hf/2e=389`$ $`\mu `$V, implying that these steps are genuine zero-crossing steps corresponding to the step number $`n=\pm 1`$. Due to the weakness of the transmitted microwave power, we could not observe other steps of $`n>3`$ for $`f=94`$ GHz.
Shown in Fig. 3(b) are the $`I`$-$`V`$ characteristics of the same stack at a microwave frequency $`f=76`$ GHz. Compared to the case of $`f=94`$ GHz, the height of the $`n=1`$ step becomes reduced while the steps of higher orders $`n=\pm 2,\pm 3`$ are seen more clearly. The steps of $`n\ge 3`$ do not cross the zero-current line, possibly due to a large leakage current. The remarkably large leakage current for the $`c`$-axis tunneling in Bi-2212 high-$`T_c`$ superconductors is attributed to the existence of the gapless node for the $`d_{x^2-y^2}`$-wave order parameter. By increasing the microwave power, we were able to identify other voltage steps up to the order of $`n=4`$. Further increase of the microwave power caused a noticeable slope in the voltage steps, possibly due to chaotic switching of the surface junction between a Josephson-tunneling state and a resistive one. Once any of the Josephson junctions in the stack becomes resistive, all the steps are bound to exhibit a finite resistive slope, making it difficult to identify the steps at high bias voltages.
Figure 4 shows the measured step heights as a function of the square root of the applied microwave power, $`P^{1/2}`$. Varying the step order from $`n=0`$ to 4, the measured step heights are in qualitative agreement with the relation $`\mathrm{\Delta }I_n=I_c^{}|J_n(I_{ac}f_c^{}/I_c^{}f)|`$, where $`I_c^{}`$ is the critical current of the surface junction at $`T=4.2`$ K, $`J_n`$ the $`n`$-th order Bessel function, $`I_{ac}`$ the applied rf current, and $`f_c^{}=2eI_c^{}R_n^{}/h`$ the characteristic frequency of the surface junction. We obtained the fitting parameter $`I_c^{}`$ (4.2 K) to be $`180`$ $`\mu `$A, which corresponds to a critical current density of 50 A/cm<sup>2</sup>. This value is consistent with the ones observed in the surface junctions of other specimens at 4.2 K. To reveal the Bessel-function behavior, a Josephson junction is required to satisfy the condition $`\mathrm{\Omega }^2\beta =(f/f_p)^2\gg 1`$, where $`\mathrm{\Omega }=f/f_c`$ is the frequency reduced with the characteristic frequency of a junction, $`\beta =2eI_cR_n^2C/\mathrm{\hbar }`$ the hysteresis parameter, and $`f_p=\sqrt{eI_c/\pi hC}`$ the Josephson plasma frequency. $`C`$ is the capacitance of the Josephson junction. This criterion is not satisfied by the inner IJJs in the stack but is satisfied by the surface junction (see Table I). Thus we infer that the observed zero-crossing steps should originate from the weak surface junction. As shown in Table I, specimens used by other groups to study the microwave-induced fluxon motion or collective behavior of the IJJs do not meet the above criterion. Nonetheless, one can notice that the typical parameters of Josephson junctions currently used for Nb-based voltage standards are similar to the ones of the surface junction. Although the observing condition for the zero-crossing steps was originally proposed for Josephson tunnel junctions made of conventional superconductors, our results indicate that the IJJs in HTSCs provide high potential for observing the same phenomenon.
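A minimal sketch of the Bessel-function dependence just described, using the junction parameters quoted in the text ($`I_c^{}`$ and $`R_n^{}`$); the grid of rf-current amplitudes is an illustrative assumption, since the absolute microwave coupling to the junction is not specified:

```python
# Sketch: Shapiro-step heights Delta I_n = I_c' * |J_n(I_ac * f_c' / (I_c' * f))| for the surface junction.
# I_c' and R_n' are the values quoted in the text; the rf-current amplitudes are an assumed grid.
import numpy as np
from scipy.special import jv

h, e = 6.62607015e-34, 1.602176634e-19
Ic = 180e-6                        # critical current of the surface junction at 4.2 K (A)
Rn = 3.9                           # surface-junction resistance (Ohm)
f = 76e9                           # drive frequency (Hz)
fc = 2 * e * Ic * Rn / h           # characteristic frequency f_c' of the surface junction

I_ac = np.linspace(0.0, 10 * Ic, 400)        # assumed range of rf-current amplitudes
for n in range(5):                           # step orders n = 0..4
    dI = Ic * np.abs(jv(n, I_ac * fc / (Ic * f)))
    print(f"n = {n}: maximum step height ~ {dI.max()*1e6:.0f} uA")
```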
The observing condition for the zero-crossing steps can be rewritten as $`\mathrm{\Omega }^2\beta =(\pi h/e)(ϵf^2/dJ_c)\gg 1`$, where $`ϵ`$ is the dielectric constant of the blocking layer between adjacent conducting bilayers, $`d`$ the inter-bilayer distance, and $`J_c`$ the critical current density of the intrinsic Josephson junction. For our specimen in this study, the critical current density of the inner junctions is 24 times larger than that of the surface junction. An inner junction thus has a Josephson plasma frequency $`f_p`$ about five times larger than that of the surface junction. IJJs with larger critical current densities require higher microwave frequencies and higher power to produce stable zero-crossing steps. Reducing the tunneling critical current density is, therefore, required to obtain the stable voltage steps from the inner IJJs in a stack. In addition, to prohibit any nonuniform rf-current flow into the junction, which becomes more probable at higher microwave frequencies, one needs to reduce the junction size. These requirements may be fulfilled with ultra-small IJJs in Bi-2212 single crystals or with IJJs in Bi-2212 single crystals intercalated with guest molecules such as HgI<sub>2</sub> or HgBr<sub>2</sub>.
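To make the comparison concrete, the rewritten criterion can be evaluated numerically. In the sketch below the inter-bilayer distance and the critical current densities are taken from the text, while the relative permittivity of the blocking layer (here $`ϵ`$ is read as the absolute permittivity) is an assumed, illustrative value of order 10:

```python
# Sketch: evaluate Omega^2*beta = (pi*h/e) * eps * f^2 / (d * J_c) for the surface and inner junctions.
# eps_r ~ 10 for the blocking layer is an assumption; d and the J_c values are taken from the text.
import math

h, e, eps0 = 6.62607015e-34, 1.602176634e-19, 8.8541878128e-12
eps_r = 10.0                   # assumed relative permittivity of the blocking layer
d = 12e-10                     # inter-bilayer distance, ~12 Angstrom (m)
f = 76e9                       # drive frequency (Hz)

for label, Jc_cm2 in (("surface junction", 50.0), ("inner junctions", 24 * 50.0)):
    Jc = Jc_cm2 * 1e4          # convert A/cm^2 -> A/m^2
    crit = (math.pi * h / e) * eps_r * eps0 * f**2 / (d * Jc)
    print(f"{label}: Omega^2*beta ~ {crit:.2f}")
```

With these assumed numbers the surface junction comes out an order of magnitude above unity while the inner junctions do not, consistent with the Table I comparison.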
In summary, we have studied the inverse ac Josephson effect from an intrinsic Josephson junction located on the surface of Bi-2212 single crystals irradiated with external microwaves of $`f=76`$ and $`94`$ GHz. The weak surface Josephson junction shows clear voltage steps satisfying the ac Josephson relation, and the step heights follow the Bessel-function behavior with increasing microwave power, up to the step number $`n=4`$. Our results indicate that the intrinsic Josephson junctions in highly anisotropic HTSCs with very low tunneling critical current density may be a promising candidate for the observation of zero-crossing steps.
This work was supported in part by KRISS Project No. 98-0502-102. This work was also supported by BSRI under Contract No. 1NH9825602, MARC under Contract No. 1MC9801301, and POSTECH under Contract No. 1UD9900801.
# Model for crystallization kinetics: Deviations from Kolmogorov-Johnson-Mehl-Avrami kinetics
## Abstract
We propose a simple and versatile model to understand the deviations from the well-known Kolmogorov-Johnson-Mehl-Avrami kinetics theory found in metal recrystallization and amorphous semiconductor crystallization. We analyze the kinetics of the transformation and the grain size distribution of the product material, finding a good overall agreement between our model and available experimental data. The information so obtained could help to relate the mentioned experimental deviations to preexisting anisotropy in some regions, to a certain degree of crystallinity of the amorphous phase during deposition, or, more generally, to impurities or roughness of the substrate.
The interest in thin film transistors made of polycrystalline silicon and silicon-germanium has been driven by the technological development of active matrix-addressed flat-panel displays and thin film solar cells . In this context, the capability to engineer the size and geometry of grains becomes crucial to design materials with the required properties. Crystallization of these materials takes place by nucleation and growth mechanisms: Nucleation starts with the appearance of small atom clusters (embryos). At a certain fixed temperature, embryos with sizes greater than a critical one become stable nuclei; otherwise, they shrink and eventually they vanish. Such a critical radius arises from the competition between surface tension and free energy density difference between amorphous and crystalline phases (which favours the increasing of grain volume) yielding an energy barrier that has to be overcome to build up a critical nucleus. Surviving nuclei grow by incorporation of neighboring atoms, yielding a moving boundary with temperature dependent velocity that gradually covers the untransformed phase. Growth ceases when growing grains impinge upon each other, forming a grain boundary. The final product consists of regions separated by grain boundaries. This simple picture has, however, two problems: On the one hand, this theory of nucleation and growth predicts an energy barrier far from the experimental value so nucleation would hardly be probable at available annealing temperatures . On the other hand, it is known that in crystallization of Si over SiO<sub>2</sub> substrates, nucleation develops in the Si/SiO<sub>2</sub> interface due to inhomogeneities or impurities that catalyze the transformation . Therefore, a theory of homogeneous nucleation and growth is not entirely applicable to the referred experiments.
The transformation kinetics is also problematic. It is generally accepted that the fraction of transformed material during crystallization, $`X(t)`$, obeys the Kolmogorov-Johnson-Mehl-Avrami (KJMA) model, according to which $`X(t)=1-\mathrm{exp}(-At^m)`$, where $`A`$ is a nucleation- and growth-rate dependent constant and $`m`$ is an exponent characteristic of the experimental conditions. Two well-defined limits have been extensively discussed in the literature: When all the nuclei are present and begin to grow at the beginning of the transformation, the KJMA exponent, $`m`$, is equal to $`2`$ (in two dimensions), and the nucleation is termed site saturation. The product microstructure is tessellated by the so-called Voronoi polygons (or Wigner-Seitz cells). On the contrary, when new nuclei appear at every step of the transformation, $`m=3`$ and the process is named continuous nucleation. Plots of $`\mathrm{log}[-\mathrm{log}(1-X)]`$ against $`\mathrm{log}(t)`$ should be straight lines of slope $`m`$, called KJMA plots. The validity of the KJMA theory has been questioned in the last few years, and subsequently several papers have been devoted to checking it in different ways. However, those theoretical results still leave some open questions: For example, an exponent between $`2`$ and $`3`$ is experimentally obtained in two dimensions, the KJMA plots from experimental data do not fit a straight line in some cases, and the connection between geometrical properties (grain size distributions) and the KJMA exponent is not clear.
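As an illustration of how the exponent is extracted in practice (this sketch is not from the paper; the rate constant and time grid are arbitrary), synthetic transformation curves for the two limiting cases can be generated and the slope of the KJMA plot recovered:

```python
# Sketch: build synthetic KJMA curves X(t) = 1 - exp(-A t^m) and read m off the KJMA plot.
import numpy as np

A = 0.5                                    # arbitrary rate constant
t = np.logspace(-1, 1, 200)
for m_true, label in ((2, "site saturation (2D)"), (3, "continuous nucleation (2D)")):
    X = 1.0 - np.exp(-A * t**m_true)
    mask = (X > 1e-4) & (X < 1 - 1e-4)     # keep points where log[-log(1-X)] is numerically well defined
    slope, _ = np.polyfit(np.log(t[mask]), np.log(-np.log(1.0 - X[mask])), 1)
    print(f"{label}: fitted KJMA exponent m = {slope:.2f} (true value {m_true})")
```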
In this letter, we show that these questions may be answered by assuming that nucleation is heterogeneous, not in a phenomenological way as in other proposed models , but sticking to the basic ideas due to Cahn and Beck : The material is not perfectly homogeneous but contains regions with some extra energy (regions with some order produced during deposition, or substrate impurities) at which nucleation is more probable. Accordingly, we introduce a computational model consisting of several simple irreversible rules, with the additional advantage that it describes simultaneously space and time evolution. Furthermore, it allows us to average over a large number of realizations in very short computational times as compared to other computer models (see the recent review by Rollett for an overview of simulation models of recrystallization).
The model is defined on a two-dimensional lattice (square and triangular lattices were employed) with periodic boundary conditions. Every lattice site (or node) $`𝐱`$ belongs to a certain grain or state, $`q(𝐱,t)=0,1,2,`$…, the state $`0`$ being that of an untransformed region. The lattice spacing is a typical length scale related to the available experimental resolution. Following the idea that the amorphous phase has random regions in which nucleation is favored, we choose a fraction $`c`$ of the total lattice sites to be able to nucleate. We term these energetically favorable sites potential nuclei. These potential sites may be interpreted as random sites on a region where order is present, not just an isolated critical cluster. Initially $`q(𝐱,0)=0`$ for all lattice sites $`𝐱`$ and the system evolves by parallel updating according to the following rules: i) A transformed site remains in the same state \[$`q(x,t+\mathrm{\Delta }t)=q(x,t)\ne 0`$\]; ii) An untransformed potential site may nucleate a new, previously non-existing state (i.e., it crystallizes) with probability $`n`$ (nucleation probability), if and only if there are no transformed nearest neighbors around it; iii) An untransformed site (including potential sites) transforms to an already existing transformed state with probability $`g`$ (growth probability), if and only if there is at least one transformed site in its neighborhood. The new state is randomly chosen among the neighboring grain states.
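These rules translate almost directly into code. The following is a minimal sketch on a square lattice (the lattice size and the values of $`c`$, $`n`$ and $`g`$ are illustrative, not the production parameters, and details such as tie-breaking between simultaneous events are simplified):

```python
# Minimal sketch of the cellular-automaton rules i)-iii) on a periodic square lattice.
import numpy as np

rng = np.random.default_rng(0)
L, c, n, g = 100, 0.01, 0.1, 1.0              # illustrative lattice size and probabilities
q = np.zeros((L, L), dtype=int)               # 0 = untransformed, >0 = grain label
potential = rng.random((L, L)) < c            # energetically favorable ("potential") sites
neigh = [(1, 0), (-1, 0), (0, 1), (0, -1)]
next_label = 1

def sweep(q, next_label):
    """One parallel update of the whole lattice (reads q, writes new_q)."""
    new_q = q.copy()
    nucleate = rng.random((L, L)) < n
    grow = rng.random((L, L)) < g
    for i in range(L):
        for j in range(L):
            if q[i, j] != 0:                                   # rule i): transformed sites are frozen
                continue
            states = [q[(i + di) % L, (j + dj) % L] for di, dj in neigh]
            transformed = [s for s in states if s != 0]
            if not transformed and potential[i, j] and nucleate[i, j]:
                new_q[i, j] = next_label                        # rule ii): nucleate a brand-new grain
                next_label += 1
            elif transformed and grow[i, j]:
                new_q[i, j] = rng.choice(transformed)           # rule iii): grow from a random neighbor grain
    return new_q, next_label

X = []
for _ in range(10000):
    if not (q == 0).any():
        break
    q, next_label = sweep(q, next_label)
    X.append(1.0 - (q == 0).mean())                             # transformed fraction X(t)
print(f"X = {X[-1]:.3f} after {len(X)} sweeps; {next_label - 1} grains nucleated")
```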
For the model parameters, we expect a functional form $`n\propto e^{-E_n/k_BT}`$ and $`g\propto e^{-E_g/k_BT}`$, where $`E_n`$ and $`E_g`$ are the energy barriers of nucleation and growth, respectively. Hence, temperature is implicit in the definition of $`n`$ and $`g`$. Figure 1 shows the microstructure at two different stages for two different sets of parameters. As we are interested in this paper in how different nucleation conditions yield different KJMA exponents and different microstructures for isothermal experiments, we define a characteristic time $`\tau `$ as the time that a grain needs to increase its size by one lattice site, and consequently we can put $`g=1`$. The simulation time step is therefore this characteristic time $`\tau `$.
We have simulated $`1000\times 1000`$ triangular and square lattices and averaged the outcome of $`50`$ different realizations for each choice of parameters (characteristic simulation times are about $`15`$ to $`45`$ minutes on a Pentium II personal computer). The main results are the following: If $`c\simeq 1`$, then most sites are potential sites, so new grains are able to nucleate at every stage of the transformation (continuous nucleation). On the contrary, when $`c\ll 1`$ and $`n\simeq 1`$, every potential site nucleates at the early stages of the process (site saturation). Obviously, intermediate values yield a mixed behaviour. Interestingly, the model parameters tune the KJMA exponent between $`2`$ and $`3`$. It is important to note that for small values of $`c`$, which would in principle mean that growth is by site saturation, low values of $`n`$ (large energy barriers for nucleation) lead to $`m\simeq 3`$, as in continuous nucleation.
Other forms of experimental behavior lead to the occurrence of non-straight KJMA plots. We argue that this fact may be due, on the one hand, to the decay of the nucleation rate when $`n\ll 1`$, because some potential sites are overlapped by already growing grains; and on the other hand, when the potential site concentration is $`c\ll 1`$, the grains grow independently for times lower than a characteristic impingement time, proportional to the mean grain distance $`1/c^{1/2}`$. Figure 2 shows this fact for several choices of parameters $`n`$ and $`c`$. Note that when $`n\simeq 1`$, the potential sites nucleate during the earlier stages of the transformation, so the mentioned overlapping of potential sites cannot be the cause of the bending of the KJMA plots. Therefore, we must conclude that heterogeneous nucleation is not the unique cause of the unexpected bending of the KJMA plots, as $`m`$ may be affected by anisotropies or preferential crystalline directions yielding growth or nucleation rates that may change locally throughout the material. This agrees with the fact that $`m`$ is not a reliable guide to characterize the morphology of the evolving grains.
As we have pointed out, our model provides information about microstructure, i.e., number of grains, mean grain area, grain size distribution, and so on. For site saturation, Weire et al. proposed a phenomenological expression for grain size distributions: $`P(A^{})=(A^{})^{\alpha -1}\alpha ^\alpha e^{-\alpha A^{}}/\mathrm{\Gamma }(\alpha )`$, where $`\alpha \simeq 3.65`$ and $`A^{}=A/\overline{A}`$ is the reduced area. The mean area $`\overline{A}`$ changes from one process to another, but the normalized distribution is the same for all. Analogously, in the case of continuous nucleation, a simple expression has been proposed: $`P(A^{})=e^{-A^{}}`$. Figure 3 shows the good agreement between the simulations of our model and these theoretical predictions. For intermediate-ranging parameters, a continuous evolution is obtained from site saturation to continuous nucleation grain size distributions. We thus have two elements of comparison between our model and experimental results: the KJMA exponent, $`m`$, and the grain size distribution $`P(A^{})`$.
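The two limiting distributions are easy to tabulate side by side; the short sketch below (not from the paper) evaluates both normalized forms at a few reduced areas:

```python
# Sketch: the two limiting normalized grain-area distributions quoted above.
import math

alpha = 3.65

def p_site_saturation(a):        # gamma-type law for site saturation (Voronoi-like mosaics)
    return a**(alpha - 1) * alpha**alpha * math.exp(-alpha * a) / math.gamma(alpha)

def p_continuous(a):             # exponential law for continuous nucleation
    return math.exp(-a)

for a in (0.25, 0.5, 1.0, 2.0, 3.0):      # reduced area A' = A / <A>
    print(f"A' = {a:4.2f}: site saturation {p_site_saturation(a):.3f}, continuous {p_continuous(a):.3f}")
```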
In conclusion, we have presented a simple lattice model for crystallization which sheds light on the possible causes of the experimental deviations from the KJMA theory. Thus, preexisting inhomogeneities in the initial state, such as regions with a lesser degree of disorder or impurities, dramatically change the product structure and the time development of the crystalline phase. One of the remarkable points of our model is its versatility, so other ingredients can simply be added to the model rules. We postpone the detailed study of heterogeneous growth or preferential directions to further research. The main conclusion of this work is that the KJMA exponent is not enough to understand and to characterize the crystallization mode in a specific experiment: Indeed, we have shown that conditions close to site saturation and continuous nucleation give rise to very similar values of $`m`$. Therefore, studies of the grain size distribution are indispensable to identify correctly the crystallization mode. We stress that the model rules are physically meaningful (alternative proposals can be found in Ref. , but are far from being physical because they depend strongly on the lattice geometry and the site interactions), and lead to experimentally verifiable predictions. Due to its versatility and short simulation times, it is an easy-to-reproduce, good, and inexpensive testbed for the design of materials and structures with tailored grain size or shape properties.
The authors wish to thank A. Rodríguez, J. Olivares, and C. Ballesteros for stimulating discussion on experimental issues, and E. Maciá for helpful comments. This work has been supported by CAM under project 07N/0034/98 (M. C. and F. D.-A.), by DGES under project PB96-0119 (A. S.), and by MAT96-0438 (T. R.).
# The Role of a Massive Central Singularity in Galactic Mergers on the Survival of the Core Fundamental Plane.
## 1 Introduction
Although elliptical galaxies may evolve passively, there is considerable evidence that at least some ellipticals evolve by merging. For example, counterrotating cores can be explained by the recent accretion of a spinning galaxy, especially when accompanied by a secondary starburst (Kormendy, 1984; Franx & Illingworth, 1988; Carollo et al, 1997). Mergers are also thought to be responsible for the apparent bimodality of globular cluster populations (Kissler-Patig et al, 1998; Whitmore, 1997; Ashman & Zepf, 1992), and for structural changes like multiple nuclei, dust lanes, circumnuclear shells, and boxy isophotes (Malin & Carter, 1983; Schweizer, 1982; Seitzer & Schweizer, 1990; Forbes & Thomson, 1992).
To the extent that elliptical galaxies evolve by satellite accretion, it is difficult for a large, gas-poor elliptical galaxy to preserve a low density core, because the accretion of a small, high density satellite will steepen the inner density profile of the merger remnant (Faber et al, 1997), provided the secondary survives. This is the paradox of the core Fundamental Plane (cFP). The cFP demonstrates that elliptical galaxy centers maintain a tight relationship between projected central density and luminosity, a relationship foreshadowed independently by Lauer and Kormendy 15 years ago (Kormendy, 1985; Lauer, 1985). If, however, a secondary survives in a high density ratio merger (that is: if the central phase space distribution of the secondary and its debris has not dispersed compared to its original state), then the merger remnant will not lie on this plane. So, if even a fraction of large ellipticals accrete dense cFP secondaries, we would expect considerable scatter at the bright end of the cFP.
A massive black hole at the center of a large galaxy may preserve the cFP by tidally disrupting dense secondaries in the accretion process. Observations are beginning to show that massive central black holes are a natural part of galaxy centers (Richstone et al, 1998), and as a consequence, their effects ought to be included in calculations of galaxy mergers. Current dynamical estimates of the best galactic black hole candidates have yielded masses on the order of $`0.005M_{\mathrm{bulge}}`$ (Magorrian et al, 1998; Kormendy & Richstone, 1995). Such massive black holes dominate the galactic potential inside the cusp radius, $`r_{\mathrm{cusp}}\equiv GM_{}/\sigma _{\mathrm{bulge}}^2`$, where $`M_{}`$ is the black hole mass, and $`\sigma _{\mathrm{bulge}}`$ is the velocity dispersion of the bulge. This cusp radius can be on the order of a kiloparsec for the largest ellipticals, which is not a small fraction of the core radius and can, in some cases, be resolved.
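A rough numerical check of that scale (the bulge masses and dispersions below are illustrative choices, not values from the paper):

```python
# Sketch: cusp radius r_cusp = G*M_bh / sigma_bulge^2 for a few illustrative bulges,
# with M_bh = 0.005 * M_bulge as in the scaling quoted above.
G = 4.301e-6                    # G in kpc (km/s)^2 / M_sun

for M_bulge, sigma in ((1e11, 200.0), (1e12, 300.0)):   # bulge mass (M_sun), dispersion (km/s)
    M_bh = 0.005 * M_bulge
    r_cusp = G * M_bh / sigma**2                         # in kpc
    print(f"M_bulge = {M_bulge:.0e} Msun: r_cusp ~ {r_cusp*1e3:.0f} pc")
```

For the most massive of these illustrative bulges the radius reaches hundreds of parsecs, approaching the kiloparsec scale quoted above for the largest systems.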
Despite the importance of black holes to elliptical galaxy centers, simulating the effects of a massive black hole on the stellar distribution unfortunately presents a numerical challenge. Stellar velocities increase as $`r^{-1/2}`$ near the black hole, and the tiny timestep required to accurately track these stars is prohibitively expensive. In addition, the steep gradient in the potential near a black hole must be well resolved, and this typically requires a very large particle number (see however Sigurdsson et al, 1995). Nonetheless, there has been significant work done both on developing realistic galaxy models with central singularities (Merritt & Quinlan, 1998; Sigurdsson et al, 1995), and on simulating the effect that black hole binaries have on the host galaxy (Makino & Ebisuzaki, 1996; Governato et al, 1994). In addition, the analysis of an ensemble of individual stellar orbits within a black hole embedded galaxy may indicate the destabilizing influence of a black hole on a galaxy’s orbital structure (Merritt & Valluri, 1998).
In a previous paper, we showed that for purely stellar cFP galaxies, a secondary survives any merger in which it is more dense than the primary (Holley-Bockelmann & Richstone, 1999, hereafter paper 1). In this paper, we isolate the effect of a single massive black hole in these encounters by adding a black hole to the primary galaxy. As in paper 1, the primary is rigid, so the addition of a black hole can be modeled as a external force on the secondary. While the secondary is separated from the black hole (or if the encounter is impulsive), there is no need to invoke a tiny timestep or an increased particle number in the secondary. This approach is an efficient way to determine whether massive central black holes can destroy a dense satellite during a merger. In this paper, we apply the method developed to problem of secondary destruction. We begin with a review of our approximation method in $`\mathrm{\S }`$ 2. For details of the technique, in particular for tests of both the rigid primary approximation and our particle-field code, please refer to paper 1. The tests of the black hole embedded method and results of our simulations can be found in $`\mathrm{\S }`$ 3. Section 4 discusses the implications of the results on the persistence of the cFP and previews future work.
## 2 Methods
### 2.1 The Galaxy Models
We used the same technique as in paper 1 for choosing initial conditions, and for defining, modeling and populating galaxies on the cFP. See Table 1 for galaxy parameters and black hole masses. Our galaxies were initially composed of 5000 particles distributed over both the core and the envelope of our galaxies. For each mass ratio, a test merger was run, and the particle loss of the envelope was analyzed. To achieve better central resolution, we conducted the merger again, this time assigning 5000 particles to only the central regions of the secondary under the assumption that the galaxy envelope behaved in the same manner as in our test run. Table 2 presents the spatial resolution for our double and single component galaxy models. We followed only the particles that were bound to the secondary, but we preserved the phase space, energy, and angular momentum information of the unbound particles at the time they were stripped from the secondary. We will concentrate on the better resolved results for the 5000 particle single component galaxies, also referred to as inner $`\eta `$ models.
### 2.2 The Force on the Secondary
Since the addition of a central black hole introduces a force that does not tend smoothly to zero at the center of the primary, we chose not to use a tidal approximation of the external force, as we did in paper 1. Instead, we advanced the particles in the inertial frame, where the force on a secondary particle is:
$$\stackrel{}{F}_{\mathrm{tot}}(\stackrel{}{R})=\stackrel{}{F}_2(\stackrel{}{r})+\stackrel{}{F}_1(\stackrel{}{R})^{}+\stackrel{}{F}_{\mathrm{fric}}(\stackrel{}{R})+\stackrel{}{F}_{}(\stackrel{}{R}),$$
(1)
where $`\stackrel{}{F}_2`$ is the self gravity of the secondary, $`\stackrel{}{F}_1^{}`$ is the force on a secondary particle due to the stars in the primary, $`\stackrel{}{F}_{\mathrm{fric}}`$ is the force due to dynamical friction, $`\stackrel{}{r}`$ is the vector which points from the secondary center to a secondary particle, $`\stackrel{}{R}`$ is the vector which points from the primary center to the secondary particle, and $`\stackrel{}{F}_{}`$ is the force due to the black hole, expressed as:
$$\stackrel{}{F}_{}(\stackrel{}{R})=-\frac{GM_{}m_\mathrm{p}}{(R+ϵ)^3}\stackrel{}{R},$$
(2)
where $`ϵ`$ is a softening parameter which we chose to be close to the core radius of the secondary, and $`M_{}`$ was chosen to be consistent with the Kormendy & Richstone relation (1995). Our softening parameter is much larger than the spatial resolution in our inner $`\eta `$ models, but we chose this larger $`ϵ`$ because it was consistent with the spatial resolution in the double $`\eta `$ model galaxies here and in paper 1, and we wanted to be sure that we were isolating the effect of the black hole when we compared our results to our previous experiments. With this degree of softening, the experiments are designed to represent a lower limit to the damage done to dense secondaries.
### 2.3 The Dynamical Friction on a Secondary Particle
We apportioned the total dynamical friction force, $`F_{\mathrm{fric}}`$, equally to each secondary particle (see appendix A). The frictional acceleration applied to each particle in the secondary galaxy, $`d\stackrel{}{v}_f/dt`$, is derived from the Chandrasekhar formula and is a function of a secondary’s position in the primary galaxy:
$$\frac{d\stackrel{}{v}_f(\stackrel{}{R})}{dt}=-f_{\mathrm{drag}}\,4\pi \mathrm{ln}\mathrm{\Lambda }\,G^2\rho _1m_2(t)\left[\mathrm{erf}(X)-\frac{2X}{\sqrt{\pi }}e^{-X^2}\right]\frac{\stackrel{}{v}_2}{v_2^3},$$
(3)
where $`\stackrel{}{v}_2`$ is the velocity of the secondary, $`X\equiv v_2/(\sqrt{2}\sigma )`$, $`\sigma =\sqrt{0.4GM_1/r_{1,\mathrm{eff}}}`$, $`\mathrm{\Lambda }`$ is the Coulomb logarithm which was set to $`M_1/M_2`$, and $`f_{\mathrm{drag}}`$ is a drag coefficient, as explained in paper 1. We allowed the total mass of the secondary galaxy, $`m_2(t)`$, to vary as mass is lost in the merger.
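The frictional deceleration is straightforward to encode; the sketch below evaluates the magnitude of equation (3) for illustrative values (the density, masses, and velocities are not the paper's simulation parameters, and the drag coefficient is set to one):

```python
# Sketch: magnitude of the Chandrasekhar-type deceleration in eq. (3).
# Units: G in kpc (km/s)^2/Msun, rho in Msun/kpc^3, masses in Msun, velocities in km/s;
# the result is then in (km/s)^2 per kpc, roughly km/s per Gyr.
import math

def friction(v2, rho1, m2, sigma, lnLambda, f_drag=1.0, G=4.301e-6):
    X = v2 / (math.sqrt(2.0) * sigma)
    bracket = math.erf(X) - (2.0 * X / math.sqrt(math.pi)) * math.exp(-X**2)
    return f_drag * 4.0 * math.pi * lnLambda * G**2 * rho1 * m2 * bracket / v2**2

a = friction(v2=300.0, rho1=1e7, m2=1e10, sigma=250.0, lnLambda=math.log(10.0))
print(f"|dv_f/dt| ~ {a:.1f} (km/s)^2 / kpc")
```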
Mass lost by the secondary will decrease the magnitude of the dynamical friction force, and will change the orbital decay trajectory such that the secondary experiences more pericenter passes. Analysis of the purely stellar simulations from paper 1 indicates that the secondary was stripped at each pericenter pass down to its tidal radius, $`r_{\mathrm{tide}}^3\approx (M_2/M_1)D^3`$. We incorporated this knowledge into our set of black hole simulations. We initially set the mass of the secondary in $`a_{\mathrm{fric}}`$ equal to the total secondary mass, and when the secondary encountered a pericenter pass that was within the core radius of the primary, we reset the total mass to the mass enclosed by the tidal radius. (Footnote 1: We did not include secondary mass loss in the orbital decay calculations in paper 1. Therefore, we simulated the purely stellar 10:1 merger from paper 1 again with an orbital trajectory that included mass loss in the dynamical friction term as described above. Since the secondary remained intact, we can be certain that it was not the change in the orbital decay that destroys the secondary in our black hole experiments (figure 1).)
This two-part mass loss scenario was selected for reasons of computational speed, since a continuous mass loss term in the orbital decay would result in a larger number of orbits that are far from the damaging black hole potential. These large apocenter orbits take a long time to integrate, and most of the integration is spent following a secondary that is too far from the center of the primary to feel a significant external force. While this scheme underestimates the change in the dynamical friction force, resulting in fewer pericenter passes than in reality, it is useful as a limiting experiment: if the secondary breaks up after a few pericenter passes, it would certainly break up after more.
## 3 Results
As in paper 1, we explore a 2-dimensional grid of parameter space. In the first dimension, we vary the mass ratio, and in the second dimension, we vary the initial orbital angular momentum of the secondary. In this section, the results are organized in groups of differing angular momentum. Since our goal is to investigate the effect a single massive black hole has on the breakup or survival of dense secondaries in mergers with primaries, we first sought to duplicate the plunging encounters we explored in paper 1. In this way, we can isolate the damage generated by the black hole. For a summary of the different experiments conducted, see table 3.
### 3.1 Plunging Orbits
Anticipating that the most damaging effect would be due to tidal forces experienced as the secondary passes through the center of the primary, we launched a series of nearly parabolic, plunging orbits. In this basic set of experiments, we investigated two mass ratios: 100:1, 10:1. These mass ratios correspond to density ratios of approximately 1:830 and 1:105 at a radius of 0.1 pc, and the density ratio rapidly increases at smaller radii. We focus on the 10:1 simulation. Figure 2 shows the time evolution of the secondary in the force field of the primary. Here, we learn that most of the mass is stripped at pericenter, and comes off impulsively in a cloud which continues to expand as the secondary crosses the primary center again. However, it is apparent that the impulsive injection of energy into the secondary by the black hole is not enough to disrupt the secondary after the first pericenter pass. From the impulse approximation, the first order change in energy of the secondary due to the black hole,
$$\mathrm{\Delta }E/E\sim (M_{}/M_2)(\sigma _2/v_{\mathrm{orb}})(R_{\mathrm{core}}/P),$$
(4)
is of order $`10^{-2}`$ on the first pass. Instead, the secondary remained intact inside the tidal radius, $`r_{\mathrm{tide}}^3\approx (M_2/M_1)S^3`$, for approximately 5 more pericenter passes, until enough energy was pumped into the secondary to unbind it. In figure 3, we illustrate the change in the secondary energy from beginning to end for this merger. The stripped mass is still bound to the primary and is spread out over a volume of space with a radius of approximately 250 parsecs from the black hole, or roughly the second apocenter distance. With the secondary disrupted over so large a volume, the merger remnant will remain on the cFP. Figure 4 illustrates the change in the secondary density profile for the 10:1 mass ratio. For the 100:1 mass ratio, the secondary density profile and snapshots of the secondary during the merger are presented in figures 5 and 6, respectively. To ensure that the secondary’s disruption was not an error caused by pericenter passes directly through the black hole, we also conducted a $`\kappa =0.05`$, 100:1 experiment (figure 7), in which the first pericenter pass was approximately 400 parsecs from the black hole, on the order of the core radius of the primary. In all three experiments, the secondary was destroyed. This is a markedly different result from the purely stellar case where the secondary remained intact, and indicates the importance of massive black holes as a source of impulsive energy during mergers.
### 3.2 Small Black Hole Mass
To confirm that the black hole is the direct cause of the secondary’s destruction, we launched a zero angular momentum 10:1 mass ratio encounter in which the primary was host to a black hole with about $`0.005\%`$ of the mass of the secondary, or nearly $`9\times 10^6M_{\odot }`$. In this case, the central region of the secondary was preserved (figure 8). We interpret this result as evidence that our method can detect a secondary’s disruption yet does not force disruption erroneously through the simple existence of a singularity. It is conceivable that the cFP can put a constraint on the mass of central black holes, independent of AGN light predictions; clearly, when the black hole mass is down by a factor of over 300 from the ridgeline, as it is in this experiment, the mass is not sufficient to destroy a secondary. Faber et al, 1997 presents a similar argument that the central black hole mass is constrained by the mass of the stellar core profile that forms around a binary black hole pair.
### 3.3 Non-radial Secondary Orbits
#### 3.3.1 Secondary Destruction
To explore the effect that different orbits have on the preservation of the cFP, we launched secondaries in the 10:1 and 100:1 mass ratios on orbits with significant angular momentum. Our orbits are parameterized by $`\kappa \equiv L/L_{\mathrm{circ}}`$. In the 10:1 mass ratio, we selected $`\kappa =0.2,0.8`$, and for the 100:1 case, we chose $`\kappa =0.5`$. Figure 9 displays an xy plane projection of the first few orbits in each of these trajectories.
In each of these experiments, the secondary was stripped to the tidal radius at the last pericenter of the the orbit. However, since the secondary is far from the black hole on all but the last few pericenter passes, this tidal radius is quite large for most of the decay trajectory. The secondary’s core is therefore intact through all but the last passes, since the density at the secondary core is clearly greater than the stellar component of the primary everywhere. In the final passes, the black hole can exert significant tidal forces on the secondary, and the secondary is actually tidally compressed in two dimensions, which increases its central density profile briefly in projection.
On any given orbit, the secondary will eventually reach a place in the merger where the force exerted by the black hole is enough to overpower the secondary self gravity and it is destroyed. Roughly speaking, this occurs the first time the galaxy passes through a region where:
$$\frac{M_{}}{P^3}>\frac{M_2}{r_2^3},$$
(5)
where P is the distance of a pericenter pass, $`M_{}`$ is the black hole mass, and $`r_2`$ is the core radius of the secondary. Strictly speaking, all of our non-radial mergers resulted in the destruction of the secondary. However, it is insufficient to equate secondary damage with success in protecting the cFP. While it is true that the core Fundamental Plane requires the disruption of dense secondaries in mergers, it is $`not`$ true that the mere disruption of a secondary necessarily results in a remnant that lies on the cFP. At disruption, the particles inherit the orbital energy of the secondary, and for the particles to be dispersed, the disruption must occur while there is still enough orbital energy to carry the debris out to a large apocenter so that the density of the debris is reduced to the density of the remnant. Otherwise, the remnant has too steep a central density profile to remain on the cFP. This condition can be stated as a function of the first post-disruption apocenter:
$$\frac{M_1}{r_1^3}\gtrsim \frac{M_2}{A^3},$$
(6)
where $`r_1`$ is the core radius of the primary and A is the first post disruption apocenter. For the purposes of this paper, we define destruction to occur only when the debris is spread over a large enough apocenter to result in a significantly lower density profile. Likewise, we define the disruptions that occur only when the merger is effectively complete to be survivals.
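These two conditions combine into a simple bookkeeping test; the sketch below applies them to illustrative numbers (the masses, radii, pericenters, and apocenters are made up for the example and are not the paper's simulation values):

```python
# Sketch: apply the disruption condition (eq. 5) and the dispersal requirement (eq. 6).
# All quantities are in arbitrary consistent units; the sample values are illustrative only.

def disrupted(M_bh, P, M2, r2):
    """Eq. (5): the black hole tidal density at pericenter P exceeds the secondary's mean core density."""
    return M_bh / P**3 > M2 / r2**3

def dispersed(M1, r1, M2, A):
    """Eq. (6): debris spread to the first post-disruption apocenter A is no denser than the primary core."""
    return M2 / A**3 <= M1 / r1**3

M1, r1 = 1.0, 1.0              # primary mass and core radius (illustrative)
M2, r2 = 0.1, 0.3              # secondary mass and core radius
M_bh = 0.005 * M1              # central black hole mass, using the 0.005 scaling quoted earlier

for P, A in ((0.05, 0.8), (0.2, 0.3)):
    d = disrupted(M_bh, P, M2, r2)
    print(f"P = {P}, A = {A}: disrupted = {d}, counts as destroyed = {d and dispersed(M1, r1, M2, A)}")
```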
Under this definition of destruction, the secondaries in the 10:1 $`\kappa =0.2`$ and $`\kappa =0.8`$ survived. See figures 10 and 11 for the change in the density profile of the secondary for the 10:1, $`\kappa =0.2`$ and 10:1, $`\kappa =0.8`$ experiments, respectively. The secondary in the 100:1, $`\kappa =0.5`$ merger, however, was destroyed. Figure 12 shows the density profile for the 100:1, $`\kappa =0.5`$ encounter. In fact, from our derived destruction criterion, we predict that for these galaxy parameters and our choice of dynamical friction, $`all`$ orbits in the 100:1 mass ratio will result in destruction.
We caution that the distance of the first post-disruption apocenter depends critically on the magnitude of the dynamical friction force at pericenter, which itself depends on the shape of the primary density profile and the mass of the secondary. For larger mass ratios (i.e., smaller secondary mass), since the dynamical friction is weaker, the orbit retains considerable orbital energy, and therefore experiences larger apocenters. Hence it is easier for the 100:1 case to disrupt than the 10:1 case. Similarly, a flat central primary density profile produces less dynamical friction at pericenter than a primary with a steep central density. Therefore, an $`\eta =3.0`$ primary is more likely to disrupt a secondary than an $`\eta =1.0`$ primary for a given mass ratio. With these conclusions, we numerically integrated several orbits with various mass ratios and primary density profiles and applied the disruption criteria in equation 5 (figure 13). From these results, we predict that if we were to run our 10:1, $`\kappa =0.5`$ experiment with an $`\eta =3.0`$ primary, the secondary would be $`destroyed`$, due mostly to the flatness of the central primary density.
Although our results are not general, in that they appear to depend on the choice of the primary, we note the following important result: these experiments have produced a set of mergers that were not destroyed by the addition of a massive central black hole. Black holes, then, do not universally protect the cFP. Additionally, we discovered that the preservation of the cFP depends critically on the character of the orbit at the end of the merger.
#### 3.3.2 Disk Formation
For non-radial mergers, the final part of the orbit produces interesting qualitative changes in the secondary as well. We define $`P_2`$ to be a pericenter pass that occurs before the secondary is disrupted. If $`P_2`$ is on the order of the core radius of the secondary, as was the case for the 10:1, $`\kappa =0.2`$ and $`\kappa =0.8`$ experiments, then the secondary is tidally torqued into a spinning disk with a radius of $`P_2`$ (orbital angular momentum is transferred into spinning up the secondary). A sense of the spin and a suggestion that the secondary is tidally torqued in the $`\kappa =0.8`$ experiment may be seen in figure 14. Figure 15 shows the flattening induced, in part, by the spin in the $`\kappa =0.2`$ encounter, and figure 16 illustrates the increase in the secondary spin for this experiment. Since the stellar disk begins to spin as it is still in the final orbits, it can be displaced from the primary center for a short time. For our $`\kappa =0.2`$ merger, the spinning stellar disk was detectably off center for approximately $`7\times 10^7`$ years. This is reminiscent of the off-center dust disk observed in NGC 4261 (Ferrarese et al., 1996). At the end of the simulation, we have a non-self-gravitating central spinning stellar disk, which has some resemblance to the stellar disk in NGC 3115 (Kormendy et al., 1996). However, with an aspect ratio of 4:1, the disk we have formed is much thicker than NGC 3115, which has an aspect ratio of 100:1. It is possible that with the addition of gas dynamics to our simulations, energy loss through gas dissipation could form a disk as thin as NGC 3115.
In the 10:1, $`\kappa =0.2`$ and $`\kappa =0.8`$ encounters, the high angular momentum particles were stripped preferentially from the forming disk, and by the time disk reaches the primary center, mostly plunging orbits were left. These plunging orbits become unbound to the secondary when they pass close to the black hole, so the secondary is dissolved within a few crossing times of reaching the center (although again, this denotes survival in our definition, because the debris is as tightly bound to the remnant as it was to the secondary.). Figure 17 shows the energy/angular momentum distribution for the $`\kappa =0.2`$ experiment.
## 4 Discussion
We have investigated the effect a massive black hole at the center of an otherwise purely stellar primary has on mergers of high density ratio galaxies on the cFP.
We have concluded that the amount of damage that the black hole can inflict on the secondary during a merger is highly dependent on the orbital decay trajectory of the secondary. If the secondary’s orbital decay is deeply plunging, the secondary encounters the black hole potential impulsively, and through repeated impulsive encounters, the black hole pumps enough energy into the secondary to unbind it.
If the secondary’s encounter is non-radial, the secondary is far outside the radius of influence of the black hole for most of the merger, and it is stripped merely to the Roche radius. In our simulations, the damage done to the secondary center during this early stage is quite minimal. Only on the final few orbits does the galaxy sink close enough to the black hole to experience significant tidal stripping, which eventually unbinds the dense secondary center. However, unless the disruption occurs while the merger has sufficient orbital energy, the debris orbits tightly around the center of the primary, and the remnant density is increased.
An important feature of the merger trajectory is the dependence on the density profile of the primary. In primaries with shallow density profiles and embedded black holes, the dynamical friction acting at the disruption is smaller than in primaries with steep density profiles, so the debris is more easily dispersed. For these galaxies, a much wider range of secondary masses will be destroyed (that is, disrupted such that $`\rho _1\gtrsim \rho _2`$).
It is tempting to identify this feature with the dichotomy between the central light profiles of bright and faint galaxies. Faber et al. 1997 note that galaxies brighter than about $`M_v=-21`$ have shallow, low density cores (the logarithmic slopes of their projected surface brightness profiles satisfy $`-dlogI/dlogr<0.5`$), while fainter ones are steeper. In our experiments, galaxies brighter than $`M_v=-21.7`$ destroy high mass ratio secondaries much more efficiently than fainter ones. Further investigation of this result seems worthwhile.
A proper understanding of the preservation of the cFP requires knowledge of the distribution of mass ratios and impact parameters of present-day bulges. This can perhaps be computed reliably in virialized clusters (Tormen, 1997), where the galaxies encounter unbound targets. In this case, low-mass secondaries tend to merge on more circular orbits. The situation is likely to be quite different in a cold Hubble flow, where progenitors encounter each other only if they are on bound orbits, and where the orbits are likely to have little angular momentum (Aarseth & Fall, 1980). It is hard to believe, however, that progenitors of bulges encounter each other with no angular momentum, so it is not completely clear whether this work indicates the true resolution to the paradox of the cFP, or whether additional physics is needed in our experiments to explain the persistence of the cFP.
If $`both`$ the galaxies in a cFP merger have central black holes, then a black hole binary may form in the center of the merger remnant, and 3 body scattering may heat the center and lower the remnant’s central density, allowing it to lie on the cFP. Apparently, there is some debate as to whether black hole binaries form from high mass ratio mergers (Governato et al., 1994). However, for equal mass mergers, Makino & Ebisuzaki (1996), and Quinlan & Hernquist (1997) found considerable black hole binary heating. As a consequence, the high central density from the more circular encounters in this paper may be disrupted upon the introduction of a black hole in the secondary, as long as the black holes form a binary pair. If so, this may be a powerful case that a black hole resides in the center of every galaxy. We will present the results of the effects of multiple black holes in cFP mergers in a future paper.
A second interesting feature of these results is the formation of rapidly spinning disks. When the secondary is not destroyed on a non-radial encounter, it can begin to spin during the final orbits as it is torqued by the black hole. Spinning stellar disks have been observed in many galaxies (such as NGC 3115, Capaccioli et al., 1987), and have been invoked to explain apparent multiple nuclei in others (M31, Tremaine, 1995; NGC 4486b, Lauer et al., 1996). Our purely stellar simulations form rather thick disks, with aspect ratios of approximately 4:1. However, the formation of fat stellar disks seems inevitable in these mergers. To form a disk as razor thin as NGC 3115 would most likely require a dissipative component. Nonetheless, non-radial galaxy mergers appear to be an efficient way to make a spinning stellar disk, as long as one of the galaxies harbors a massive central black hole.
Some support for this work was provided by the Space Telescope Science Institute, through general observer grant GO-06099.05-94A, and by NASA through a theory grant G-NAG5-2758. We thank the members of the NUKER collaboration for helpful conversations. DR thanks the John Simon Guggenheim Foundation for a Fellowship.
## Appendix A Appendix A
Under the impulse approximation, any single particle with mass m experiences a deflection, $`\mathrm{\Theta }`$, when traveling with a velocity $`\stackrel{}{v}`$ past a single massive particle of mass M as follows:
$$\mathrm{\Theta }=\frac{1}{v}\int \frac{GM}{r^2}\frac{b}{r}\frac{dx}{v},$$
(A1)
where b is the impact parameter and dx is an infinitesimal distance in the velocity direction. In this simple case:
$$\mathrm{\Theta }=\frac{1}{v^2}\frac{2GM}{b}.$$
(A2)
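A quick numerical cross-check (not part of the original appendix) that the straight-line integral in (A1), with $`r=\sqrt{b^2+x^2}`$, indeed reduces to (A2); the constants are arbitrary illustrative values:

```python
# Sketch: confirm numerically that (1/v^2) * integral of G*M*b/(b^2 + x^2)^(3/2) dx equals 2*G*M/(b*v^2).
import math
from scipy.integrate import quad

G, M, b, v = 1.0, 1.0, 2.0, 3.0                      # arbitrary illustrative values
integral, _ = quad(lambda x: G * M * b / (b**2 + x**2) ** 1.5, -math.inf, math.inf)
print(integral / v**2, 2 * G * M / (b * v**2))       # both ~ 0.1111
```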
The change in momentum in the direction of motion for this particle, $`\mathrm{\Delta }p_{\parallel }`$, is:
$$\mathrm{\Delta }p_{\parallel }=mv(\mathrm{cos}\mathrm{\Theta }-1),$$
(A3)
which for small $`\mathrm{\Theta }`$ can be expressed as:
$$\mathrm{\Delta }p_{\parallel }=-\frac{2G^2M^2m}{b^2v^3}.$$
(A4)
By conservation of momentum, $`\mathrm{\Delta }p_{\parallel }=-\mathrm{\Delta }P_{\parallel }`$, so the large particle also experiences a backward deflection. The change in velocity for this large mass is $`\mathrm{\Delta }V=-\mathrm{\Delta }p_{\parallel }/M`$.
If the large mass were equally divided into n smaller masses such that the impact parameters were the same, equation A2 tells us that the deflection $`\mathrm{\Theta }`$ would be:
$$\mathrm{\Theta }=\sum _{i=1}^n\frac{2GM/n}{bv^2},$$
(A5)
where the sum is over the n particles. This deflection angle is equal to equation A2. Consequently, the momentum for any mass M is the same as it would be if the large mass were equally divided, despite the apparent $`M^2`$ dependence. Likewise, if the small mass m were divided into n equal masses, the momentum of mass m as a whole is determined by the sum of n deflections, and is equivalent to the unpartitioned momentum.
To get the force on the small mass, we can use Newton’s 3rd law and find the acceleration of the large mass, $`dV/dt`$. When the small mass is subdivided into n particles with number density $`\eta `$, this acceleration is:
$$\frac{dV}{dt}=\int \frac{2G^2Mm}{b^2v^3}\,2\pi b\,\eta v\,db,$$
(A6)
which simplifies to:
$$\frac{dV}{dt}=\frac{4\pi G^2M\rho }{v^2}ln\mathrm{\Lambda }$$
(A7)
where $`ln\mathrm{\Lambda }`$ is the usual Coulomb logarithm. For a gaussian spectrum of velocities, equation A6 results in the dynamical friction acceleration, $`dv_f/dt`$, in equation 3 in the paper. Hence, the dynamical friction force on a secondary can be equally apportioned among equal mass secondary particles.
# Turbulent Formation of Interstellar Structures and the Connection Between Simulations and Observations
## 1 Introduction
Turbulence is a prime example of a chaotic system, and the interstellar medium (ISM) is most probably a prime example of a turbulent medium. A discussion of interstellar turbulence thus befits this volume. Although chaos theory generally refers to systems with only a few degrees of freedom while turbulent flows have in general an extremely large number of them, both types of systems share the properties of sensitivity to initial conditions and the resulting practical unpredictability, as a consequence of the nonlinear couplings between the relevant variables. Furthermore, interstellar turbulence is much more complex than natural terrestrial and laboratory turbulence because the former is magnetized, and, in cloudy regions, highly compressible and strongly self-gravitating, and thus not expected to be Kolmogorov-like, except possibly in the diffuse gas.
In recent years, many reviews covering various aspects of interstellar and molecular cloud turbulence have appeared in the literature. Compressible turbulence basics and self-similar models are discussed by Vázquez-Semadeni. A compendium of a wide variety of interstellar turbulence aspects is given in the volume Interstellar Turbulence, including in particular turbulence in the HI gas and in the diffuse ionized component. A thorough review of the implications of compressible MHD turbulence for molecular cloud and star formation is presented in Vázquez-Semadeni et al. The present paper may be regarded as a companion to the latter reference, as it includes a number of topics not covered there. After reviewing the scenario of interstellar clouds as turbulent density fluctuations and some of its implications (sec. 2), I discuss the comparison of the structural properties of simulated clouds with those derived observationally for real clouds (sec. 3; see also Ossenkopf et al., this volume, for a discussion from a more observationally-oriented perspective), and the effects of projecting three-dimensional (3D) data onto two dimensions, the latter point being important for the interpretation of observational data, which necessarily are projections on the plane of the sky (sec. 4).
## 2 Interstellar Clouds as Turbulent Density Fluctuations
A fundamental problem in the understanding of star formation is how the gas transits from a low-density diffuse medium to a comparatively enormously denser star. An intermediate step in this process is the formation of what we may generically refer to as interstellar clouds, including structures that span a wide range of physical conditions, from large diffuse HI clouds of densities a few $`\times `$ 10 cm<sup>-3</sup> and sizes up to hundreds of parsecs, to molecular cloud cores with densities $`\stackrel{>}{}10^4`$ cm<sup>-3</sup> and sizes of a few $`\times `$ 0.01 pc.
The process of cloud formation quite possibly involves more than a single mechanism, including the passage of spiral density waves and the effects of combined large-scale instabilities operating preferentially in the formation of the largest high-density structures, and the production of smaller density condensations by either swept-up shells, or by a generally turbulent medium. In the remainder of this section we focus on the latter process.
An important question is whether structures formed by either turbulent compressions or passages of single shock waves can become gravitationally unstable and collapse. This depends crucially on the cooling ability of the flow, which, as a first approximation, can be parameterized by an effective polytropic exponent $`\gamma _{\mathrm{eff}}`$ such that the pressure $`P`$ behaves as $`P\propto \rho ^{\gamma _{\mathrm{eff}}}`$, where $`\rho `$ is the fluid density. Note that in this description both the “cooling” and the “pressure” can be generalized to refer to non-thermal energy forms, such as magnetic and turbulent.
The production and statistics of the density fluctuations in polytropic turbulence have been investigated recently by various groups. Passot and Vázquez-Semadeni have found that the probability density function (PDF) of the density fluctuations depends differently on the Mach number and on $`\gamma _{\mathrm{eff}}`$. By means of a simple phenomenological model, these authors find that, for isothermal flows ($`\gamma _{\mathrm{eff}}=1`$), the PDF is lognormal, as a consequence of the Central Limit Theorem and of the cumulative and multiplicative (additive in the log) nature of the density jumps caused by shocks. Increasing the rms Mach number only increases the width of the lognormal PDF and shifts its peak towards smaller densities. Instead, varying $`\gamma _{\mathrm{eff}}`$ changes the form of the distribution, which develops a power law tail at large densities for $`0<\gamma _{\mathrm{eff}}<1`$, and at small densities for $`\gamma _{\mathrm{eff}}>1`$. This effect is a consequence of the modification of the lognormal by the local variation of the sound speed, which in the general polytropic case varies with the density as $`\rho ^{(\gamma _{\mathrm{eff}}-1)/2}`$. Essentially, in the power-law side of the PDF the density fluctuations are dominated by the nonlinear advection term in the momentum equation, with an increasingly negligible contribution from the pressure at increasingly large ($`\gamma _{\mathrm{eff}}<1`$) or small ($`\gamma _{\mathrm{eff}}>1`$) densities, due to the decreasing sound speed, while on the opposite side of the PDF the pressure dominates, impeding large excursions of the density. A concise discussion of the mechanism can also be found in Vázquez-Semadeni and Passot.
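A toy Monte Carlo illustration of this multiplicative argument (the jump distribution below is an arbitrary assumption, not a model of real shock amplitudes) shows how repeated multiplicative density jumps drive the log-density toward a Gaussian, i.e. the density PDF toward a lognormal:

```python
import numpy as np

rng = np.random.default_rng(0)
n_parcels, n_jumps = 200_000, 50

# Each fluid parcel suffers many independent multiplicative density jumps;
# a uniform factor in [0.5, 2] is a purely illustrative choice.
factors = rng.uniform(0.5, 2.0, size=(n_parcels, n_jumps))
rho = factors.prod(axis=1)          # density of each parcel after many jumps

s = np.log(rho)
# By the Central Limit Theorem the log-density approaches a Gaussian,
# i.e. the density PDF approaches a lognormal.
print("skewness of ln(rho):       ", ((s - s.mean())**3).mean() / s.std()**3)
print("excess kurtosis of ln(rho):", ((s - s.mean())**4).mean() / s.std()**4 - 3.0)
```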
The stability of fluid parcels compressed in $`n`$ dimensions by shocks or turbulence requires $`\gamma _{\mathrm{eff}}>\gamma _{\mathrm{cr}}\equiv 2(1-1/n)`$. For three-dimensional compressions, the minimum Mach number $`M_0`$ necessary to induce collapse by the velocity field has been computed by several authors as a function of $`\gamma _{\mathrm{eff}}`$ and the mass $`m`$ of the cloud in units of the Jeans mass. It is found that $`M_0\propto \mathrm{ln}m`$ for the isothermal ($`\gamma _{\mathrm{eff}}=1`$) case, $`M_0\propto m^{(\gamma _{\mathrm{eff}}-1)/(4-3\gamma _{\mathrm{eff}})}`$ for $`4/3>\gamma _{\mathrm{eff}}>1`$ and $`M_0\propto \sqrt{10/3(1-\gamma _{\mathrm{eff}})}`$ for $`0<\gamma _{\mathrm{eff}}<1`$.
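For reference, the critical exponent above evaluates, for compressions in one, two and three dimensions, to

$$\gamma _{\mathrm{cr}}=2\left(1-\frac{1}{n}\right)=0,\quad 1,\quad \frac{4}{3}\quad \text{for}\quad n=1,2,3,$$

so that the condition $`\gamma _{\mathrm{eff}}<\gamma _{\mathrm{cr}}`$ for compression-induced collapse is least restrictive for three-dimensional compressions.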
A relevant implication is that clouds formed by turbulent compressions are by necessity of a dynamical character, and are expected to either collapse, if the conditions described in the previous paragraph are satisfied, or else should reach some maximum density and then “rebound” as the external compression subsides (since the turbulent motions are chaotic, a compression will in general last a finite time only). This result has interesting implications on two “canonical” concepts of interstellar dynamics, as recently discussed by Ballesteros-Paredes et al. First, clouds in the ISM may not be pressure-confined as in popular models of the ISM, but rather in a highly dynamic and transient state, except if they become strongly gravitationally bound. An interesting corollary of this scenario occurs for regimes in which the thermal $`\gamma _{\mathrm{eff}}\approx 0`$ (i.e., a nearly isobaric behavior), as is indeed the case for the ISM between densities $`0.1`$ and $`10^2`$ cm<sup>-3</sup>. In this case, the thermal pressure remains nearly constant, regardless of the density structures formed by the turbulence. However, while traditionally this near pressure constancy has been regarded as a pressure balance condition that provides confinement for clouds, in the turbulent-cloud scenario it is only a relatively irrelevant consequence of the medium being maintained at nearly constant thermal pressure by the prevailing cooling processes as it is compressed by the turbulent motions.
Secondly, it appears difficult to produce the quasi-hydrostatic clumps which are commonly assumed to be the initial conditions of many models of star formation. Their formation within a globally gravitationally stable region by a turbulent compression requires, as described above, that $`\gamma _{\mathrm{eff}}<\gamma _{\mathrm{cr}}`$. However, this sets them in a state of gravitational collapse. To then form a hydrostatic structure, a change in $`\gamma _{\mathrm{eff}}`$ is required during the collapse, in order for $`\gamma _{\mathrm{eff}}`$ to become larger than $`\gamma _{\mathrm{cr}}`$, so that the pressure may now overcome the ongoing gravitational compression, and ultimately halt the collapse. Such a change in $`\gamma _{\mathrm{eff}}`$, at least from thermal contributions alone, is not expected until very high densities ($`\stackrel{>}{}10^8`$ cm<sup>-3</sup>) are reached.
Another implication of the turbulent density fluctuation scenario for the clouds is that the time scales associated with clouds may be smaller than those derived from their sizes and internal velocity dispersions. If the cloud is made by the collision of large-scale streams, the time scale for its formation is the crossing time through its size $`L`$ at the velocity difference between the turbulent scales larger than the cloud (the colliding streams). For turbulence with a (normal) spectrum that decays with wavenumber $`k`$, the characteristic velocity difference increases with separation, and thus the crossing time scale for the cloud is smaller than that derived from its internal velocity dispersion. Indeed, the line widths from the HI envelopes of molecular clouds are generally larger than the line widths in the molecular clouds themselves. This result has been proposed as a possible solution to the absence of post-T-Tauri stars in the Taurus region, since the region may be younger than the age derived from its internal velocity dispersion.
## 3 Comparisons Between Simulations and Observations
In recent years, numerical simulations of interstellar turbulence have advanced to the point that statistical comparisons with observational results have become possible.<sup>1</sup><sup>1</sup>1But note that modeling of individual objects is not feasible because of the sensitivity to initial conditions of turbulent flows. This is a crucial task because it allows an iterative procedure in which simulations may be constrained as models of the ISM and interstellar clouds by comparison of their morphological, topological and statistical properties with their observational counterparts. Once the best-fitting set of parameters is found for a certain type of system, the simulations may then be used as highly complete models of such system to improve our understanding of the physical processes occurring within them. However, it should be pointed out that this is a difficult task, because the natures of the observations and of the simulations are quite different. While simulations are performed on a regular grid, with well-defined boundaries, observations refer to regions of space for which the size along the line of sight is not constant. For example, for HI observations, the path length through the disk decreases with Galactic latitude, while for molecular line observations, the observed objects in general may have different extents along the LOS over their projected area. Also, linear sizes perpendicular to the LOS increase with distance. In spite of these difficulties, however, several first steps have been taken in this direction.
### 3.1 Scaling Relations
One basic question is whether the clouds in the simulations reproduce the well-known Larson relations (see also the review by Blitz for a more recent account) $`\mathrm{\Delta }v\propto R^{1/2}`$ and $`\rho \propto R^{-1}`$, where $`\mathrm{\Delta }v`$ is the velocity dispersion in the cloud, $`\rho `$ its mean density, and $`R`$ its characteristic size. In an analysis of two-dimensional MHD simulations of the ISM including self-gravity and stellar-like driving, Vázquez-Semadeni et al. found that, although with much scatter, a Larson-like velocity dispersion-to-size scaling of the form $`\mathrm{\Delta }v\propto R^{0.4}`$ is observed for clouds defined as the connected regions in the flow with densities above a given threshold. This result is roughly consistent with observational surveys giving scaling exponents between 0.4 and 0.7. However, the density-size relation is not verified in the simulations. Instead, small clouds with low densities, which are transient and not gravitationally bound, are formed in large quantities in the simulations. Rather than being satisfied by all clouds, the Larson density-size relation appears to be an upper bound to the region populated by the clouds in a $`\rho `$-$`R`$ diagram. The same trend was observed in a sample of objects away from map intensity maxima. This supports suggestions that the density-size relation may be an artifact of the limitations on integration times of observational surveys, and that the $`\mathrm{\Delta }v`$-$`R`$ relation, satisfied by all clouds, may originate from the Burgers-like spectrum of molecular cloud turbulence. In this scenario, only those clouds which become sufficiently self-gravitating and are not strongly disturbed (i.e., in near virial equilibrium), satisfy the density-size relation.
### 3.2 Synthetic Line Profiles
Another important means of comparison between simulations and observations are the spectral line profiles, since their shapes (generally characterized by their first few moments) reflect the velocity structure of the flow in the observed regions. Line profiles from the simulations are constructed as density-weighted velocity histograms along each LOS.
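As a sketch of how such profiles are built from gridded data (purely illustrative synthetic fields below, not the simulations discussed here), each line of sight is turned into a density-weighted histogram of the cell velocities:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                      # cells per side of a toy 3D cube
rho = rng.lognormal(0.0, 1.0, (N, N, N))    # toy density field
vz  = rng.normal(0.0, 1.0, (N, N, N))       # toy line-of-sight velocity field
channels = np.linspace(-4.0, 4.0, 41)       # velocity channel edges

def line_profile(ix, iy):
    """Density-weighted histogram of LOS velocities for one sight line."""
    counts, _ = np.histogram(vz[ix, iy, :], bins=channels,
                             weights=rho[ix, iy, :])
    return counts

profile = line_profile(10, 20)
# The integrated profile approximates the total density along that LOS.
print(profile.shape, profile.sum(), rho[10, 20, :].sum())
```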
Falgarone et al. compared the line spectra produced in a high-resolution 3D simulation of weakly compressible turbulence with observational data, concluding that both sets of spectral lines are very similar, in terms of the range of values of the variance and the kurtosis they present. The similarity is greater in this case than that achieved by other models of clouds (constructed with random uncorrelated velocity fields or with isolated clumps in an interclump medium) which do not account for the spatial correlations derived from the continuum nature of fluid turbulence. Similar results have been obtained from randomly generated flows with an imposed Kolmogorov spectrum. However, a more recent study based on the PDFs of the velocity centroids, rather than on the line profiles, suggests that in turn nearly incompressible turbulence fails to capture some features of the centroid PDFs of both molecular and HI regions, which exhibit a larger degree of non-Gaussianity than those derived from incompressible or weakly compressible turbulence.
Line profiles from simulations of strongly compressible MHD turbulence, including radiative transfer, as well as other diagnostics, have recently been compared to molecular line data by Padoan and coworkers to support their suggestion that molecular clouds may actually be in a super-Alfvénic regime, rather than a sub-Alfvénic one, as generally believed. Their tests were based on two 3D simulations of MHD isothermal turbulence without self-gravity, one super-Alfvénic, the other sub-Alfvénic. First, they noted that simulated line profiles from the super-Alfvénic run seem to reproduce the observed growth of line width with integrated antenna temperature better than those from the sub-Alfvénic run. Secondly, the super-Alfvénic run seems to better match the observed trend of magnetic field strength vs. gas density. Finally, the super-Alfvénic simulation also more closely reproduces the observational trend of the dispersion of extinction vs. mean extinction along selected lines of sight. However, one caveat remains before the super-Alfvénic model can be accepted: the simulations lacked self-gravity, which could have pushed the results of the sub-Alfvénic simulation closer to the observational results, and the super-Alfvénic run away from them. Therefore, similar experiments are needed with self-gravitating runs in order to confirm this possibility.
### 3.3 Higher-Order Statistics and Fractality Analyses
The methods described in the previous section have taken into account only the velocity information in the spectral data “cubes”. We now briefly discuss methods that have tried to take spatial information into consideration as well.
Spatial structure is often described by means of the autocorrelation function (ACF), which measures the probability of finding equal values of a given physical variable at two different positions in space, as a function of their separation.<sup>2</sup><sup>2</sup>2However, it has been pointed out by Scalo and Houlahan that one limitation of the ACF is that it cannot distinguish between hierarchically nested or randomly distributed structure. Early studies in this direction measured the ACF of column density and of the line velocity centroids, attempting to determine whether characteristic lengths exist in the ISM, with mixed results. Recently, a variant of this approach termed the Spectral Correlation Function (SCF) has been introduced. The SCF measures the quadratic difference between line spectra (on a channel-by-channel basis) at different positions on a spectral-line map, in an attempt to include spectral as well as spatial information in the statistical description. So far, the method has been used to measure the angle-averaged correlation between the spectrum at a given position in a map and at its nearest neighbors, allowing the characterization of the small-scale variability of the spectra, and a comparison between CO maps and simulations of isothermal turbulence under various regimes (purely hydrodynamic, MHD, and self-gravitating). In that work, differences between the values of the SCF for weakly compressible purely hydrodynamic turbulence and for the Ursa Major molecular cloud indicate that such a simulation is not as accurate a model for the Ursa Major cloud as previously claimed by Falgarone et al. on the basis of line profile shapes (see sec. 3.2). Comparison of HI data with non-isothermal simulations has proven more difficult, because the thermal broadening of the warm gas swamps the velocity structure of the cold gas.
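A schematic version of the nearest-neighbour spectral comparison is sketched below; it only captures the basic channel-by-channel quadratic difference and deliberately omits the normalisation and scaling choices of the published SCF.

```python
import numpy as np

def neighbour_spectral_difference(cube):
    """cube[x, y, v]: spectral-line map. Returns, for each interior position,
    the rms channel-by-channel difference with its four nearest neighbours
    (a schematic stand-in for the Spectral Correlation Function)."""
    nx, ny, nv = cube.shape
    out = np.full((nx, ny), np.nan)
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            diffs = [cube[i, j] - cube[i + di, j + dj]
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            out[i, j] = np.mean([np.sqrt(np.mean(d**2)) for d in diffs])
    return out

toy = np.random.default_rng(2).random((16, 16, 30))
print(neighbour_spectral_difference(toy)[1:-1, 1:-1].mean())
```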
The recognition that the structure of the ISM may have a turbulent origin has also prompted searches for fractal properties of interstellar clouds (although see the contribution by Combes, this volume, for an alternative scenario originating the fractal structure). Early studies in this direction started by measuring the fractal dimension of the clouds in column density or intensity maps of selected regions, by means of the area-perimeter scaling in the projected clouds, finding dimensions near 1.4. Recent measurements of their area-perimeter relation in 2D simulations find similar values. However, these values are surprisingly close to the projected fractal dimensions of clouds in the Earth’s atmosphere, suggesting that the fractal dimension may be a significantly degenerate diagnostic which may not be capable of distinguishing between different physical regimes. Indeed, a more recent study by Chappell and Scalo has shown that the column density maps of various regions actually have a well-defined multifractal structure, so that attempting to measure a single fractal dimension for the clouds in the maps may erase much of the structural information. Furthermore, these authors emphasize that the methods used to determine fractal dimensions of clouds rely on the definition of “clouds” by means of some rather arbitrary criterion (such as thresholding the column density field), while the multifractal spectrum determination uses the structural information of the whole field. The multifractal spectrum of the regions studied appears to correlate fairly well with the geometric forms seen visually, potentially providing a means for quantitative structure classification schemes. The multifractal properties of numerical simulations of ISM turbulence are currently being investigated.
A method for determining the line width-size scaling of spectral maps independently of any specific definition of “clouds” in a spectral map has been introduced recently by Heyer and collaborators. The method uses the statistical technique known as Principal Component Analysis (PCA) to define a set of spectral profiles (eigenvectors) which form a “natural” basis for the spectral maps (in the sense that it reflects the main trends of the intensity data among the velocity channels). Eigenimages, which represent the intensity structures as “filtered” by the basis spectra, are generated by projecting the original spectra onto the eigenvectors. By then measuring the ACF of the eigenvectors and of the eigenimages, the relationship between the magnitude of velocity differences and the spatial scales over which these differences occur can be extracted. In order to “calibrate” the expected value of the exponent $`\alpha `$ in this relation (such that $`\mathrm{\Delta }v\propto R^\alpha `$, where $`\mathrm{\Delta }v`$ is the velocity difference and $`R`$ is the size), “pseudo-simulations” of fractional Brownian noise with a prescribed spectrum were produced and “observed” using a radiative transfer simulator. It was found that the scaling exponent is related to the spectral index $`\beta `$ by $`\alpha =\beta /3`$. Recent calibrations with actual hydrodynamic and MHD simulations in 3D are roughly consistent with this result, although the resolutions available in 3D simulations are still insufficient to develop clear power-law turbulent spectra, so the results are not conclusive. Tests with higher resolution 2D simulations apparently produce a different calibration, $`\alpha =\beta /4`$.
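The linear-algebra core of the PCA step can be sketched in a few lines (synthetic data below; the actual analysis is of course applied to observed or simulated spectral-line cubes):

```python
import numpy as np

rng = np.random.default_rng(3)
nx, ny, nv = 32, 32, 40
cube = rng.random((nx, ny, nv))            # toy position-position-velocity cube

X = cube.reshape(nx * ny, nv)              # rows: spectra, columns: channels
X = X - X.mean(axis=0)                     # remove the mean spectrum

# Principal components: eigenvectors of the channel-channel covariance matrix,
# obtained here through an SVD of the mean-subtracted data matrix.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
eigenvectors = Vt                          # basis spectra, one per row

# eigenimages[..., k] is the map of projections of every spectrum onto
# eigenvector k, i.e. the intensity structure "filtered" by that basis spectrum.
eigenimages = (X @ Vt.T).reshape(nx, ny, nv)

print(eigenvectors.shape, eigenimages.shape)
```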
The origin of these scalings is not well understood yet. In fact, they suggest that the very nature of the line width-size relation derived through PCA is unknown, since it does not follow the same scaling as, say, the second-order structure function, defined as $`F_2(r)=⟨[𝐮(𝐱)-𝐮(𝐱+𝐫)]^2⟩`$, where the brackets denote a volume average. This function gives the mean quadratic velocity difference between positions separated by a distance $`r`$. Yet, $`F_2`$ is related to the energy spectrum by
$$F_2(r)=4\int _0^{\mathrm{\infty }}E(k)\left(1-\frac{\mathrm{sin}kr}{kr}\right)𝑑k,$$
(1)
so that, if $`E(k)\propto k^{-\beta }`$, then $`F_2(r)\propto r^{\beta -1}`$, i.e. the rms velocity difference scales as $`r^\eta `$ with $`\eta =(\beta -1)/2`$. $`\eta `$ is thus not functionally related to $`\beta `$ in the same manner as $`\alpha `$. The discrepancy is probably related to the fact that the above scaling refers to the structure function of the actual 3D velocity field, while in spectroscopic data every velocity interval contains the contribution of many fluid parcels (possibly far apart from each other), and only one of the three velocity components is observed. Thus, the true nature of the spatial and velocity increments in spectral data cubes remains unknown.
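For completeness, the structure-function scaling quoted above follows from the substitution $`x=kr`$ in equation (1), writing $`E(k)=Ck^{-\beta }`$ (the integral converges for $`1<\beta <3`$):

$$F_2(r)=4C\int _0^{\mathrm{\infty }}k^{-\beta }\left(1-\frac{\mathrm{sin}kr}{kr}\right)dk=4Cr^{\beta -1}\int _0^{\mathrm{\infty }}x^{-\beta }\left(1-\frac{\mathrm{sin}x}{x}\right)dx\propto r^{\beta -1}.$$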
Finally, a method for structure analysis similar to the power spectrum, but using a wavelet rather than a Fourier basis is described in the contribution by Ossenkopf et al. (this volume).
## 4 Effects of Projection on Morphology
One advantage of 3D simulations is that they contain more structural information than even spectroscopic data “cubes”. While the latter only refer to two spatial and one velocity dimensions, 3D numerical simulations provide information on the 3D structure of all physical variables. (Of course, their downside is that they are necessarily limited in resolution and in the number of physical processes that can be included.) This allows an investigation of the 3D structures that generate the patterns observed in the position-position-velocity (PPV) space of the spectroscopic channel maps. To this end, channel maps are constructed from the simulations by integrating the density field along one direction (the line of sight, or LOS), and then selecting the contribution of each parcel along the LOS by its LOS-velocity. This is equivalent to constructing density-weighted velocity histograms (the “line spectra”) at each position in the plane perpendicular to the LOS.
A rather unexpected result has recently been found independently, using different approaches, by several groups. It appears that, at least under certain conditions, the projected spatial structure in the channel maps is dominated by the spatial distribution of the velocity field, rather than by the 3D density field. Pichardo et al. have shown this in a 3D simulation of the ISM at intermediate scales (3–300 pc) by noting that the pixel-to-pixel correlation between channel maps and thin slices of the 3D velocity field tends to be larger on average than the correlation between channel maps and slices of the 3D density field. Independently, Lazarian and Pogosyan have shown analytically that, for cases with an underlying one-to-one mapping between the position along the LOS and the LOS-velocity (as for an expanding universe or the HI gas distribution in the Galaxy), and with uncorrelated random density and velocity fields with well-defined spectral indices, the power spectrum of the projected density field is dominated by the spectrum of the velocity field for density spectral indices steeper than $`-3`$, unless the velocity channels are very wide (as is clearly the case in the limit of a single velocity channel, in which the velocity dependence is integrated out). Finally, Heyer and Brunt have noticed that, in their pseudo-simulations (cf. sec. 3.3 above), channel maps and PCA-derived $`\mathrm{\Delta }v`$-$`R`$ relations produced with and without density-weighting are remarkably similar, suggesting that the effect of the density weighting is relatively minor. A similar effect was noticed by Falgarone et al. about the shapes of synthetic line profiles. In summary, it appears that the spatial structure of the velocity field is at least as important as that of the density field in determining what is observed in projection on the plane of the sky.
A related effect has been observed by Pichardo et al. The morphology observed in the channel maps appears to contain much more small-scale structure than either the density or the velocity 3D fields. This is reflected in the power spectra of the channel maps and of 2D slices through the 3D density and velocity fields, the latter two having steeper slopes and falling off much more rapidly than the former. This phenomenon has been interpreted by those authors as a consequence of a pseudo-random sampling of fluid parcels along the LOS by the velocity selection performed when constructing a channel map. This introduces an additional ingredient of variability between neighboring LOSs, which causes artificial small-scale variability in the channel maps.
It can be concluded from this section that, for a fully turbulent ISM, the structure seen observationally, through spectroscopic observations, may differ from the actual 3D structures present in the medium. In particular, this suggests that structure-finding algorithms operating on spectroscopic data cubes may not identify exactly the same structures as would be obtained from the actual 3D spatial data, as already pointed out by various authors. These effects may be decreased, however, in cases when the observed regions contain well-defined “objects” which may be picked out by the observing process, such as shells, bipolar flows, etc.
## 5 Conclusions
In this review I have discussed recent results from numerical simulations of turbulence in the ISM. I first reviewed the scenario of interstellar clouds as turbulent density fluctuations. Work on the production of gravitationally bound structures in globally stable media by turbulent compressions was summarized, in particular the necessary Mach numbers (for 3D compressions) and the constraints on the effective polytropic exponent $`\gamma _{\mathrm{eff}}`$ for $`n`$-dimensional compressions. Three implications were then discussed. First, the near thermal pressure balance observed in the ISM (except for molecular clouds) may not be a confining agent for clouds, but rather a relatively fortuitous consequence of the prevailing heating and cooling mechanisms, which render the medium nearly isobaric, in the presence of turbulence-induced density fluctuations. Second, it appears unlikely that nearly hydrostatic cores may be produced within the turbulent ISM unless some very specific variations in $`\gamma _{\mathrm{eff}}`$ occur during a gravitational contraction induced by the turbulence. Third, the time scales associated with clouds may be smaller than those estimated from the clouds’ characteristic dimensions and their velocity dispersions, since the relevant velocities may instead be those of the larger, external flow streams that produced the clouds at their collision interfaces.
I then proceeded to review recent results from various attempts to relate numerical simulations to observational data, from early qualitative comparisons of spectral line profiles and surveys of clouds in 2D simulations (which suggested the existence of a whole population of low-column density clouds that do not satisfy Larson’s density-size relation), to recent approaches using more sophisticated statistical techniques such as the Spectral Correlation Function, Principal Component Analysis, and fractal dimensions and multifractal spectra, mostly aiming at characterizing the morphology of interstellar structures in a statistically meaningful way, and determining whether the structures developing in turbulence simulations reproduce their properties. Some of these methods are only being developed now, but they are already providing a quantitative method for discriminating between turbulence simulations with different parameter choices as the most suitable models for specific interstellar regimes, as well as providing a basis for interpreting observational data in terms of the simulations.
Finally, I discussed the relationship between the actual 3D spatial structure of the density and velocity fields, and that of the projected “intensity” field in channel maps. Recent works, using various approaches, suggest that the structure, both morphological and statistical (power spectrum), of the 2D intensity field is dominated by the velocity field, rather than by the density. Additionally, since the total intensity in every LOS of a channel map is constructed by “selecting” scattered fluid parcels (on the basis of their LOS velocity) along the LOS, spurious small-scale variability is introduced into the structure seen in the channel maps, as the set of sampled parcels varies from one LOS to the next. These results suggest that the structure in channel maps bears a very complex and non-trivial relationship to the structures actually existing in the ISM. Further work in this area is likely to produce numerous unexpected and exciting results in the near future.
## Acknowledgments
I am grateful to Chris Brunt, Mark Heyer, Alex Lazarian, Volker Ossenkopf, Dimitri Pogosyan and John Scalo for stimulating discussions and useful comments. This work has received financial support from grants CRAY/UNAM SC-008397 and UNAM/DGAPA IN119998.
no-problem/9908/hep-lat9908015.html
ar5iv
text
# Improving the sign problem in QCD at finite density Talk presented by V. Laliena
## 1 The Polyakov loop and the phase of the determinant
It is well known that the Euclidean path integral representation of the QCD partition function at finite chemical potential suffers from the so-called sign problem: the fermion determinant is complex and the theory cannot be simulated with the usual Monte Carlo method. One could still try to get results with the simple brute-force method, that is, simulate a positive measure which contains the pure gauge action and the modulus of the determinant, treating the phase as an observable. Then, the expectation value of any observable $`𝒪`$ is given by the ratio $`⟨𝒪⟩=⟨𝒪\mathrm{exp}i\theta ⟩_m/⟨\mathrm{cos}\theta ⟩_m`$, where $`⟨⟩_m`$ denotes the expectation value using the modulus of the determinant as a probability measure, and $`\theta `$ is the phase of the determinant (PD). Unfortunately, the expectation value $`⟨\mathrm{cos}\theta ⟩_m`$ is a positive quantity exponentially small with the volume. Since the relative error on $`𝒪`$ is given by
$$\frac{\delta ⟨𝒪⟩}{⟨𝒪⟩}=\frac{\delta ⟨𝒪\mathrm{exp}i\theta ⟩_m}{⟨𝒪\mathrm{exp}i\theta ⟩_m}+\frac{\delta ⟨\mathrm{cos}\theta ⟩_m}{⟨\mathrm{cos}\theta ⟩_m},$$
(1)
$`⟨\mathrm{cos}\theta ⟩_m`$ must be measured very accurately to achieve a given accuracy for $`𝒪`$. This requires statistics growing exponentially with the volume. Obviously, this is, from the numerical point of view, an almost hopeless task.
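A toy numerical illustration of this blow-up (the Gaussian phase model below is an arbitrary assumption used only to mimic a phase whose width grows like $`\sqrt{V}`$, not the actual QCD phase distribution):

```python
import numpy as np

rng = np.random.default_rng(4)
n_config = 100_000                 # Monte Carlo "configurations"

for volume in (10, 100, 1000):
    # Toy model: the phase of the determinant grows like sqrt(volume).
    theta = rng.normal(0.0, 0.15 * np.sqrt(volume), n_config)
    cos_theta = np.cos(theta)
    cos_mean = cos_theta.mean()
    cos_err = cos_theta.std() / np.sqrt(n_config)
    print(f"V={volume:5d}  <cos theta>={cos_mean:+.5f}  "
          f"relative error={cos_err / abs(cos_mean):.3f}")
```

With fixed statistics, the relative error on $`⟨\mathrm{cos}\theta ⟩_m`$ explodes as the volume grows, which is the sign problem in a nutshell.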
Due to the up to now insurmountable difficulties of QCD with light quarks at finite density, increasing attention is being paid to the limit of infinitely heavy quarks . The sign problem still remains in this limit, but numerical computations are easier . For large quark mass $`m`$, the logarithm of the fermion determinant can be expanded in powers of $`1/(2m)`$ (we focus our discussion on staggered fermions). The first term sensitive to the chemical potential $`\mu `$ is
$$\left(\frac{e^\mu }{2m}\right)^{N_T}\underset{\stackrel{}{x}}{\sum }\mathrm{Tr}L_\stackrel{}{x}+\left(\frac{e^{-\mu }}{2m}\right)^{N_T}\underset{\stackrel{}{x}}{\sum }\mathrm{Tr}L_\stackrel{}{x}^{},$$
(2)
where $`N_T`$ is the number of lattice points in the temporal direction, $`\stackrel{}{x}`$ labels the sites in a given temporal slice, and $`L_\stackrel{}{x}`$ is the ordered product of all temporal links attached to $`\stackrel{}{x}`$. Hence, to this order, the PD is proportional to the imaginary part of the Polyakov loop (IPPL): $`\theta =cV_sL_i`$, with $`L_i=1/V_s\underset{\stackrel{}{x}}{\sum }\mathrm{ImTr}L_\stackrel{}{x}`$ and $`c=2\mathrm{sinh}(\mu N_T)/(2m)^{N_T}`$. Of course, higher order corrections will destroy this linear relationship.
The static limit ($`m,\mu \to \mathrm{\infty }`$ with $`c=[\mathrm{exp}(\mu )/2m]^{N_T}`$ fixed) has been studied in . Analyzing the data of , we found a strong correlation between the PD and the IPPL, as can be seen in Figs. 1 and 2 . The strong correlation displayed in Fig. 1 is not surprising, since the data correspond to $`c\approx 0.05`$, which is small enough for the linear relation previously discussed to become essentially exact. Fig. 2 is more interesting, since $`c\approx 1.66`$ is not small. There, we can see that the linear correlation still holds, though the width of the band is much larger than in Fig. 1. Very recently, a paper confirming these findings from a continuum analysis has appeared .
## 2 Fixing the imaginary part of the Polyakov loop
Given the correlation between the PD and the IPPL, it is plausible that, at least for heavy enough quarks, the fluctuations of the PD would be suppressed to a large extent provided we constrain our path integral to configurations with real Polyakov loop. To see whether this is possible we write the partition function as $`𝒵=\int 𝑑p_i\mathrm{exp}\left(-V_sℱ(p_i)\right)`$, where $`V_s`$ is the spatial lattice volume and
$$e^{-V_sℱ(p_i)}=\int [dU]e^{-S_g(U)}det\mathrm{\Delta }(U)\delta (p_i-L_i).$$
(3)
For $`V_s\to \mathrm{\infty }`$ the integral in $`p_i`$ is saturated by the saddle point $`p_i^{sp}`$, which is in general complex, since $`ℱ`$ is.
It is easy to show that the saddle point is indeed purely imaginary. The expectation value of the IPPL coincides with $`p_i^{sp}`$, and therefore
$$p_i^{sp}=\frac{⟨L_i\mathrm{cos}\theta ⟩_m}{⟨\mathrm{cos}\theta ⟩_m}+i\frac{⟨L_i\mathrm{sin}\theta ⟩_m}{⟨\mathrm{cos}\theta ⟩_m}.$$
(4)
For each gauge configuration we also have its complex conjugate, for which $`L_i`$ changes sign. From the loop expansion of the logarithm of the fermion determinant, we see that the modulus and the phase depend only on the real and imaginary part of the loops respectively. Hence, the modulus and the pure gauge action do not change, while $`\theta `$ changes sign. The first term in the r.h.s. of (4) vanishes, so that $`p_i^{sp}`$ is purely imaginary.
The existence of a saddle point for the partition function implies the equivalence between canonical and microcanonical ensembles. One can constrain an observable to its saddle point value. This only makes sense when the saddle point is real. Therefore, we can only constrain the IPPL to zero in those cases where its expectation value is zero. We expect this to happen at zero temperature . At finite temperature, the expectation value of the Polyakov loop gives the free energy of a heavy quark, and its complex conjugate that of a heavy antiquark. Since these two free energies should be different, the expectation value of the IPPL cannot be zero. However, if the temperature is small, the expectation value of IPPL should be exponentially small with $`1/T`$. The static limit at strong coupling, which can be solved analytically , confirms this. In this case, $`p_i^{sp}=(c^2c)/(c^3+1)`$, with $`c=(e^\mu /2m)^{N_T}`$, in agreement with our expectation. Therefore, we can constrain the IPPL to zero in simulations of the low temperature phase of QCD at finite density.
## 3 Constrained Monte Carlo
To test our ideas we developed a HMC algorithm in which the molecular dynamics is forced to evolve on the surface of zero IPPL. This is easily achieved by introducing a Lagrange multiplier which must be computed at each step of the molecular dynamics by solving a non-linear equation, which is the condition that the constraint must be obeyed at each step. It can be shown that the dynamics is reversible and that detailed balance is satisfied. The additional cost in CPU time caused by the constraint is small: 10% more than the unconstrained case for quenched simulations, and completely negligible with dynamical fermions.
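The flavour of such a constrained update can be conveyed with a toy example (this is not the gauge-theory algorithm itself, just a two-variable leapfrog with a linear constraint, for which the multiplier equation happens to be linear and is solved in closed form):

```python
import numpy as np

def constrained_leapfrog(q, p, n_steps=100, dt=0.05):
    """Leapfrog for H = (p^2 + q^2)/2 with the constraint g(q) = q[0] + q[1] = 0
    enforced through a Lagrange multiplier recomputed at every step."""
    grad_g = np.array([1.0, 1.0])
    for _ in range(n_steps):
        force = -q
        # Choose lam so that g(q_new) = 0 after the position update below.
        q_pred = q + dt * (p + 0.5 * dt * force)
        lam = -np.dot(grad_g, q_pred) / (0.5 * dt**2 * np.dot(grad_g, grad_g))
        p_half = p + 0.5 * dt * (force + lam * grad_g)
        q = q + dt * p_half
        p = p_half + 0.5 * dt * (-q)
        # RATTLE-style step: project the momentum back onto the constraint surface.
        p = p - grad_g * np.dot(grad_g, p) / np.dot(grad_g, grad_g)
    return q, p

q, p = constrained_leapfrog(np.array([0.7, -0.7]), np.array([0.3, -0.3]))
print("constraint after evolution:", q[0] + q[1])   # stays at zero up to round-off
```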
As a preliminary test, we made several quenched runs to check the gain of a constrained simulation in comparison with an unconstrained one. We worked with a $`4^3\times 6`$ lattice at $`\beta =1.0`$ to ensure that we were in the low temperature phase. We got a set of 300 equilibrium quenched configurations and we diagonalized its associated massless staggered fermion matrix, for $`\mu =0.5,1.0,1.5`$ and $`2.0`$. Having the eigenvalues of the massless matrix, it is very cheap to get the determinant for any value of the mass.
Let us describe our results. If the mass is large, there is no sign problem in the small and large $`\mu `$ regions. Simulations are easy but not interesting there. When $`\mu `$ is of the order of $`m`$ the sign problem becomes severe: this is the region where the onset transition, separating the zero from the finite density phases, takes place. It is very interesting to determine it accurately. Fig. 3a displays $`⟨\mathrm{cos}\theta ⟩`$ as a function of $`m`$, for $`\mu =1.5`$. The logarithmic scale allows us to see the relative error entering Eq. (1). In some cases, the relative error in the unconstrained simulation is one order of magnitude larger than in the constrained one. Notice the severe sign problem signaling a transition for $`m\approx 1.75`$. The transition region is very broad in the unconstrained simulation, and in fact covers most of the finite density region, between saturation (low $`m`$) and zero density (high $`m`$). Actually, almost nothing can be inferred about the finite density phase in the unconstrained case. In the constrained simulation, the sign problem occurs in a much narrower window, so that the onset transition as well as the finite density phase could be analysed. From Fig. 3a, we see that $`\mu =1.5`$ is nearly critical for $`m\in [1.75,1.82]`$. As expected, the constraint in the IPPL improves the sign problem and makes numerical simulations feasible. Fig. 3b displays the distribution of $`\theta /\pi `$ for $`\mu =1.5`$ and $`m=1.5`$. The difference between the constrained and unconstrained cases is manifest. An extended version of this work has recently appeared .
We thank the authors of Ref. , especially Doug Toussaint, for making their data available to us. Ph. de F. thanks Mike Creutz for helpful discussions. V.L. acknowledges useful discussions with R. Aloisio, V. Azcoiti and A. Galante.
no-problem/9908/hep-ph9908386.html
ar5iv
text
# Fermilab Conf-99/222-T Lattice determinations of the strange quark massTalk presented at KAON’99, University of Chicago, June 1999
## 1 Introduction
The importance of the strange quark mass, as a fundamental parameter of the Standard Model (SM) and as an input to many interesting quantities, has been highlighted in many reviews, e.g. in Ref. . A first principles calculation of $`m_s`$ is possible in lattice QCD but to date there has been a rather large spread in values from lattice calculations. This review aims to clarify the situation by explaining the particular systematic errors and their effects and illustrating the emerging consensus.
In addition, a discussion of the strange quark mass is timely given the recent results from KTeV and NA48 for $`ϵ^{}/ϵ`$ which firmly establish direct CP-violation in the SM and when combined with previous measurements give a world average $`ϵ^{}/ϵ=(21.2\pm 2.8)\times 10^4`$. This is in stark disagreement with the theoretical predictions which favour a low $`ϵ^{}/ϵ`$ .
Although in principle $`ϵ^{}/ϵ`$ does not depend directly on $`m_s`$, in practice it has been an input in current phenomenological analyses. This dependence arises because the matrix elements of the gluonic, $`⟨Q_6⟩_0`$, and electroweak, $`⟨Q_8⟩_2`$, penguin operators<sup>1</sup><sup>1</sup>1keeping only the numerically dominant contributions for simplicity are of the form $`⟨\pi \pi |Q_i|K⟩`$ and final state interactions make them notoriously difficult to calculate directly. They have been, therefore, parameterised in terms of bag parameters, $`ℬ_i`$, the strange quark mass, $`m_s`$ and the top quark mass, $`m_t`$, as discussed in detail in Ref. . A recent review of lattice calculations of the matrix elements is in Ref. . In this talk I will focus on some recent and careful lattice determinations of $`m_s`$, illustrating the reasons for the large spread in earlier results.
## 2 The stange quark mass from lattice QCD
In lattice QCD, $`m_s`$ is determined in two ways, each of which relies on calculations of experimentally measured quantities to fix the lattice bare coupling and quark masses. The 1P-1S Charmonium splitting, $`M_\rho `$ and $`r_0`$ are some of the parameters typically chosen to fix the inverse lattice spacing, $`a^1`$. To determine $`m_s`$ either $`M_K`$ or $`M_\varphi `$ is used. It is an artefact of the quenched approximation that $`m_s`$ depends on the choice of input parameters, so that some of the spread in answers from lattice QCD can be attributed to different choices here. Naturally, some quantities are better choices than others being less sensitive to quenching or having smaller systematic errors.
The quark mass can be determined from hadron spectroscopy, using chiral perturbation theory to match a lattice calculation of $`M_K`$ (or $`M_\varphi `$) to its experimental value with
$$M_{PS}^2=B_{PS}\frac{(m_i+m_j)}{2}+\mathrm{\dots }\quad \text{or}\quad M_V=A_V+B_V\frac{(m_i+m_j)}{2}+\mathrm{\dots }$$
(1)
This is the hadron spectrum or vector Ward identity (VWI) method.
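A numerical caricature of this procedure (all numbers below are invented for illustration and are not lattice data) is a straight-line chiral fit of $`M_{PS}^2`$ against the average valence quark mass, followed by solving for the mass that reproduces the physical kaon:

```python
import numpy as np

# Invented "lattice" points: squared pseudoscalar masses (GeV^2) at several
# average valence quark masses (GeV), roughly following M_PS^2 = B * m_avg.
m_avg = np.array([0.020, 0.040, 0.060, 0.080])
M_PS2 = np.array([0.099, 0.201, 0.298, 0.402])

B, intercept = np.polyfit(m_avg, M_PS2, 1)   # chiral fit M_PS^2 = B*m_avg + c

M_K = 0.4937                                 # physical kaon mass in GeV
# For the kaon, m_avg = (m_s + m_d)/2 ~ m_s/2 when m_d is negligible.
m_avg_K = (M_K**2 - intercept) / B
m_s = 2.0 * m_avg_K
print(f"illustrative m_s = {1000 * m_s:.0f} MeV (bare, before renormalisation)")
```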
Alternatively, the axial Ward identity (AWI): $`\partial _\mu A_\mu (x)=(m_i+m_j)P(x)`$, imposed at quark masses chosen to correspond to either the experimentally measured $`M_K`$ or $`M_\varphi `$, determines $`m_s`$.
The lattice bare masses and matrix elements are related to their continuum counterparts, in say the $`\overline{MS}`$ scheme, by the renormalisation coefficients, $`Z_s`$ or $`Z_{(A,P)}`$, calculated perturbatively or nonperturbatively,
$`m_s^{\overline{MS}}(\mu )=Z_s^{-1}(\mu ,ma)m_q^0`$ , $`(m_s+\overline{m})^{\overline{MS}}(\mu )={\displaystyle \frac{Z_A(ma)}{Z_P(\mu ,ma)}}{\displaystyle \frac{⟨\partial _\mu A_\mu J(0)⟩}{⟨P(x)J(0)⟩}}.`$
$`m_s`$ has been calculated in all three lattice fermion formalisms: Wilson, staggered and domain wall. Although the domain wall fermion results are extremely interesting, since this approach has the good flavour structure of Wilson fermions while preserving chiral symmetry, the results for $`m_s`$ are still preliminary so I will focus on results with Wilson and staggered fermions. A description of the domain wall formalism and results can be found in Ref. .
Comparing results from these different methods provides a nice check of lattice calculations.
## 3 Main uncertainties in the calculation
The difference in early lattice results can be understood in terms of the treatment of systematic uncertainties in these particular calculations. The largest of these are discretisation errors, calculation of renormalisation coefficients and the quenched approximation.
1. Discretisation Errors : The Wilson action has discretisation errors of $`𝒪(a)`$, so for a reliable result one needs fine lattices and a continuum extrapolation, $`a0`$. See Figure 1 for the CP-PACS collaboration’s quenched Wilson results . The Sheikholeslami-Wohlert (SW) clover action includes a term $`c_{SW}\overline{\mathrm{\Psi }}\sigma _{\mu \nu }F_{\mu \nu }\mathrm{\Psi }`$ and discretisation errors start at $`𝒪(\alpha _sa)`$, when $`c_{SW}`$ is determined perturbatively. The remaining $`a`$-dependence must be removed by continuum extrapolation, but the slope of the extrapolation is milder . A nonperturbative determination of $`c_{SW}`$ gives an $`𝒪(a)`$-improved action, which should futher reduce the lattice spacing dependence. Recent results from the APE, ALPHA/UKQCD and QCDSF collaborations use this approach . The latter two groups include continuum extrapolations and find significant $`a`$-dependence ($`15\%`$ between the finest lattice and $`a=0`$ as found by ALPHA/UKQCD). In the case of the more commonly used VWI approach the slope of the extrapolation in $`a`$ is positive and therefore, $`m_s`$ at finite lattice spacing is too high, even with improvement.
The staggered fermion action is $`𝒪(a)`$-improved so the lattice spacing dependence should be mild.
2. Renormalisation coefficients : $`Z_S`$ and $`Z_{(A,P)}`$ can be determined perturbatively or nonperturbatively. A nonperturbative calculation is preferable as it removes any perturbative ambiguity. This was pioneered by the APE and ALPHA groups .
For Wilson fermions perturbative corrections are smaller and therefore more reliable in the VWI approach (i.e. for $`Z_S`$) than in the AWI approach. In Ref. the difference between nonperturbative results and boosted perturbation theory is $`10\%`$ for $`Z_S`$ and $`30\%`$ for $`Z_P`$ at $`a^{-1}\approx 2.6`$ GeV. For staggered fermions the perturbative coefficients are large and positive so the results are unreliable and nonperturbative renormalisation is essential. The perturbative staggered results are therefore too low and this effect combined with the too high values of $`m_s`$ from Wilson results at finite lattice spacing explain much of the spread in lattice results.
3. Quenching : Most calculations are done in the quenched approximation - neglecting internal quark loops - as a computational expedient. An estimate of this approximation, based on phenomenological arguments, was made in . The authors estimated that unquenching lowers $`m_s`$ by 20–40%. They also argued that $`M_K`$ rather than $`M_\varphi `$ is a better choice of input parameter since it is less sensitive to quenching. Unquenched calculations by CP-PACS have shown that these estimates were of the correct size and sign .
A number of clear trends are therefore identified:
* There is significant $`a`$-dependence in the Wilson action results which raises $`m_s`$ at finite lattice spacing. Although this is milder for the improved actions it is still present, as pointed out in Refs. .
* Using perturbative improvement, the VWI and AWI methods differ at finite lattice spacing but agree after continuum extrapolation. This indicates the methods have discretisation errors larger than the perturbative uncertainty. Nonperturbative renormalisation has a larger effect on AWI results, bringing them into agreement with VWI results at finite lattice spacing. However, discretisation errors remain a significant uncertainty and without a continuum extrapolation lead to an overestimate of $`m_s`$.
* Perturbative renormalisation of staggered fermions results in an underestimate of $`m_s`$. Nonperturbative renormalisation is essential.
* A lower value of $`m_s`$ is expected from an unquenched calculation.
## 4 Recent results for $`m_s`$
The systematic uncertainties in the lattice determination of $`m_s`$ are now well understood. Some recent results which I believe provide a definitive value of $`m_s`$ in quenched QCD and an unquenched result are now discussed.
### 4.1 Quenched results
Table 1 compares a number of recent calculations of $`m_s`$. The JLQCD, ALPHA/UKQCD and QCDSF groups have removed all uncertainties within the quenched approximation. JLQCD use staggered fermions and nonperturbative renormalisation . They observe mild $`a`$-dependence, as expected and take the continuum limit. The effect of nonperturbative renormalisation is considerable, again as expected: $`+18\%`$ when compared to the perturbative result.
The ALPHA/UKQCD collaborations and QCDSF use a nonperturbatively improved SW action and renormalisation and include a continuum extrapolation. This explains the difference between their results and that of the APE group (which has not been extrapolated to $`a=0`$). Interestingly, ALPHA/UKQCD, QCDSF and the Fermilab and LANL results for $`m_s`$ are in agreement. The difference in analyses is nonperturbative versus perturbative renormalisation, indicating that the perturbative result for the VWI method is reliable (for Wilson fermions).
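In its simplest form the continuum extrapolation these groups perform is a linear fit in the lattice spacing; a toy version with invented numbers (not the published data) looks as follows:

```python
import numpy as np

# Invented m_s values (MeV) at three lattice spacings a (fm), with the positive
# slope described in the text for the VWI method.
a   = np.array([0.10, 0.07, 0.05])
m_s = np.array([118.0, 112.0, 108.0])

slope, m_s_continuum = np.polyfit(a, m_s, 1)   # linear extrapolation to a = 0
print(f"extrapolated m_s(a=0) = {m_s_continuum:.0f} MeV "
      f"(vs {m_s[-1]:.0f} MeV at the finest lattice)")
```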
### 4.2 An unquenched result
There are a number of new preliminary unquenched calculations of $`m_s`$; however, CP-PACS have recently completed their analysis , shown in Figure 1, so I will concentrate on this. Since unquenching requires a huge increase in computing time it is prudent to use coarser (less time consuming) lattice spacings. This in turn requires improved actions to control the discretisation effects. CP-PACS use a perturbatively improved quark and gluon action and extrapolate to the continuum limit. The perturbative renormalisation is reliable with a remaining perturbative error of $`𝒪(\alpha _s^2)`$, for Wilson fermions. The final result is $`m_s^{\overline{MS}}(2\text{GeV})=84(7)\text{MeV}`$.
Although this result disagrees with bounds derived from the positivity of the spectral function it remains unclear at what scale, $`\mu `$, perturbative QCD and thus the bound itself becomes reliable. CP-PACS conclude that unquenching lowers $`m_s`$ \- compare the filled and open symbols in Figure 1. As in the quenched case the VWI and AWI methods differ at finite lattice spacing but extrapolate to the same result - compare the $``$ and $`\mathrm{}`$ symbols. Finally, the strange quark mass obtained from the K and $`\varphi `$ mesons yields consistent continuum values in full QCD: 84(7) MeV and 87(11) MeV respectively.
## 5 Conclusions
There has been much progress this year in lattice calculations of $`m_s`$. Current computing power and theoretical understanding are sufficient to determine $`m_s`$ to great precision. A calculation removing all uncertainties would include unquenched simulations, a continuum extrapolation and nonperturbative renormalisation and can be done in the short term. Simulations at $`n_f=2`$ and $`4`$ with an interpolation to $`n_f=3`$ are also a possibility.
Finally, I look at the implications for $`ϵ^{}/ϵ`$ from current theoretical calculations given the recent lattice calculations of $`m_s`$. The dependence is shown in Figure 2 from the analytic expression
$$ϵ^{}/ϵ=\text{Im}\lambda _t\left[c_0+\left(c_6ℬ_6^{(1/2)}+c_8ℬ_8^{(3/2)}\right)\left(\frac{M_K}{m_s(m_c)+m_d(m_c)}\right)^2\right]$$
(2)
and input from lattice calculations for $`ℬ_i`$ . The values of other SM parameters are from Ref. .
The lines represent the effect of varying the bag parameters and/or the Wilson coefficients and the band is the unquenched $`m_s`$ from CP-PACS, run to $`m_c`$. Further reducing the uncertainty on $`m_s`$ is more straightforward than for the $`ℬ_i`$ and can constrain theoretical calculations of $`ϵ^{}/ϵ`$. Clearly the lower values of $`m_s`$ give higher $`ϵ^{}/ϵ`$ values, in better agreement with experiment!
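Whatever values are adopted for the Wilson coefficients and bag parameters, the kaon factor in equation (2) scales as $`1/m_s^2`$; the sketch below simply evaluates that factor (with an assumed, purely illustrative $`m_d(m_c)=6`$ MeV) to show the size of the enhancement when $`m_s(m_c)`$ drops from 130 to 85 MeV:

```python
M_K = 497.7   # kaon mass in MeV
m_d = 6.0     # m_d(m_c) in MeV -- an assumed, illustrative value

def kaon_factor(m_s):
    """(M_K / (m_s + m_d))^2, the m_s-dependent factor in eq. (2)."""
    return (M_K / (m_s + m_d))**2

for m_s in (130.0, 110.0, 85.0):
    print(f"m_s(m_c) = {m_s:5.1f} MeV  ->  factor = {kaon_factor(m_s):5.1f}")

print("enhancement going from 130 to 85 MeV:",
      kaon_factor(85.0) / kaon_factor(130.0))
```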
no-problem/9908/hep-ph9908525.html
ar5iv
text
# On Neutrino-Mixing-Generated Lepton Asymmetry and the Primordial Helium-4 Abundance
M. V. Chizhov<sup>1</sup><sup>1</sup>1Permanent address: Centre for Space Research and Technologies, Faculty of Physics,
University of Sofia, 1164 Sofia, Bulgaria and D. P. Kirilova<sup>2</sup><sup>2</sup>2Permanent address: Institute of Astronomy, Bulgarian Academy of Sciences,
blvd. Tsarigradsko Shosse 72, 1784 Sofia, Bulgaria
The Abdus Salam International Centre for Theoretical Physics,
Strada Costiera 11, 34014 Trieste, Italy
## Abstract
In this article we discuss the effect of lepton asymmetry on BBN with neutrino oscillations. We argue that an asymmetry much smaller than 0.01, although not big enough to influence the nucleosynthesis kinetics directly, can considerably affect BBN indirectly via neutrino oscillations. Namely, it distorts the neutrino spectrum and changes the neutrino density evolution and the pattern of oscillations (either suppressing or enhancing them), which in turn affects the primordial synthesis of elements. We show that the results of the paper X. Shi et al., Phys. Rev. D 60, 063002 (1999), based on the assumption that only $`L>0.01`$ will influence helium-4 production, are not valid. Instead, the precise constraints on neutrino mixing parameters from BBN are presented.
There exists an interesting interplay between lepton asymmetry and neutrino oscillations in the early Universe. As was noticed in , neutrino oscillations can generate lepton asymmetry, besides their well known ability to erase it . On the other hand, lepton asymmetry (whether neutrino-mixing generated or pre-existing) can suppress neutrino oscillations and also has the remarkable ability to enhance them . Consequently, in the presence of neutrino oscillations, lepton asymmetry exerts a much more complex influence on Big Bang Nucleosynthesis (BBN) via oscillations than in the simple case without oscillations.
In this work we will discuss the indirect effect of lepton asymmetry on primordial nucleosynthesis via neutrino oscillations. This paper is provoked by the publication “Neutrino-Mixing-Generated Lepton Asymmetry and the Primordial He-4 Abundance” by X. Shi, G. Fuller, and K. Abazajian, published in Phys. Rev. D 60, 063002 (1999) ref. (hereafter SFA). As we understood from their paper and some other recent publications , there exists a rather shallow understanding of the role of the lepton asymmetry in BBN with oscillations, and we would like first of all to clarify this subject.
In SFA the study of the lepton asymmetry effect on BBN is based on the assumption that only an asymmetry bigger than 0.01 at the freeze-out of the $`n\leftrightarrow p`$ transitions may have an appreciable impact on the primordial abundance of helium-4 $`Y_p`$. Hence, the authors estimate the effect of the asymmetry on BBN only after it has been enhanced up to 0.01. Certainly such a consideration is valid for the simple case of nucleosynthesis without oscillations! There are exhaustive studies on that subject , whose results, concerning the effect of neutrino degeneracy on nucleosynthesis, the authors of SFA reproduce in general.
However, in the case of nucleosynthesis with oscillations the assumption that only an asymmetry bigger than 0.01 affects nucleosynthesis is no longer valid. It was first noticed in the original works , that in the case of BBN with neutrino oscillations even very small lepton asymmetries $`L<<0.01`$ (either initially present , or dynamically ‘neutrino-mixing’ generated ), although not big enough to influence nucleosynthesis directly, may considerably affect BBN indirectly through oscillations. In these works a very precise account of the evolution of the neutrino and antineutrino distribution functions and their spectral distortions, and of the evolution of the asymmetry, was provided in the BBN calculations.<sup>3</sup><sup>3</sup>3We are really sorry that the authors of SFA had to rediscover the importance of this account, but we can agree neither that they were the first to provide the account, nor that they provided this account accurately.
In the present work we calculate the net effect of small lepton asymmetries $`L<<0.01`$ on BBN and obtain precise cosmological constraints on neutrino mixing parameters.
In the presence of oscillations, lepton asymmetry affects BBN indirectly through its feedback effect on:
(1) the evolution of the neutrino and antineutrino number densities , which play an essential role in the kinetics of nucleons at $`n/p`$-freeze-out;
(2) the neutrino and antineutrino spectrum distortion , which is important for the correct calculation of the neutrino number densities and weak interaction rates in $`n\leftrightarrow p`$ transitions (see the following eq. (2));
(3) the neutrino oscillation pattern. Namely, $`L`$ may suppress or enhance oscillations, leading, correspondingly, to underproduction or overproduction of primordial helium-4 in comparison with the case where the asymmetry is not taken into account (see the sketch below). The suppression may be strong enough to allow substantial alleviation of the nucleosynthesis bounds on the neutrino mixing parameters. The effect on BBN of a suppression due to a relic neutrino asymmetry was discussed first in and calculated in detail with the account of (1) and (2) in , while the suppression due to a neutrino-mixing generated asymmetry and its effect on BBN was first calculated in . It was recently shown also that lepton asymmetry is capable of enhancing the oscillations and thus strengthening the BBN bounds on the neutrino oscillation parameters.
These three effects are typical for the case of BBN with oscillations.
(4) In case $`L`$ is, or grows, big enough ($`>0.01`$), it can also directly influence the kinetics of the $`n\leftrightarrow p`$ transitions, depending on the sign of $`L`$.
It is essential that in the presence of oscillations lepton asymmetry has a more complex influence on BBN, (1)-(4), than in the simple case without oscillations. The correct study should follow selfconsistently the evolution of the neutrino ensembles, the evolution of the lepton asymmetry, as well as the evolution of the neutron and proton number densities, so that the complete effect of the asymmetry throughout its evolution (growth or damping) during the nucleosynthesis epoch can be registered. Such an exact study was provided for small neutrino mass differences $`\delta m^2\le 10^{-7}`$ eV<sup>2</sup> for the resonant case in and in the nonresonant case in .
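To illustrate point (3) qualitatively, the sketch below evaluates the standard two-flavour effective mixing angle in a medium with a matter potential $`V`$ of the kind generated by a lepton asymmetry; the numbers are placeholders chosen only to show both the resonant enhancement and the suppression regimes, not values from the present calculation, which requires the full density-matrix evolution of eqs. (1)-(2).

```python
import numpy as np

def sin2_2theta_matter(sin2_2theta_vac, V_over_delta):
    """Effective two-flavour mixing in a medium.
    V_over_delta = 2 E V / delta m^2, with V the asymmetry-driven potential."""
    s2 = sin2_2theta_vac
    c2 = np.sqrt(1.0 - s2)                  # cos(2 theta) for a first-octant angle
    return s2 / (s2 + (c2 - V_over_delta)**2)

sin2_2theta_vac = 0.01                       # illustrative small vacuum mixing
for x in (0.0, 0.5, 0.995, 5.0, 50.0):       # 2EV/delta m^2
    print(f"2EV/dm^2 = {x:6.3f}  ->  sin^2(2theta_m) = "
          f"{sin2_2theta_matter(sin2_2theta_vac, x):.5f}")
```

Near $`2EV/\delta m^2=\mathrm{cos}2\theta `$ the mixing is resonantly enhanced, while for a large potential it is strongly suppressed.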
In what follows we present the results of a precise investigation of the asymmetry effect on BBN via neutrino oscillations and provide a comparison with an artificial case without the account of asymmetry in order to extract the net effect of the asymmetry on BBN. Finally, we obtain accurate cosmological constraints on the oscillation parameters.
We discuss the case of active-sterile neutrino oscillations assuming mixing present just in the electron sector $`\nu _i=U_{il}\nu _l`$ ($`l=e,s`$), following the line of work in ref. . The set of kinetic equations describing simultaneously the evolution of the neutrino and antineutrino density matrix $`\rho `$ and $`\overline{\rho }`$ and the evolution of the neutron number density $`n_n`$ in momentum space reads:
$`{\displaystyle \frac{\partial \rho (t)}{\partial t}}=Hp_\nu {\displaystyle \frac{\partial \rho (t)}{\partial p_\nu }}+`$
$`+i[\mathcal{H}_o,\rho (t)]+i\sqrt{2}G_F\left(\pm \mathcal{L}-Q/M_W^2\right)N_\gamma [\alpha ,\rho (t)]+\mathrm{O}\left(G_F^2\right),`$ (1)
$`\left(\partial n_n/\partial t\right)=Hp_n\left(\partial n_n/\partial p_n\right)+`$
$`+{\displaystyle \int d\mathrm{\Omega }(e^{-},p,\nu )|𝒜(e^{-}p\to \nu n)|^2\left[n_{e^{-}}n_p(1-\rho _{LL})-n_n\rho _{LL}(1-n_{e^{-}})\right]}`$
$`-{\displaystyle \int d\mathrm{\Omega }(e^+,p,\stackrel{~}{\nu })|𝒜(e^+n\to p\stackrel{~}{\nu })|^2\left[n_{e^+}n_n(1-\overline{\rho }_{LL})-n_p\overline{\rho }_{LL}(1-n_{e^+})\right]}.`$ (2)
where $`\alpha _{ij}=U_{ie}^{*}U_{je}`$, $`p_\nu `$ is the momentum of the electron neutrino, $`n`$ stands for the number density of the interacting particles, $`\mathrm{d}\mathrm{\Omega }(i,j,k)`$ is a phase space factor and $`𝒜`$ is the amplitude of the corresponding process. The plus sign in front of $`\mathcal{L}`$ corresponds to the neutrino ensemble, the minus sign to the antineutrino ensemble. In practice we solve nine equations selfconsistently: four equations for the components of the neutrino density matrix and another four for the antineutrino density matrix, following from eq. (1), and one for the neutron number density, eq. (2).
The first term on the right-hand side of equations (1) and (2) describes the effect of the expansion of the Universe. The second term in (1) is responsible for the neutrino oscillations, the third accounts for forward neutrino scattering off the medium, and the last one accounts for second order interaction effects of the neutrinos with the medium. $`\mathcal{H}_o`$ is the free neutrino Hamiltonian. $`\mathcal{L}`$ is proportional to the fermion asymmetry of the plasma and is essentially expressed through the neutrino asymmetries, $`\mathcal{L}\sim 2L_{\nu _e}+L_{\nu _\mu }+L_{\nu _\tau }`$, where $`L_{\mu ,\tau }\sim (N_{\mu ,\tau }-N_{\overline{\mu },\overline{\tau }})/N_\gamma `$ and $`L_{\nu _e}\sim \int \mathrm{d}^3p(\rho _{LL}-\overline{\rho }_{LL})/N_\gamma `$. The ‘nonlocal’ term $`Q`$ arises as a $`W/Z`$ propagator effect, $`Q\sim E_\nu T`$. For the nonequilibrium active-sterile neutrino oscillations it is important to provide a simultaneous account of the different competing processes, namely neutrino oscillations, Hubble expansion and the weak interaction processes.
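To make the feedback of the asymmetry term concrete, the short Python sketch below evaluates $`L_{\nu _e}`$ from the diagonal density-matrix elements on a discrete momentum grid, following the definition above. It is our own illustration only: the grid extent and the toy spectral distortion are assumptions, and the real $`\rho _{LL}`$, $`\overline{\rho }_{LL}`$ come from solving eqs. (1)-(2).

```python
import numpy as np

# Toy momentum grid in units of y = p/T (the 1000-bin resolution quoted later
# in the text; the grid extent is an assumption made for this illustration).
y = np.linspace(1e-3, 10.0, 1000)

def fermi_dirac(y):
    """Equilibrium occupation for massless neutrinos with zero chemical potential."""
    return 1.0 / (np.exp(y) + 1.0)

# Illustrative diagonal density-matrix elements rho_LL(y) and rhobar_LL(y):
# a small ad hoc spectral distortion is imposed by hand purely to exercise the
# integral; in the real calculation they follow from eq. (1).
rho_LL    = fermi_dirac(y) * (1.0 + 1e-5 * y)
rhobar_LL = fermi_dirac(y) * (1.0 - 1e-5 * y)

def number_density(f):
    """n ~ (1/2 pi^2) * integral y^2 f(y) dy, in units of T^3."""
    return np.trapz(y**2 * f, y) / (2.0 * np.pi**2)

n_nu, n_nubar = number_density(rho_LL), number_density(rhobar_LL)
n_gamma = 2.0 * 1.2020569 / np.pi**2      # photon density 2*zeta(3)/pi^2, in T^3 units

L_nue = (n_nu - n_nubar) / n_gamma        # feeds back into the +/- L term of eq. (1)
print(f"L_nu_e = {L_nue:.3e}")
```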
The neutrino and antineutrino ensembles evolve differently, since the background is not $`CP`$ symmetric. Besides, the evolution of the neutrino and antineutrino ensembles may become strongly coupled due to the growing electron asymmetry term, and hence the evolution of $`\rho `$ and $`\overline{\rho }`$ must be considered simultaneously.
Moreover, for a correct account of the role of the asymmetry in BBN it is extremely important to study the evolution of the asymmetry and of the neutron number density in momentum ($`p`$) space selfconsistently with the evolution of the neutrino and antineutrino ensembles involved in the oscillations. This is obvious, since there is an asymmetry–oscillations interplay – oscillations change the neutrino–antineutrino asymmetry, which in turn affects the oscillations – and, besides, the neutrino $`\rho _{LL}`$ and antineutrino $`\overline{\rho }_{LL}`$ number densities enter the kinetic equations for the nucleons. In many papers, however, the growth of the asymmetry is first calculated, and only then, when it has reached values around $`0.01`$, is its influence on the BBN kinetics estimated. In this way the influence (1)-(3) of the asymmetry on BBN during its growth up to $`0.01`$ cannot be caught. We will demonstrate in this work that this very influence may give up to a $`10\%`$ relative change in primordial helium-4. Therefore, the indirect influence of the lepton asymmetry on BBN should be carefully accounted for during the asymmetry's full evolution.
It is also essential that the equations follow the neutrino evolution in momentum space, i.e. that they allow a precise account of the distortion of the neutrino spectrum due to oscillations and asymmetry. This approach was demonstrated in detail for the case of small mass differences, and it helped to sharpen the constraints on the neutrino squared mass differences $`\delta m^2`$ by almost an order of magnitude in comparison with previous studies (see fig. 8 from ). Working with mean energies and an equilibrium spectrum is of course tempting because of the simplicity of the analysis, but it is not correct. We have stressed in our previous works the importance of a proper account of the spectrum distortion and the asymmetry for BBN with active-sterile oscillations, and we have provided this account in . Besides, many papers have discussed separately the questions of the dynamical evolution of the asymmetry (see for example ) or of the correct account of the spectrum distortion for nonequilibrium neutrinos .<sup>4</sup><sup>4</sup>4Therefore, it is quite surprising to see the statement published in 1999 in (SFA) that the literature contains only “BBN calculations based on a constant asymmetry and a thermal neutrino spectrum”, which are “overly simplistic” and give “inaccurate results”. It is easy to judge that this same paper (SFA) may itself be considered overly simplistic by comparing its “semianalytically” calculated distorted spectrum distributions with the precisely calculated spectra presented in .
It is not an easy task to solve the system of eqs. (1)-(2) exactly. In particular, in the case of rapid asymmetry growth more than 1000 bins may be required for an accurate description of the neutrino spectrum; nevertheless, this is the correct way to study the problem. We have described the spectrum using in general 1000 bins, and for the resonant case sometimes up to 5000 bins. The equations were integrated over the characteristic period from electron neutrino decoupling until the $`n/p`$ freeze-out at 0.3 MeV. We have calculated the primordially produced helium-4 with neutrino oscillations for the full range of the model's parameter values, namely for $`\mathrm{sin}^2(2\theta )`$ ranging from $`10^{-3}`$ to maximal mixing and $`\delta m^2\le 10^{-7}`$ $`eV^2`$. For smaller mixing parameters the effect on helium-4 is negligible . The exact feedback effect of the asymmetry on the evolution of the neutrino ensembles, the neutrino spectrum distortion and the neutrino oscillations was followed numerically. Hence, the total effect of the asymmetry on BBN, indirect via its interplay with the oscillations and direct on the kinetics of the $`np`$ transitions, was obtained numerically.
In fig. 1 the change in helium-4 due to oscillations and asymmetry is presented as a function of the neutrino squared mass differences. For comparison, the curve corresponding to the artificial case in which the asymmetry is neglected is also shown. The difference between the two curves measures the net asymmetry effect on BBN with oscillations. It is obvious that, for the range of oscillation parameters discussed, the total effect of the asymmetry is a reduction in $`Y_p`$ in comparison with the case where the asymmetry is neglected. This reduction can be as large as $`10\%`$, which is considerable in view of our present knowledge from primordial helium measurements . As is evident from the figure, small $`\delta m^2`$ are also constrained from BBN considerations. The obtained constraints on $`\delta m^2`$ are several orders of magnitude more severe than the constraints obtained in SFA (see fig.4 there<sup>5</sup><sup>5</sup>5or the same figure reproduced in another publication of the same authors, namely fig.3 in the first reference in ) on the basis of the kinetic effect of the asymmetry alone.
In fig. 2 we present a comparison of the iso-helium-4 contours, $`Y_p=0.245`$, for the resonant case, obtained without the account of the asymmetry, with the contours obtained with the account of the asymmetry. The area to the left of the curves is the allowed region of the oscillation parameters.
The numerical analysis showed that, for the small mass differences we discuss and a naturally small initial asymmetry, the growth of the asymmetry is less than 4 orders of magnitude. Hence, starting from asymmetries of the order of the baryon one, the asymmetry does not grow enough to influence the $`np`$ transitions directly. Consequently, the apparently large asymmetry effect (as seen from the curves) is entirely due to the indirect effects (1-3) of the asymmetry on BBN. The maximal asymmetry effect is an ’underproduction’ of $`Y_p`$ of around $`10\%`$ in comparison with the case of BBN with oscillations but without the asymmetry taken into account. The total effect of the oscillations, with the complete account of the asymmetry effects, is still an overproduction of helium-4, although considerably smaller than in the calculations neglecting the asymmetry. Therefore, the nucleosynthesis constraints on the neutrino mixing parameters are alleviated considerably due to the asymmetry effect.
The case of nonresonant active-sterile oscillations was already discussed and investigated in detail in . It was shown that the effect of the asymmetry on BBN with oscillations is negligible if the asymmetry was initially of the order of the baryon one. However, if it was initially larger than $`10^{-7}`$, it may also have a crucial effect on BBN through its effect on the oscillations . In the latter work a complete exact numerical study of the asymmetry effect on BBN with oscillations was provided for a wide range of initial asymmetry values ($`10^{-10}`$–$`10^{-2}`$). In fig. 4 of the original paper the iso-helium contours for the case with a pre-existing asymmetry ($`L=10^{-6}`$) and for the case without the asymmetry effect (dashed curves) were presented. It is obvious that in the discussed nonresonant case the strong asymmetry effect is again due to its indirect influence on nucleosynthesis. However, the situation is not so straightforward: for the nonresonant case the account of the asymmetry translates into alleviated BBN constraints on the mixing parameters at large $`\theta `$, due to suppression of the oscillations, but strengthened constraints at small $`\theta `$, due to enhancement of the oscillations. For more details see the original paper .
We would also like to stress that in the nonresonant case of small mass difference oscillations, due to the complex interplay between oscillations and asymmetry, antineutrinos and neutrinos undergo resonance almost simultaneously. This is easy to understand, since the asymmetry has a rapidly oscillating, sign-changing behavior, due to which both the neutrino and the antineutrino ensembles are able to experience resonance. Consequently, the effect on helium-4 does not depend on the initial sign of $`L`$, provided the asymmetry is small enough not to have a direct kinetic effect on the $`np`$ transitions, i.e. $`L<<10^{-2}`$ (in contrast to the case of direct $`L`$ influence, where the sign of $`L`$ is important, since one sign leads to overproduction and the other to underproduction of helium-4 ).
Last but not least, we are amused that the authors of SFA, after frankly declaring that they do not know whether the evolution of the lepton asymmetry represents true chaos or not, continue to work with this unclear understanding and, moreover, continue to exploit it to construct models and constraints , before clarifying the situation with the “chaotic” behavior of $`L`$.
In this work we have shown that a lepton asymmetry orders of magnitude smaller than $`0.01`$, although not big enough to influence nucleosynthesis directly, can considerably affect nucleosynthesis indirectly via oscillations, changing the pattern of the neutrino oscillations, the evolution of the neutrino densities and the neutrino spectrum. In the resonant case we have obtained precise cosmological constraints on the neutrino oscillation parameters $`\delta m^2`$ and $`\theta `$, accounting for the dynamical evolution of the neutrino asymmetry, its interplay with the oscillations and its effect on the primordial production of helium-4. <sup>6</sup><sup>6</sup>6 The constraints for the nonresonant case were obtained in our previous work .
The constraints and conclusions of previous works concerning the asymmetry effect on BBN with oscillations will change considerably when a proper selfconsistent account is provided, using the kinetic equations in momentum space, of (a) the complete effect of the asymmetry (1)-(4) during its whole evolution in the nucleosynthesis epoch; (b) the neutrino spectrum distortion; and (c) the exact kinetics of the nucleons. The role of the mixing-generated neutrino asymmetry in BBN is considerable and should be accounted for precisely.
We are grateful to ICTP, Trieste, for financial support and hospitality during the preparation of this work. D.K. is grateful to Prof. Sciama for the opportunity to participate in the astrophysics programme this summer and for useful discussions.
Figure captions
Figure 1. The relative change in the primordial yield of helium-4 as a function of the neutrino squared mass differences in case of BBN with oscillations for $`\mathrm{sin}^2(2\theta )=0.05`$. The solid curve shows the complete effect of oscillations with the account of the asymmetry. The dashed curve shows solely the effect of oscillations neglecting the asymmetry.
Figure 2. On the $`\delta m^2`$–$`\theta `$ plane the iso-helium-4 contour $`Y_p=0.245`$, calculated in the discussed model of BBN with active-sterile neutrino oscillations and the account of the complete asymmetry effect, is shown. The dashed curve presents a comparison with the same case, but without the asymmetry account. The area to the left of the curves is the allowed region of the oscillation parameters.
# Computational Geometry Column 36
## 1 One Cut Suffices
The first result is remarkable in its generality:
###### Theorem 1
Any planar straight-line drawing may be cut out of one sheet of paper by a single straight cut, after appropriate folding \[DDL98, DDL99\].
The drawing need not be connected; it may include adjoining polygons, nested polygons, floating line segments, and isolated points. The algorithm of Demaine, Demaine, and Lubiw computes a crease pattern whose folding produces a flat origami that aligns all edges of the drawing on the same line $`L`$. Removal of $`L`$ from the paper “cuts out” the drawing.<sup>1</sup><sup>1</sup>1 It is possible that $`L`$ will lie along folds, e.g., when the drawing consists of a single line segment.
We illustrate the results of their algorithm applied to a polygon in the shape of the letter H in Fig. 1(a1). Fold the sheet first in half along the horizontal bisector of the H (a1), and then in half again along the vertical bisector (a2). Now many of the polygon’s edges lie on top of one another. Next, fold along the diagonal bisector of the right angle illustrated (a3) to align the adjacent edges. Continuing in this manner, after five folds, all edges lie on the same vertical line, and cutting along the arrow shown in Fig. 1(a6) removes the H.
The use of bisectors is a natural technique for overlapping the edges incident to a vertex, and suggests that the medial axis or Voronoi diagram may play a role. In fact the appropriate concept here is the straight skeleton \[AA96\]. For a polygon, this skeleton is defined by the tracks vertices follow when the shape shrinks via inward, parallel movement of the edges. For the H-polygon, the skeleton is particularly simple, but more complex shapes lead to shrinking “events” which disconnect the shape; then each is shrunk recursively. The skeleton may be defined for general straight-line plane graphs, developing both interior and exterior to faces. Fig. 1(b) shows the complete skeleton for the H-shape.
For this simple shape, all creases lie on lines containing skeleton edges. More complex shapes, for example the butterfly in Fig. 2, require in addition perpendiculars incident to skeleton vertices, which (perhaps) recursively generate more perpendiculars based on other cut edges. This recursive phenomenon means that the number of creases is unbounded in terms of the number of vertices or minimum “feature” size of the drawing. This flaw has subsequently been circumvented by an algorithm based on disk-packing \[BDEH98\].
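As a small self-contained illustration of the bisector idea (this is not the straight-skeleton algorithm of \[AA96\] nor the disk-packing method of \[BDEH98\]; the example polygon is an assumption), the sketch below computes, at each vertex of a simple polygon, a unit vector along the bisector of the angle between the two incident edges — the local fold direction that brings those edges into coincidence.

```python
import numpy as np

def bisector_directions(poly):
    """For each vertex of a simple polygon (a list of 2D points), return a unit
    vector bisecting the angle between the two incident edges. Folding along
    the line through the vertex in this direction maps one edge onto the other."""
    pts = np.asarray(poly, dtype=float)
    n = len(pts)
    dirs = []
    for i in range(n):
        prev, cur, nxt = pts[i - 1], pts[i], pts[(i + 1) % n]
        u = (prev - cur) / np.linalg.norm(prev - cur)   # unit vector toward previous vertex
        v = (nxt - cur) / np.linalg.norm(nxt - cur)     # unit vector toward next vertex
        b = u + v
        if np.linalg.norm(b) < 1e-12:                   # straight-angle vertex: bisector
            b = np.array([-u[1], u[0]])                 # is perpendicular to the edge
        dirs.append(b / np.linalg.norm(b))
    return dirs

# A toy L-shaped polygon standing in for the H example of Fig. 1.
L_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
for p, d in zip(L_shape, bisector_directions(L_shape)):
    print(p, "bisector:", np.round(d, 3))
```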
## 2 Wrapping Polyhedra
Akiyama posed the question of whether any (perhaps nonconvex) polyhedron may be “wrapped” by a single piece of paper \[Aki97\]. Portions of paper may be hidden by folding and tucking under, but the final result should exactly cover the faces of the polyhedron without requiring the paper to be cut. His question was answered and extended with this theorem:
###### Theorem 2
Any polyhedron may be wrapped with a sufficiently large square sheet of paper. This implies that any connected, planar, polygonal region may be covered by a flat origami folded from a single square of paper. Moreover, any $`2`$-coloring of the faces may be realized with paper whose two sides are those colors \[DDM99\].
Demaine, Demaine, and Mitchell provide three distinct algorithms for achieving such a folding, each with different properties and tradeoffs among desirable quantities. The first is based on Hamiltonian triangulations, the second on straight skeletons, and the third on convex decompositions. I will illustrate the general idea by folding a silhouette for a polygon in the shape of the letter I.
All three methods share the same first step: accordion-fold the paper into a strip; see Fig. 3(a). The methods differ in how this strip is used to cover the faces. The convex decomposition method starts with a partition of the faces into convex pieces, and then covers each face in the order determined by a traversal of a spanning tree of the partition dual. An optimized version of their algorithm could achieve the simple covering shown in Fig. 3(b). In this example no particular coloring was sought, but one can see there is freedom in the choice between mountain and valley folds, freedom which ultimately can be exploited to achieve any given $`2`$-coloring.
Although one method in \[DDM99\], the “zig-zag milling” method that follows a Hamiltonian triangulation, hides an arbitrarily small fraction of the strip’s area, the accordion-fold step wastes much of the original square paper. It remains open to achieve the same universality with a more efficient wrapping.
# Multi-Phonon 𝛾-Vibrational Bands and the Triaxial Projected Shell Model
## Abstract
We present a fully quantum-mechanical, microscopic, unified treatment of ground-state band and multi-phonon $`\gamma `$-vibrational bands using shell model diagonalization with the triaxial projected shell model. The results agree very well with data on the g- and $`\gamma `$-band spectra in <sup>156-170</sup>Er, as well as with recently measured $`4^+`$ 2-phonon $`\gamma `$-bandhead energies in <sup>166</sup>Er and <sup>168</sup>Er. Multi-phonon $`\gamma `$-excitation energies are predicted.
The atomic nucleus is a many-body system with pronounced shell effects that can have intrinsic deformation. In addition, it can, according to the semi-classical collective model, undergo dynamical oscillations around the equilibrium shape, resulting in various low-lying collective excitations. Ellipsoidal oscillation of the shape is commonly termed a $`\gamma `$-vibration .
Thanks to advances in high-resolution $`\gamma `$-ray detectors, high quality measurements not only of high-spin states but also of low-spin states are now commonly available. As a consequence long-sought multi-phonon $`\gamma `$-vibrational states have been discovered in a series of experiments over the last decade . However, the status of unified theoretical descriptions for ground-state band (g-band) and multi-phonon $`\gamma `$-vibrational bands ($`\gamma `$-band) is not so satisfactory. In the present work, we attempt a consistent description of these low-lying bands using an approach based on the Projected Shell Model (PSM) .
In its original form, the PSM uses an axially deformed basis. The shell model diagonalization is carried out within the space spanned by the angular momentum projected quasiparticle (qp) vacuum, 2- and 4-qp states. In this sense, the PSM is a Tamm–Dancoff approach and one expects that the collectivity of low-lying states may be strongly affected by mixing many 2- and 4-qp states. Indeed, a multi-qp admixture can cause significant effects in band crossing regions .
However, in the low-spin region before any band crossings ($`I10`$), the admixture is very weak and the calculated g-band always exhibits the characteristics of an axially symmetric rotor. For example, the ordinary PSM fails to describe the steep increase of moment of inertia at low spins in transitional nuclei . Quite recently, the restriction to an axially deformed basis in the PSM was removed by two of the present authors (JAS and KH). It was shown that the observed steep increase of moment of inertia for transitional nuclei can be well described if one introduces triaxiality in the deformed basis and performs 3-dimensional angular momentum projection . This approach is called the Triaxial Projected Shell Model (TPSM).
Another important issue is whether the PSM can describe bands built on collective vibrational states. The usual treatment of the $`\gamma `$-band based on the Tamm–Dancoff or on the Random Phase Approximation assumes different coupling constants for the $`\mu =0`$ and $`\mu =\pm 2`$ parts of the QQ-force, with the former related to the mean-field deformation and the latter adjusted to the $`\gamma `$-bandhead energy. In the PSM, as in the ordinary shell model, such an adjustment is not permitted because the Hamiltonian must be rotation-invariant and thus these two coupling constants must be equal: One cannot simply fit the theoretical $`\gamma `$-bandhead to the experimental one by modifying the QQ-force in that manner.
On the other hand, one might hope that inclusion of many 2-qp states could introduce a collective contribution that would produce the desired low-lying $`\gamma `$-state. But such attempts have failed. Because of the large pairing gap, the energy of the lowest 2-qp state is above 1.5 MeV and is much higher than the actual $`\gamma `$-bandhead energy, which typically lies between 0.5 and 1 MeV in rare-earth nuclei. The QQ-force is too weak to lower the theoretical $`\gamma `$-band energy by such a large amount in a limited basis. Calculations including about one thousand 2- and 4-qp states do not lead to low-lying excited states that look like the experimental $`\gamma `$-band . One therefore has to conclude that it is not practical to describe the $`\gamma `$-vibrational state in terms of multi-qp states in the framework of the axial PSM .
In the present paper, the TPSM extension of the PSM and the computer code developed in are used to study multi-phonon $`\gamma `$-bands. (Although the present theory is not based on a vibrational phonon excitation mechanism as in other models , we shall use the conventional vibrational terminology in our discussion.) We shall show the following: (1) For well deformed nuclei, introduction of triaxiality in the basis does not destroy the good agreement for the g-bands obtained previously in the axial PSM calculations (for example, those presented in Ref. ). (2) However, it produces new excited states ($`\gamma `$-bands) at the correct energies that do not occur in the axial PSM. (3) For transitional nuclei, use of a basis of fixed triaxiality improves the g-band moments of inertia, as already shown in , and at the same time produces realistic $`\gamma `$-bands. (4) By a single diagonalization of the Hamiltonian (with the same parameters in the deformed basis), we obtain not only the g- and $`\gamma `$-band, but also higher excited bands that can be identified as the multi-phonon $`\gamma `$-bands and these compare very well with recently measured $`4^+`$ 2-phonon $`\gamma `$-bands. (5) Finally, we make predictions for the 2- and 3-phonon $`\gamma `$-band (referred to as $`2\gamma `$\- and $`3\gamma `$-band hereafter) of those Er isotopes treated here for which no measurement has yet been reported.
Since an extensive review of the PSM exists (see Ref. and references cited therein), we shall describe the model only briefly. The PSM (TPSM) closely follows the shell model philosophy and is, in fact, a shell model truncated in a deformed basis. One uses a Nilsson potential having axial (triaxial) deformation to generate the deformed single-particle states. The Nilsson spin–orbit force parameters $`\kappa `$ and $`\mu `$ are essential in reproducing correct shell fillings. For rare-earth nuclei, we use the early compilation of Nilsson et al without modification. For the axial deformation parameter $`ϵ`$ in the Nilsson model, we take the values given in Ref. . Thus, for the TPSM, the triaxial deformation $`ϵ^{}`$ is the single adjustable parameter. The static pairing correlations are treated by the usual BCS approximation to establish the Nilsson+BCS basis. The 3-dimensional angular momentum projection is then carried out on the Nilsson+BCS qp-states to obtain the many-body basis, and the Hamiltonian is diagonalized in this projected basis.
In the present work, we consider only low-spin states where no band crossing with any multi-qp band occurs in the yrast region. Thus, the many-body basis may be restricted to the projected triaxial qp vacuum state:
$$\left\{\widehat{P}_{MK}^I|\mathrm{\Phi }>,0\le K\le I\right\},$$
(1)
where $`|\mathrm{\Phi }>`$ represents the triaxial qp vacuum state. This is the simplest possible configuration space for an even–even nucleus. Note that only one state is possible for spin $`I=0`$ (the ground state). Thus, multi-qp components have to be taken into account if one wants to describe $`I=0`$ excited states (see further discussion below). The diagonalization is performed over a chain of Er isotopes up to spin $`I=10`$.
As in the usual PSM calculations, we use the Hamiltonian
$$\widehat{H}=\widehat{H}_0-\frac{1}{2}\chi \underset{\mu }{\sum }\widehat{Q}_\mu ^{\dagger }\widehat{Q}_\mu -G_M\widehat{P}^{\dagger }\widehat{P}-G_Q\underset{\mu }{\sum }\widehat{P}_\mu ^{\dagger }\widehat{P}_\mu ,$$
(2)
so that the corresponding Nilsson Hamiltonian (with triaxiality) is given by
$$\widehat{H}_N=\widehat{H}_0-\frac{2}{3}\mathrm{\hbar }\omega \left\{ϵ\widehat{Q}_0+ϵ^{}\frac{\widehat{Q}_{+2}+\widehat{Q}_{-2}}{\sqrt{2}}\right\}.$$
(3)
Here $`\widehat{H}_0`$ is the spherical single-particle Hamiltonian, which contains a proper spin–orbit force as mentioned before, while the interaction strengths are taken as follows. The QQ-force strength $`\chi `$ is adjusted such that the physical quadrupole deformation $`ϵ`$ is obtained as a result of the self-consistent mean-field (HFB) calculation . The monopole pairing strength $`G_M`$ is of the standard form $`G_M=\left[21.24\mp 13.86(N-Z)/A\right]/A`$, with “$`-`$” for neutrons and “$`+`$” for protons, which approximately reproduces the observed odd–even mass differences in this mass region. This choice of $`G_M`$ is appropriate for the single-particle space employed in the PSM, where three major shells are used for each type of nucleons ($`N=4,5,6`$ for neutrons and $`N=3,4,5`$ for protons). The quadrupole pairing strength $`G_Q`$ is assumed to be proportional to $`G_M`$, the proportionality constant being fixed as usual to be in the range 0.16 – 0.18. These interaction strengths are consistent with those used previously for the same mass region .
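As a quick numerical illustration of the monopole pairing strength just quoted (the choice of <sup>168</sup>Er below is merely an example we make for this sketch; the actual calculations use the formula exactly as stated):

```python
def g_m(N, Z, neutrons=True):
    """Monopole pairing strength G_M = [21.24 -/+ 13.86 (N-Z)/A] / A in MeV,
    with "-" for neutrons and "+" for protons, as quoted in the text."""
    A = N + Z
    sign = -1.0 if neutrons else +1.0
    return (21.24 + sign * 13.86 * (N - Z) / A) / A

N, Z = 100, 68   # 168Er, chosen here only as an example
print(f"G_M(neutrons) = {g_m(N, Z, True):.3f} MeV")   # ~0.111 MeV
print(f"G_M(protons)  = {g_m(N, Z, False):.3f} MeV")  # ~0.142 MeV
```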
Let us first consider a well-deformed nucleus <sup>168</sup>Er, which is generally considered to be axially symmetric. In fact, previous (axial) PSM calculation for this nucleus gave an excellent description of the yrast band up to a very high spin . Fig. 1a shows the calculated energies as functions of the triaxiality parameter $`ϵ^{}`$ for angular momenta up to $`I=10`$. In addition to the usual g-band with spins $`I=0,2,4,\mathrm{}`$, a new set of rotational states with spins $`I=2,3,4,\mathrm{}`$ appears. This figure looks similar to the one shown by Davydov and Filippov , but now obtained in terms of a fully microscopic theory. Unlike the irrotational flow model, the PSM spectrum depends not only on the deformation parameters but also on the shell filling of the nucleus in question. We see that, for the g-band of <sup>168</sup>Er, the energies as functions of triaxiality are nearly flat and their values remain close to those at zero triaxiality. Thus, the triaxial basis has no significant effect on the g-band for a well-deformed nucleus and does not destroy the good g-band result obtained with an axially deformed basis.
However, it has a drastic effect on new excited bands (second and higher excited bands are not shown in the figure). Their excitation energies are indeed very high for axial symmetry, but come down quickly as the triaxiality in the basis increases. At $`ϵ^{}=0.13`$, the first excited band reproduces the observed $`\gamma `$-band in <sup>168</sup>Er (while preserving the good g-band agreement). It should be noted that the excited bands studied in this paper are obtained by introducing $`\gamma `$-degree of freedom in the basis (quasiparticle vacuum). They are collective excitations, but not quasiparticle excitations. We may thus identify the first excited band as the $`\gamma `$-band, the second excited band as the 2$`\gamma `$-band, the third excited band as the 3$`\gamma `$-band, etc.
The above results can be understood by studying the $`K`$-mixing coefficients for each projected $`K`$-state (see Eq. (1)) in the total wavefunctions. It is found that for this well-deformed, axially symmetric nucleus, $`K`$-mixing is negligibly small. States in the g-band are essentially the projected $`K=0`$ state for any $`ϵ^{}`$. That is why the basis triaxiality does not destroy the result obtained with an axially deformed basis. The excited bands are also built from rather pure projected $`K`$-states. For example, the first excited band with the bandhead spin $`I=2`$ is mainly the projected $`K=2`$ state and the second excited band with the bandhead spin $`I=4`$ is the projected $`K=4`$ state. A small amount of $`K`$-mixing can be seen only for states with higher total spin if triaxiality in the basis is sufficiently large.
Fig. 1b illustrates another example, the transitional nucleus <sup>156</sup>Er. We see that the energies for the g-band are no longer constant, but clearly vary as functions of triaxiality. This feature is expected for a $`\gamma `$-soft nucleus. For the excited bands, triaxiality in the basis has a similar effect as we have seen for well-deformed nucleus: it drastically lowers their energies to those of the observed $`\gamma `$-band.
A rather different picture of $`K`$-mixing is observed for this $`\gamma `$-soft nucleus. The states are no longer pure projected $`K`$-states, but highly mixed. For example, the two $`I=2`$ states (the one in the g-band and the other the bandhead of the first excited band, the $`\gamma `$-band) are mixed from the projected $`K=0`$ and $`K=2`$ states. At $`ϵ^{}=0.13`$, the $`I=2`$ state of the g-band is contaminated by the projected $`K=2`$ state with a weight of about 1/4, and the $`I=2`$ state of the first excited band contains the projected $`K=0`$ state with a weight of about 1/4. Stronger $`K`$-mixing is seen for states with higher total spin and larger basis triaxiality.
Fig. 2 presents results for a chain of Er isotopes with neutron numbers from $`N=88`$ to 102. This covers both transitional ($`N\sim 90`$) and well-deformed ($`N\gtrsim 98`$) nuclei. The theoretical results are compared with available data for both g- and $`\gamma `$-band up to $`I=10`$. The axial and triaxial deformation parameters used in the present calculations are listed in Table I. The triaxial parameter $`ϵ^{}=0.13`$ giving the correct position of the $`\gamma `$-band for <sup>168</sup>Er and <sup>156</sup>Er corresponds to $`\gamma =25.5^{\circ}`$ in terms of the usual gamma parameter, if one uses as a very rough estimate $`\gamma \approx \mathrm{tan}^{-1}(ϵ^{}/ϵ)`$.
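For orientation, the rough estimate above can be evaluated directly; the axial deformation used below is an assumed representative value standing in for the Table I entry, which is not reproduced here.

```python
import math

eps_prime = 0.13   # triaxial deformation adopted in the text
eps = 0.27         # assumed axial deformation (illustrative; see Table I)

gamma = math.degrees(math.atan(eps_prime / eps))
print(f"gamma ~ {gamma:.1f} deg")   # ~25.7 deg, close to the quoted 25.5 deg
```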
A microscopic description of transitional nuclei has always been challenging. The nuclei discussed here with neutron number around 90 have g-bands that are quasi-rotational but with considerable vibrational character. The ground-state energy surface of a transitional nucleus was shown to have a shallow minimum at a finite $`\gamma `$-deformation in HFB calculations . It has been demonstrated that such a shallow minimum becomes a prominent minimum when projected onto spin $`I=0`$ . The necessity of introducing triaxiality in the PSM basis to describe the observed g-band moment of inertia in transitional nuclei was demonstrated in Ref. . We now see that, with the same triaxiality, the first excited TPSM band reproduces also the observed $`\gamma `$-band. By adjusting a single parameter $`ϵ^{}`$ in the TPSM, the spectra of both the g- and $`\gamma `$-band are described simultaneously and consistently by the Hamiltonian (2) diagonalized within the Hilbert space (1).
Next, let us turn to a discussion of multi-phonon $`\gamma `$-bands. In Fig. 3, we plot all the states for spins $`I\le 10`$ obtained after diagonalization within our projected triaxial basis for two nuclei in which a 2$`\gamma `$-band has been reported. For <sup>168</sup>Er, the second excited theoretical band agrees beautifully with the new $`4^+`$ $`2\gamma `$-band reported in Ref. . For <sup>166</sup>Er, the observed $`4^+`$ $`2\gamma `$-bandhead is also well reproduced. Since our theory agrees very well with the g-band and the (1-phonon) $`\gamma `$-band observed in these nuclei, the present results support strongly the interpretation of these data as 2$`\gamma `$-bands.
To our knowledge, no 3$`\gamma `$-band has yet been seen experimentally. According to our calculations, they should appear between 3 and 3.6 MeV. In Table I, we list the theoretical values for the $`2^+`$ $`\gamma `$-, $`4^+`$ $`2\gamma `$\- and $`6^+`$ $`3\gamma `$-bandhead energies. As Table I shows, the predicted $`\gamma `$-vibrational spectra are quite anharmonic. Anharmonic $`\gamma `$-vibrations have been discussed by several authors . This anharmonicity is a straightforward consequence of the present microscopic theory. This may be contrasted with earlier models that found it necessary to introduce explicit anharmonicities to reproduce the $`\gamma `$-band spacings .
Finally, we mention briefly the $`0^+`$ excited states. Unlike in the usual collective models based on phonon excitations, a $`0^+`$ collective excited state does not exist in the present calculation. Excited $`0^+`$ states can occur if we include multi-qp states on top of the present vacuum configuration. However, since the states constructed in this way are mainly qp in character, the collectivity of such a $`0^+`$ excited state is generally expected to be much weaker than that of a 2-phonon $`\gamma `$-state, which should have a large E2-decay probability to a 1-phonon $`\gamma `$-state. Furthermore, such states should depend strongly on the shell fillings. Therefore, the nature of the $`4^+`$ 2-phonon excited state is kinematical while the $`0^+`$ 2-phonon excited state is dynamical. There has been one experiment reporting a $`0^+`$ excited state in <sup>166</sup>Er ; the measured B(E2:$`0^+\to 2_\gamma ^+`$) is enhanced, suggesting that this $`0^+`$ excited state is of 2-phonon nature. At present, this is the only observed example of a $`0^+`$ 2-phonon excited state. Whether this observation can be reproduced by the TPSM with inclusion of qp states remains to be seen.
To summarize, we have applied the Triaxial Projected Shell Model to some Er isotopes to investigate multi-phonon $`\gamma `$ vibrational bands. The shell model diagonalization is not carried out in a spherical basis as for a conventional shell model, but in a deformed basis with triaxiality. It is found that this simultaneously improves the description of the g-bands in transitional nuclei and leads to a consistent description of multi-phonon $`\gamma `$-bands in both transitional and well-deformed nuclei. The newly observed $`4^+`$ $`2\gamma `$-bands are reproduced by the same calculation, thus supporting their experimental assignment, and the bandhead energies of as yet unobserved $`6^+`$ $`3\gamma `$-bands are predicted.
Thus, our unified view of the g- and multi-phonon $`\gamma `$-bands agrees surprisingly well with the existing data, even though we have used the simplest possible configuration space. The origin of the $`\gamma `$-bands discussed in the present paper is kinematical rather than dynamical, indicating a microscopic connection between the $`\gamma `$-excited states and the nuclear ground state properties. We are presently investigating various intra- and inter-band B(E2)-values to test the theory further. These results will be discussed in terms of $`K`$-mixing and reported in a longer paper.
Dr. Kenji Hara worked on the present paper until his last day. This Letter is dedicated to the memory of his lifetime contributions to the Projected Shell Model. This work was supported in part by Conacyt (Mexico).
† deceased.
# Study of QED processes 𝑒⁺𝑒⁻→{𝑒⁺𝑒⁻𝛾,𝑒⁺𝑒⁻𝛾𝛾} with the SND detector at VEPP-2M
## 1 Introduction
Quantum electrodynamics (QED) describes electromagnetic interactions between electrons and photons with high accuracy. QED is usually tested in different types of experiments, for example:
* high accuracy ($`10^{-6}`$) experiments where high order QED corrections at small momentum transfer are tested, for example, anomalous magnetic moments of leptons, Lamb shift, etc.;
* experiments with $`e^+e^{}`$ colliding beams where QED is tested at large momentum transfer, for example:
+ $`e^+e^{-}\to \gamma \gamma (\gamma \dots )`$,
+ $`e^+e^{-}\to e^+e^{-}(\gamma ,\gamma \gamma \dots )`$,
+ $`e^+e^{-}\to \mu ^+\mu ^{-}(\gamma \dots )`$,
+ $`e^+e^{-}\to \tau ^+\tau ^{-}(\gamma \dots )`$.
This work is devoted to the study of the following QED processes with large angles between all particles :
$$e^+e^{-}\to e^+e^{-}\gamma ,$$
(1)
$$e^+e^{-}\to e^+e^{-}\gamma \gamma .$$
(2)
This study is important for several reasons. First, to check QED, as the cross sections and differential distributions can be precisely calculated and compared with the observed ones. Second, possible hypothetical leptons, for example a heavy (or excited) electron \[gipel\] (the existence of such a particle is ruled out by recent LEP measurements: $`m_e^{*}>85`$–91 GeV \[PDG\]), can manifest themselves in the invariant mass spectra of the final particles. Third, these processes could be a source of background for vector meson decays with electrons and photons in the final state. For example, process (2) is a background in the study of the decays $`\varphi \to \eta e^+e^{-},\eta \to 2\gamma `$ and $`\varphi \to \eta \gamma `$, $`\eta \to e^+e^{-}\gamma `$. And finally, it is necessary to take process (1) into account for luminosity measurements with an accuracy of $`1\%`$.
The processes (1) and (2) were studied in different experiments in different energy regions. Some of these experiments are listed in Table 1.
## 2 Detector, experiment
The experiment \[Prep.96, pr97\] was carried out with the SND detector (Fig.1) at the VEPP-2M collider \[VEPP2M\] in the energy region of the $`\varphi `$-meson resonance, $`2E=0.985`$–1.04 GeV. The SND detector \[SND\] is a general purpose nonmagnetic detector with a solid angle coverage of about $`90\%`$ of $`4\pi `$. It consists of a spherical 3-layer calorimeter based on NaI(Tl) crystals, two drift chambers and a muon system. The list of the SND main parameters is shown in Table 2. The data were recorded in six successive scans at 14 different values of the beam energy with an integrated luminosity $`\mathrm{\Delta }L=4.1\text{pb}^{-1}`$. The accuracy of the luminosity determination \[pr97\] is estimated to be 3%.
## 3 Simulation
Monte Carlo simulation was used for comparison of the experimental results with theoretical predictions. Full simulation of the detector was performed on the basis of the UNIMOD2 program \[UNIMOD2\]. The process (1) was simulated according to formulae of the $`\alpha ^3`$ order from Ref. \[Baier\]. The details of the implementation of these formulae in the event generator program are described in Ref. \[TDUM\].
For process (2), formulae of the $`\alpha ^4`$ order for the differential cross section, calculated with the method of helicity amplitudes \[TDEEGG\], were used. These formulae are valid when all angles between the final particles are large, so the simulation was performed under the condition that all angles are larger than $`15^{\circ}`$.
The radiative correction for process (1) was calculated using formulae from Ref. \[Kuraev\]. The corrected cross section can be written as $`\sigma _{th}=\sigma _B(1+\delta )`$, where $`\sigma _B`$ is the $`\alpha ^3`$ Born cross section and $`\delta `$ is the calculated radiative correction. The radiation of virtual and soft photons as well as hard photon emission close to the direction of motion of one of the initial or final charged particles were taken into account. These formulae were integrated over a phase space as close as possible to the experimental acceptance. The decrease in the registration efficiency due to the lost radiative photon was taken into account in the calculation of the contribution from hard photon radiation. As a result $`\delta =(10\pm 3)\%`$ was obtained. The error originates from two main sources: the incompleteness of the formula for the differential cross section of the virtual and soft photon radiative corrections (about 3%), and the estimate of the efficiency dependence due to the loss of the radiative photon (about 1%).
## 4 Data Analysis
At the first stage of data analysis the following selection criteria, common for both processes, were applied:
* number of charged particles $`N_{cp}=2`$
* number of photons $`1\le N_\gamma \le 3`$
* both tracks originate from the interaction region: distance between tracks and beam axis in $`R\varphi `$ plane $`R_{1,2}<0.5`$cm, Z coordinate of the closest to the beam axis point on the track $`|Z_{1,2}|<10`$ cm
* polar angles of all particles $`36^o<\theta <144^o`$
* acollinearity angle of charged particles in the plane transverse to the beam axis $`|\mathrm{\Delta }\varphi _{ee}|=|180^o-|\varphi _1-\varphi _2||>5^o`$
* normalized total energy deposition $`E_{tot}/2E_0>0.8`$
* normalized total momentum $`P_{tot}/E_{tot}<0.15`$
* minimal energy of charged particle $`E_{emin}>10`$ MeV
* minimal energy of photon $`E_{\gamma min}>20`$ MeV
* no hits in muon system
Nearly 90000 events passed these cuts for use in further analysis.
### 4.1 Process $`e^+e^{-}\to e^+e^{-}\gamma `$
For the selection of events from process (1) a kinematic fit imposing 4-momentum conservation was applied. The parameter $`\chi ^2`$, describing the degree of energy-momentum balance in the event, was calculated. For the selection of events from the process $`e^+e^{-}\to e^+e^{-}\gamma `$ an additional cut was imposed:
* $`\chi ^2<15`$
The numbers of thus selected events in the experiment and in the simulation of process (1), as well as for some background processes, are shown in Table 3.
The corresponding energy, angular and invariant mass distributions after the kinematic fit are shown in Figs. 2 and 3. The statistical errors in these figures are comparable with the marker size. The peaks in Fig. 2a,b,c originate from quasi-elastic events of process (1) with radiation of a soft photon with energy $`E_\gamma /E_0\ll 1`$. There is good agreement between the experimental data and the MC simulation. There are no traces of a heavy lepton in the invariant mass spectrum in Fig. 3c. Some minor differences in the spectra (Fig. 2d, 3a) could be attributed to imprecise simulation of the angular differential nonlinearity for photons caused by the granularity of the calorimeter.
The estimated detection efficiency for the described selection criteria is equal to $`59.8\pm 1.0\%`$ (the error is statistical). It was defined with respect to simulation under the following conditions: polar angle of the final particles $`36^{\circ}<\theta <144^{\circ}`$, azimuthal acollinearity angle $`\mathrm{\Delta }\varphi _{ee}>5^{\circ}`$, spatial angle between the final particles $`\theta _{ee,e\gamma }>20^{\circ}`$, and minimal energies for charged particles and photons equal to 10 and 20 MeV respectively. The systematic error on the measured cross section is determined by the normalization uncertainty (3%), limited MC statistics (1.7%) and uncertainties in the selection efficiency (1.5%). In total it is equal to 3.8%.
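The quoted total is consistent with combining the individual contributions in quadrature, as the following short check shows (the quadrature combination itself is our assumption about how the total was formed):

```python
import math

# Contributions to the systematic error on the measured cross section (percent).
contributions = {"normalization": 3.0, "MC statistics": 1.7, "selection efficiency": 1.5}

total = math.sqrt(sum(v ** 2 for v in contributions.values()))
print(f"total systematic error ~ {total:.1f}%")   # ~3.8%, as quoted above
```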
The energy dependence of the cross section of process (1) is shown in Fig.4. The measurements were fitted using the following function:
$$\sigma (E)=\sigma _0(E_0^2/E^2)+W\sigma _\varphi (E),$$
(3)
where the first term has the energy dependence typical of QED processes and the second corresponds to a contribution from $`\varphi `$-meson decays with cross section $`\sigma _\varphi `$. The fitting parameters are $`\sigma _0`$, the cross section at the energy $`E_0=1020`$ MeV, and $`W`$, which determines the resonance background contribution. The main part of this background for process (1) comes from the $`\varphi \to \pi ^+\pi ^{-}\pi ^0`$ decay.
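A minimal sketch of such a two-parameter fit is given below. The pseudo-data, the point errors and the Breit-Wigner shape used for $`\sigma _\varphi (E)`$ are illustrative assumptions only; they are not the SND measurements or the actual resonance cross section.

```python
import numpy as np
from scipy.optimize import curve_fit

E0 = 1020.0   # MeV; sigma_0 is defined as the cross section at this c.m. energy

def sigma_phi(E, M=1019.4, Gamma=4.3):
    """Toy Breit-Wigner stand-in for the phi-resonance contribution, peak-normalized."""
    return (M * Gamma) ** 2 / ((E ** 2 - M ** 2) ** 2 + (M * Gamma) ** 2)

def model(E, sigma0, W):
    # Eq. (3): QED-like 1/E^2 term plus a resonant phi-meson term.
    return sigma0 * (E0 ** 2 / E ** 2) + W * sigma_phi(E)

rng = np.random.default_rng(0)
E = np.linspace(985.0, 1040.0, 14)                        # 14 scan points (MeV)
yerr = np.full(E.size, 0.3)                               # assumed point errors (nb)
y = model(E, 30.0, 0.0) + rng.normal(0.0, 0.3, E.size)    # pseudo-data, no phi signal

popt, pcov = curve_fit(model, E, y, p0=[30.0, 0.0], sigma=yerr, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print(f"sigma_0 = {popt[0]:.2f} +- {perr[0]:.2f} nb")
print(f"W       = {popt[1]:.2f} +- {perr[1]:.2f} nb")
```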
Fitting gives no peak from $`\varphi `$-meson decays (fig. 4). The fitted experimental cross section is $`\sigma _0=30.01\pm 0.12\pm 1.2`$ nb and the expected QED cross section with radiative corrections is $`\sigma _{th}=29.7\pm 0.3\pm 1.0`$ nb. The observed difference (about 1%) is within the systematic error.
### 4.2 Process $`e^+e^{-}\to e^+e^{-}\gamma \gamma `$
For the selection of events from the process $`e^+e^{-}\to e^+e^{-}\gamma \gamma `$, the following additional cuts were imposed:
* number of photons $`2\le N_\gamma \le 3`$,
* $`\chi ^2<15`$,
* to suppress the contribution from $`e^+e^{-}\to \pi ^+\pi ^{-}\pi ^0`$ the region $`110<M_{\gamma \gamma }<170`$ MeV was excluded,
* minimal energy of photons $`E_{\gamma min}=50`$ MeV.
Here $`\chi ^2`$ \- is the kinematic fit parameter obtained under the assumption that events come from process (2). The number of events which passed these selection criteria in the experiment and Monte Carlo simulation of process (2) and background processes are shown in Table 4.
The energy, angular and invariant mass distributions after the kinematic fit are shown in Figs. 5 and 6. Similarly to process (1), peaks are seen from quasi-elastic scattering with emission of soft photons (Fig. 5a,b,c). The peak in the photon energy spectrum (Fig. 5b) near $`E_\gamma /E_0=0.7`$ corresponds to the recoil photon energy in the radiative decays $`\varphi \to \eta \gamma `$, $`\eta \to e^+e^{-}\gamma ,\pi ^+\pi ^{-}\gamma `$. Some enhancement in the two-photon invariant mass spectrum (Fig. 6b) near the $`\eta `$ mass comes from the decay $`\varphi \to \eta e^+e^{-},\eta \to \gamma \gamma `$. There are also no visible traces of heavy lepton production in the $`M_{e\gamma }`$ spectrum (Fig. 6d).
The detection efficiency was determined from simulation under nearly the same conditions as for process (1): polar angle of the final particles $`36^{\circ}<\theta <144^{\circ}`$, azimuthal acollinearity angle $`\mathrm{\Delta }\varphi _{ee}>5^{\circ}`$, spatial angle between the final particles $`\theta _{ee,e\gamma ,\gamma \gamma }>20^{\circ}`$, and minimal energies for charged particles and photons equal to 10 and 50 MeV respectively. The value of the detection efficiency was found to be $`33.6\pm 1.5\%`$.
The fitting of the energy dependence of the cross section of process (2) was done using formula (3). The result is shown in Fig. 7. The contribution from $`\varphi `$ decays is seen as a peak at the $`\varphi `$ mass. The significance of the peak is about 1.5 standard deviations. The processes $`\varphi \to \eta e^+e^{-},\eta \to \gamma \gamma `$ and $`\varphi \to \eta \gamma ,\eta \to e^+e^{-}\gamma `$, mentioned above, constitute the main contribution to the peak.
The fitted value of the experimental cross section, $`\sigma _0=0.457\pm 0.039\pm 0.026`$ nb, was found to be in good agreement with the calculated QED cross section $`\sigma _{MC}=0.458\pm 0.010`$ nb. The systematic error included in $`\sigma _0`$ is determined by the normalization uncertainty (3%), limited MC statistics (4.5%) and uncertainties in the selection efficiency (2%). In total it is equal to 5.8%.
## 5 Conclusions
In the experiment with the SND detector at the VEPP-2M collider the $`e^+e^{-}\to e^+e^{-}\gamma `$ and $`e^+e^{-}\to e^+e^{-}\gamma \gamma `$ QED processes with particles produced at large angles were studied. A total of 73692 events of the process $`e^+e^{-}\to e^+e^{-}\gamma `$ was observed. For the process $`e^+e^{-}\to e^+e^{-}\gamma \gamma `$ 698 events were observed, of which 649 events are from the QED process (2). The numbers of events observed at the different energy points for both processes are shown in Tables 5 and 6. The cross sections and differential distributions of the produced particles were compared with the MC simulation. No significant deviations from QED were found within the measurement errors, which are equal to 3.8% and 10.3% for processes (1) and (2) respectively.
###### Acknowledgements.
This work was supported in part by the Russian Foundation for Basic Research (grant No. 96-15-96327) and by STP “Integration” (grant No. 274).
# Limitations of ad hoc “SKA+VLBI” configurations & the need to extend SKA to trans-continental dimensions
## 1 Introduction
The technique of Very Long Baseline Interferometry (VLBI) permits astronomers to generate milli and sub-milliarcsecond resolution images of galactic and extra-galactic radio sources. VLBI has evolved rapidly in the last decade with significant improvements in resolution, polarisation imaging and spectral-line capabilities. In terms of raw sensitivity, however, the gains have been more modest, especially at cm-wavelengths. Although the technique of phase-referencing has recently permitted the detection and (limited) imaging of sources at the mJy flux level, the current state-of-the-art r.m.s. image noise level is still limited to $`30\mu `$Jy/beam at cm-wavelengths (for a typical on-source observing run of 12 hours). The recent introduction of the 1 Gbit/sec MkIV system and other technical improvements promise to improve this by a factor of 3 or so, thus reducing the r.m.s. image noise level to $`10\mu `$Jy/beam.
While this is all very encouraging, a more sobering thought is that even at these r.m.s. noise levels, the overlap between the radio sky and the sky at optical and infra-red wavebands is rather limited. It is only by going deeper – much deeper – that the optical and radio source counts become comparable. If complimentary observations are to be achieved, and radio astronomy is to remain at the very forefront of astrophysical research, it is imperative that noise levels are reduced by at least two orders of magnitude.
The Square Kilometer Array, SKA, currently offers the best possibility of achieving these kind of noise levels. However, sensitivity is not the only issue, one must also consider what angular resolution is required - not simply to avoid the limitations imposed by source confusion but to properly investigate the radio morphology of the sources that will dominate the $`\mu `$Jy and sub-$`\mu `$Jy radio source population.
The tendency for faint sources to be considerably smaller than their brighter counterparts has been known for some time (Oort 1987 and Fletcher et al. 1998 ), and is strikingly confirmed by the relatively high detection rates of recent VLBI surveys of faint mJy radio sources (Garrington, Garrett & Polatidis 1999 ). For compact AGN this phenomenon can be easily understood in terms of synchrotron self-absorption theory: for a given magnetic field strength, smaller sources will also be fainter sources. High resolution imaging is therefore of considerable importance to the study of faint AGN.
Perhaps a more significant factor in this discussion, however, is the emergence of a new population of radio sources as suggested by the flattening radio source counts at sub-mJy flux density levels, now confirmed with the recent radio studies of the Hubble Deep Field (Richards et al. 1998 and Muxlow et al. 1999 ). These pioneering observations strongly suggest that the bulk of the $`\mu `$Jy radio source population is dominated by distant starburst galaxies, rather than AGN.
The nearest and best studied starburst galaxy is undoubtedly M82. Located only $`3`$ Mpc away, its radio emission is concentrated within the central few kpc of the galaxy and is dominated by radio emission from both recent and relic supernova remnants (SNRs). Fig. 1 shows a superb, wide-field EVN $`\lambda 18`$ cm image of M82 produced by Pedlar et al. (1999) .
If M82 is typical of higher redshift starbursts (and recent VLBI observations by Smith et al. 1999 of the more vigorous and distant starburst, Arp 220, suggest that it may well be), we can expect to detect with SKA individual SNR in starburst galaxies out to cosmological distances - at least in terms of sensitivity. But this is only part of the story. If the SNR are distributed on scales similar to that observed in both Arp 220 and M82, then at $`z=1.5`$ the bulk of the radio emission will occupy a region of sky no greater than 60 milliarcseconds across. Thus in order to properly resolve these systems into their constituent parts, resolutions of a few mas are required.
While this is not the only argument for high resolution SKA observations (see contributions by Gurvits, Krichbaum, Snellen, Phinney, Roy, Koopmans & Fender - these proceedings) it is a very powerful one: the idea that SKA will only barely resolve the dominant sources of radio emission in the sky (Wilkinson’s “bread and butter sources” - these proceedings) is surely unthinkable, at least for a true “next-generation” instrument.
In this paper, I discuss the ways in which the angular resolution of SKA can be extended towards the milliarcsecond scale. I do not make the conventional assumption that SKA’s only contribution to the field of VLBI is as an ultra-sensitive, phased-array “add-on” to existing VLBI networks. Although this scenario is considered other options are investigated and indeed preferred, including the extension of SKA to trans-continental dimensions.
I first consider in section 2 a few minor technicalities regarding SKA and VLBI baseline sensitivity (in particular the possibility of employing in-beam phase referencing techniques), and the need for a wide-field approach to VLBI observations at these sub-$`\mu `$Jy levels. In Section 3 I present the first realistic simulations of various SKA-VLBI configurations including an extended version of SKA (designated “SKA<sup>++</sup>”), in which half of the SKA antennas are located within an array of 50 km and the other half are separated by trans-continental distances. A discussion of the main results and conclusions are presented in sections 4 and 5 respectively.
## 2 SKA-VLBI: minor technicalities
Throughout this paper I adopt the nominal SKA parameters of Taylor & Braun (1999) i.e. thirty, 200-m diameter elements with a total observing bandwidth of 1500 MHz at $`\lambda 6`$ cm, 2-bit sampled data and a total sensitivity figure of $`2\times 10^4`$m<sup>2</sup>/K. In this section we discuss some minor technicalities that have not been previously considered in earlier SKA-VLBI discussions.
### 2.1 SKA & VLBI baseline sensitivity
The $`7\sigma `$ baseline detection level between a phased-array SKA (i.e. SKA<sub>PA</sub>, with 80% of the total collecting area formed by those antenna elements lying within 50 km of each other, SEFD of about $`0.17`$ Jy) and a single 25-m VLBA antenna (SEFD of about $`290`$ Jy) is about $`60\mu `$Jy, assuming a coherent integration time of 300 seconds at $`\lambda 6`$ cm. At these levels of sensitivity the radio source counts are fairly well understood: the recent VLA HDF observations of Richards et al. (1998) , together with earlier VLA observations including those of Windhorst et al. (1995) , suggest the source count is of order 20S($`\mu `$Jy)<sup>-1</sup> arcmin<sup>-2</sup>. Thus within the FWHM of a 25-m antenna’s primary beam, we can expect to find at least 15 sources above the $`10\sigma `$ noise level, of which 5 can reasonably be expected to be stronger than the $`30\sigma `$ noise level. Naturally these latter sources can be used as “in-beam” (phase) calibrators, able to provide continuous and accurate instrumental corrections without the need for conventional phase-referencing (note that in this scenario the multiple beam capability of SKA in its phased-up mode is assumed, since the field of view of the phased array is otherwise rather limited). At $`\lambda `$18 cm the situation is even better, with at least 45 potential calibrator sources in the primary beam of a 25-m antenna. In short, in-beam calibration will be possible in the vast majority of cases at cm wavelengths.
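The numbers above follow from the standard single-baseline radiometer equation; the sketch below reproduces them. The 2-bit quantization efficiency (taken as 0.88), the 1.02 λ/D beam FWHM and the use of the integral count N(>S) ≈ 20/S(μJy) arcmin<sup>-2</sup> are assumptions made for this estimate, so the calibrator counts come out only roughly equal to those quoted in the text.

```python
import math

def baseline_rms(sefd1, sefd2, bandwidth, t, eta=0.88):
    """Single-baseline thermal noise in Jy: sqrt(SEFD1*SEFD2/(2*B*t)) / eta."""
    return math.sqrt(sefd1 * sefd2 / (2.0 * bandwidth * t)) / eta

SEFD_SKA_PA = 0.17    # Jy, phased-array SKA (80% of the collecting area)
SEFD_VLBA   = 290.0   # Jy, single 25-m VLBA antenna at 6 cm
B, t_coh    = 1500e6, 300.0   # Hz, s

rms = baseline_rms(SEFD_SKA_PA, SEFD_VLBA, B, t_coh)
print(f"1-sigma baseline noise : {rms*1e6:5.1f} microJy")   # ~8.4 microJy
print(f"7-sigma detection level: {7*rms*1e6:5.0f} microJy") # ~60 microJy

# Rough in-beam calibrator count within the FWHM primary beam of a 25-m antenna
# at 6 cm, using the integral count N(>S) ~ 20/S(microJy) per arcmin^2.
fwhm_arcmin = math.degrees(1.02 * 0.06 / 25.0) * 60.0       # ~8.4 arcmin
beam_area   = math.pi * (fwhm_arcmin / 2.0) ** 2            # ~56 arcmin^2
for n_sigma in (10, 30):
    S = n_sigma * rms * 1e6
    print(f"sources above {n_sigma:2d} sigma : ~{20.0 / S * beam_area:.0f}")
```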
### 2.2 Imaging large fields-of-view
At the SKA detection level of 100 nanoJy we can (with a little extrapolation!) predict a source count of at least $`100`$ sources/arcmin<sup>2</sup>. Clearly the sky will be densely populated with radio sources separated by only a few arcseconds, perhaps less, if clustering is important (as one might expect). Independent of whether we consider SKA as a standalone array (with baseline lengths of at least 1000 km) or as part of a VLBI network, we can expect to image hundreds of sources simultaneously from just a single area of sky covered by one (single element) SKA beam. Already the application of wide-field imaging techniques is beginning to find a place in VLBI (Garrett et al. 1999 ) but in the era of SKA, a wide-field imaging mode will be the de facto mode of operation - even at milliarcsecond resolutions. This will require at least the full spectral resolution of SKA at the longest wavelengths ($`10^4`$ spectral channels) and sub-second integration times at the shortest cm wavelengths in order to avoid smearing. A typical 12 hour run by the SKA-VLBI configurations discussed further in this paper, will result in a substantial but hopefully not unmanageable data size of $`1`$ Tera Byte.
## 3 SKA & VLBI: issues of sensitivity and weight
Various authors have previously considered the sensitivity gain one achieves by including SKA as part of a large VLBI network or extending it to trans-continental baselines (e.g. Schilizzi & Gurvits, section 2.5.2 in Taylor and Braun ). In this section we extend these calculations by taking into account the effect of data weighting and the necessary trade-off between sensitivity and resolution for some proposed SKA-VLBI configurations.
### 3.1 Weighty matters
At face value the inclusion of SKA as part of a large VLBI network results in an array with superb uv-coverage, high resolution and sub-$`\mu `$Jy sensitivity. However, predictions of image noise levels (and uv-coverage) which do not take into account the relative weights of the contributing baselines, can be misleading. For example, if we consider an array formed by the individual SKA elements (SKA<sub>1</sub>), in the nominal configuration of Taylor & Braun, observing together with a global VLBI array (GVLBI), then for naturally weighted data the array is entirely dominated by the very sensitive baselines formed between SKA elements. Since the vast majority of these present baseline lengths of order 50 km or less, the dirty beam associated with such naturally weighted data does not even begin to provide the sort of milliarcsecond (mas) resolution expected from a VLBI array of global dimensions. Alternatively if SKA is included as a phased-array, SKA<sub>PA</sub>, the situation is even more extreme in terms of the effective uv-coverage since only the SKA<sub>PA</sub> baselines actually contribute to the synthesised image. In either case, the only way to achieve uniform uv-coverage is to abandon natural weighting and re-weight (i.e. weight-up) the noisier baselines, thus increasing the image noise level by factors of several - well beyond the original expectation.
### 3.2 SKA-VLBI data simulations
In order to investigate these effects semi-quantitatively, I have generated three simulated SKA (and VLBI) visibility data sets. In order to serve as a reference point, simulated data were first generated for the nominal SKA configuration of Taylor & Braun (1999) . Two additional options were considered with respect to how SKA provides high sensitivity observations with milliarcsecond scale resolution: (i) SKA contributes as a sensitive, phased-array “add-on” to the existing global VLBI network (“SKA<sub>PA</sub>-GVLBI”), and (ii) SKA is extended to trans-continental baselines, SKA<sup>++</sup> but 1/2 of the antennas still remain within 50 km of each other in order to maintain good brightness sensitivity at arcsecond resolution.
#### 3.2.1 Source Model
The source model used to produce the simulated $`\lambda 6`$ cm data is shown in Fig. 2. It represents a “best guess” of what the $`\mu `$Jy source population might look like – essentially it is based on an M82/Arp 220-type starburst, projected back to a redshift of $`z1.52`$. The radio emission is concentrated within the inner few kpc (120 mas) of the galaxy, and dominated by young SNRs, relic SNRs and compact HII regions. In addition, I have added a low-luminosity AGN slightly offset from the plane of star formation which accounts for $`20\%`$ of the total flux density of 14 $`\mu `$Jy. Although an AGN has not yet been identified in either Arp 220 or M82, it might not be too surprising to find such low-luminosity AGN in some starburst galaxies. The faint radio sources identified with starburst galaxies in the HDF, appear to have very distorted morphologies, suggesting they were recently involved in interactions with their nearest companions or complete mergers. While this is certainly a trigger for rapid bursts of star formation, it may also initiate AGN activity (or re-initiate it) in the centres of these galaxies.
#### 3.2.2 Data Generation
The AIPS task UVCON was used to generate the $`\lambda 6`$ cm simulated data sets. The nominal SKA configuration (Taylor & Braun 1999 ) was initialy assumed, with the array arbitrarily centred on Dwingeloo, the Netherlands. Data were generated between hour angles of $`\pm 6`$ hours (as calculated at the centre of the array). UVCON adds Gaussian noise to each visibility based on the specified antenna characteristics (diameters, efficiency, noise temperature, data sampling/rate etc). For the elements of SKA the following parameters were chosen: 30 identical elements of 200 m diameter and 60 K system temperatures with a combined sensitivity figure of $`2\times 10^4`$m<sup>2</sup>/K.
#### 3.2.3 Simulated SKA Images
Fig. 3 shows a simulated $`\lambda 6`$ cm image of the model source generated by the nominal SKA configuration. The data were Fourier transformed and CLEANed using the AIPS task IMAGR. The image was produced with (Robust=-2) uniform weighting (see Briggs 1995 for a discussion of Robust weighting). This weights the data at a level which is intermediate between natural weighting (the case in which visibility weights are simply proportion to the inverse of the r.m.s. noise squared) and pure uniform weighting (all data points have equal weights irrespective of their variance and the local data density in the uv-plane). This weighting is necessary in order for the nominal SKA configuration to provide the 10 mas resolution one expects for an array in which the longest baselines are $`1000`$ km. The naturally weighted image provides only 20 mas resolution since the uv-plane is so densely populated by the inner 50 km region of the array where 80% of the collecting area resides. However, even the 10 mas resolution obtained from the uniformly weighted data is not sufficient to do much better than partially resolve the radio source. The noise in this image is $`0.05\mu `$Jy/beam, almost twice as high as the noise in the naturally weighted image.
#### 3.2.4 Simulated “SKA<sub>PA</sub>+GVLBI” Images
Fig. 4 shows a simulated $`\lambda 6`$ cm image of the model source generated by a global VLBI network supplemented by the inner 80% of the nominal SKA configuration, phased-up to form a single, highly sensitive VLBI antenna, “SKA<sub>PA</sub>”. This is the traditional SKA-VLBI configuration that is often assumed to be SKA’s default contribution to VLBI. The global VLBI network used in these simulations includes 17 of the largest antennas in the world, including the Effelsberg 100-m, VLA<sub>27</sub>, Greenbank 100-m, DSN 70-m and the new 70-m and 45-m antennas currently under construction in Sardinia (IRA) and Yebes (OAN). As for the previous SKA simulation, we assume the VLBI antennas can also deliver or record data at 6 Gbits/sec (a 2-bit/4-level sampled, single polarisation, 1500 MHz wide IF band). The naturally weighted image has an r.m.s. noise level of 0.17$`\mu `$Jy/beam and provides a resolution of 1 mas. The core of the AGN is barely detected but the other sources fall well below the noise level. The image (and the effective uv-coverage) are completely dominated by SKA<sub>PA</sub> baselines, the other inter-VLBI antenna baselines have no effect on the image whatsoever. Although this latter effect is yet to be investigated in any detail, the ability of this array to image even moderately extended structures is likely to be limited. Brute force modification of the antenna weights would be required in order to improve the effective coverage but the corresponding impact on sensitivity would be severe.
#### 3.2.5 Simulated “SKA<sup>++</sup>” Images
Fig. 5 shows a simulated $`\lambda 6`$ cm image of the model source generated by an extended SKA configuration, “SKA<sup>++</sup>”. In this scenario half of the SKA antennas remain within the inner 50 km of the array but the other half are distributed around the world (in this case the precise locations are a subset of the current EVN and VLBA antenna sites). Note that the inner elements of SKA contribute to the observations as individual antennas, not as a phased array (though note that in order to provide the in-beam calibration described in section 2.1, an additional phased array beam may also be required). A “robust=0” uniformly weighted image has an r.m.s. noise level of $`0.02\mu `$Jy/beam and provides a resolution of 1.8 mas. The combination of high sensitivity and resolution allows us to resolve the individual SNR from each other and the AGN (which also shows a two-sided jet). The vast majority of the individual SNR themselves remain unresolved; space VLBI resolutions such as those achievable by the proposed ARISE mission (Ulvestad & Linfield 1998 ) might be able to detect and thus resolve the brighter remnants (the sensitivity of a combined SKA+ARISE configuration requires its own detailed study, see Gurvits these proceedings).
## 4 Discussion
The SKA can make a useful contribution to high resolution radio astronomy. However, the nominal SKA configuration with baseline lengths $`<1000`$ km may not provide enough resolution to adequately resolve the structure of the vast majority of the faint, extragalactic radio sources it detects. In my opinion this is a major flaw in the proposed configuration. Higher resolution can be achieved in at least two ways, the conventional option is for SKA to participate within a VLBI network as a highly sensitive phased-array add-on. The second less conventional, but in my opinion preferred option, is for SKA to be extended to trans-continental baselines, SKA<sup>++</sup>.
The conventional option will allow images to be made with noise levels of $`0.17\mu `$Jy/beam, a factor of 60 better than what can be achieved by VLBI today, even in the era of 1 Gbit/sec MkIV recording. The uv-coverage will, however, remain limited, and there is a danger that the coordination and flexibility of the network will be plagued by the problems that beset existing, non-homogeneous, ad hoc arrays.
The SKA<sup>++</sup> option will allow images to be made with noise levels around $`0.02\mu `$Jy/beam, a factor of 500 better than what can be achieved today and almost an order of magnitude better than the conventional phased-array option. In the SKA<sup>++</sup> configuration described here, half the antennas are still located within the inner 50 km region of the array, thus satisfying other SKA programmes which require high surface brightness sensitivity at arcsecond resolution. This homogeneous array offers superb uv-coverage, flexibility in operation and all the other benefits associated with SKA, in particular, multiple beams for phase-referencing (although at wavelengths $`6`$ cm this may not be required since there will almost always be enough “in-beam” calibrators). The possibilities arising from a SKA<sup>++</sup> instrument are quite simply staggering: with a 1 arcmin field of view, SKA<sup>++</sup> in a single 12 hour run, could easily detect and image over $`1001000`$ sources simultaneously with arcsecond, sub-arcsecond and milliarcsecond resolution.
The feasibility of connecting together SKA elements in real-time over large distances appears feasible, even by today’s standards. Considerable activity in the connection of telescopes by optical fibres is on-going around the world with the recent link between the VLBA antenna at Pie Town and the VLA (a distance of $`100`$ km), being the most recent success story. The main difficulties are now considered to be economic rather than technical (Whitney et al. 1999 ). With the reasonable expectation that trans-continental fibre connections will fall in price over the next 2 decades, SKA<sup>++</sup> is a realistic proposal which requires serious consideration and more detailed investigation.
## 5 Summary: the need for a higher resolution SKA
The quest for higher angular resolution has been one of the key driving forces in observational astronomy, together with improved sensitivity and new spectral bands. Despite the fact that optical telescopes have always enjoyed a natural advantage in terms of source number counts, the ability of radio interferometers such as the VLA, MERLIN and VLBI to generate sub-arcsecond and milliarcsecond resolution images has allowed them to stay at the very forefront of astrophysics. Comparable radio instruments, in terms of sensitivity but with inferior resolution, have been significantly disadvantaged.
Optical astronomers are now designing the next generation of ground and space based telescopes (e.g. the VLTI & NGST). These will have comparable or better resolution than that currently proposed for SKA. Similarly, it is now clear that optical and infra-red interferometry will take a giant leap forward in terms of sensitivity and resolution, in the form of the armada of space-based interferometry missions (e.g. Gaia, Darwin, SIM etc) currently proposed. On the same time scales envisaged for the completion of SKA, these next generation instruments will provide optical and infra-red astronomers with the ability to perform micro-arcsecond astrometry (allowing the direct detection of nearby extra-solar planets) and sub-milliarsecond resolution imaging of a wide variety of celestial objects. The importance of complimentary, high resolution radio observations will become clear as the surfaces of nearby stars, the ejecta of novae and supernovae, accretion disks and jets around young stars and x-ray binary systems, not to mention the environment around the central engines of extra-galactic objects (normal galaxies and AGN) become the favoured targets of these space-based instruments.
The next generation of radio telescope will surely provide astronomers with unprecedented sensitivity - that much is clear. However, the majority of radio sources it detects will most likely require milliarcsecond resolution to be adequately resolved. Simply relying on occasional, ad hoc “SKA+VLBI” observations to provide this resolution is not, in my opinion, a satisfactory solution. A self-contained SKA can provide milliarcsecond resolution by extending the array to trans-continental dimensions. By retaining 50% of the array’s collecting area within a region no larger than 50 km, the surface brightness sensitivity of the array at arcsec resolution is hardly compromised. In this way SKA can be a truly global, next generation radio telescope with unrivaled capabilities over a wide range of angular resolution and surface brightness sensitivity.
## Acknowledgments
I’d like to thank Richard Schilizzi for reading the text of this paper critically, and for suggesting several useful improvements that were incorporated into the final version.
## References
|
no-problem/9908/cond-mat9908039.html
|
ar5iv
|
text
|
# Transport Properties in (Na,Ca)Co2O4 Ceramics
## Abstract
The resistivity and thermopower of polycrystalline Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> were measured and analyzed. Both the quantities increase with $`x`$, suggesting that the carrier density is decreased by the substitutions of Ca<sup>2+</sup> for Na<sup>+</sup>. Considering that the temperature dependence of the resistivity show a characteristic change with $`x`$, the conduction mechanism is unlikely to come from a simple electron-phonon scattering. As a reference for NaCo<sub>2</sub>O<sub>4</sub>, single crystals of a two-dimensional Co oxide Bi<sub>2-x</sub>Pb$`{}_{x}{}^{}M_{3}^{}`$Co<sub>2</sub>O<sub>9</sub> ($`M=`$Sr and Ba) were studied. The Pb substitution decreases the resistivity, leaving the thermopower nearly intact.
## 1 Introduction
A search for new thermoelectric (TE) materials is an old problem that has been reexamined to date . Even though binary compounds might have been thoroughly studied, a thermoelectric material of higher performance might sleep in ternary, quaternary or more complicated compounds. A filled skutterudite is an example for newly discovered TE materials .
Very recently Terasaki, Sasago and Uchinokura have found that a layered Co oxide NaCo<sub>2</sub>O<sub>4</sub>, whose crystal structure is schematically drawn in Fig. 1, shows large thermopower (100 $`\mu `$V/K at 300 K) and low resistivity (200 $`\mu \mathrm{\Omega }`$cm at 300 K) along the $`a`$ axis . A striking feature of this compound is that the thermopower of 100 $`\mu `$V/K is realized in the carrier density of 10<sup>21</sup> cm<sup>-3</sup>. This is difficult to explain in the framework of the conventional band picture, and there should exist a mechanism to enhance the thermopower.
In conventional TE materials, the TE performance is optimized near a carrier density of $`10^{19}`$ cm<sup>-3</sup>. Thus we expect that the TE performance of NaCo<sub>2</sub>O<sub>4</sub> may be improved by the reduction of the carrier density. The easiest way to change the carrier density is to substitute a divalent cation such as Ca<sup>2+</sup> for a monovalent Na<sup>+</sup>. Motivated by this, we measured and analyzed the resistivity and thermopower of polycrystalline (Na,Ca)Co<sub>2</sub>O<sub>4</sub>. Another way is to study a two-dimensional Co oxide with low carrier density. For this purpose Bi<sub>2</sub>Sr<sub>3</sub>Co<sub>2</sub>O<sub>9</sub> (The crystal structure is shown in Fig. 2) is most suitable, because the resistivity is lowered by the Ba-substitution for Sr and the Pb substitution for Bi . In addition, the optical reflectivity shows a small Drude weight . We report on its transport properties in the latter part of the proceedings.
## 2 Experimental
Polycrystalline samples of Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> were prepared in a conventional solid-state reaction . An appropriate mixture of powdered NaCO<sub>3</sub>, CaCO<sub>3</sub> and Co<sub>3</sub>O<sub>4</sub> was calcined at 860C for 12 h. The product was finely ground, pressed into a pellet, and sintered at 800C for 6 h. The x-ray diffraction pattern showed no trace of impurities. Note that the a tiny trace of impurity phases was detected in the product from stoichiometric mixture (Na:Co=1:2), which indicates the evaporation of a small amount of Na. Thus we added excess Na of 10 at.% to prepare NaCo<sub>2</sub>O<sub>4</sub>.
Single crystals of Bi$`{}_{2}{}^{}M_{3}^{}`$Co<sub>2</sub>O<sub>9</sub> ($`M`$=Sr and Ba) and Bi<sub>2-x</sub>Pb<sub>x</sub>Sr<sub>3</sub>Co<sub>2</sub>O<sub>9</sub> were prepared by a self-flux technique. Crystals were platelike with typical dimensions of 1$`\times `$1$`\times `$0.01 mm<sup>3</sup>. As for the Pb substitution, we prepared two different crystals with nominal compositions of $`x`$=0.2 and 0.4.
Resistivity ($`\rho `$) was measured through a four-probe method. Thermopower ($`S`$) was measured with a nano-voltmeter (HP 34420A), where a typical resolution was 5–10 nV. Two edges of a sample was pasted on Cu sheets working as a heat bath, and the temperature gradient of 0.5–1 K was measured through a differential thermocouple made of copper-constantan. The contributions from copper leads were carefully subtracted.
## 3 Results and Discussion
Let us begin with the effect of excess Na on the resistivity of Na<sub>1.1+x</sub>Co<sub>2</sub>O<sub>4</sub>. Na is an element difficult to control. First, it is quite volatile above 800C. Secondly, residual Na is rarely observed in x-ray diffraction patterns, because it often exists as deliquesced NaOH in the grain boundary. Moreover the Na site in NaCo<sub>2</sub>O<sub>4</sub> is 50% vacant, that is, the Na content $`x`$ can change from 0 to 1. Figure 3 shows the temperature dependence of $`\rho `$ of polycrystalline Na<sub>1.1+x</sub>Co<sub>2</sub>O<sub>4</sub>. We attributed the increase of $`\rho `$ with $`x`$ to the excess Na in the grain boundaries, and regarded the sample for $`x=0`$ as the parent material for Ca substitution.
Figures 4(a) and 4(b) show $`\rho `$ and $`S`$ of polycrystalline samples of Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub>. The magnitudes of $`\rho `$ and $`S`$ increase with $`x`$, suggesting that the carrier density is reduced by the Ca substitution. This is naturally understood from a viewpoint of Co valence. In NaCo<sub>2</sub>O<sub>4</sub> the formal valence ($`p`$) of Co is $`3.5+`$, i.e., Co<sup>3+</sup>:Co<sup>4+</sup>=1:1. The Ca substitution decreases $`p`$ down to $`3+`$, which corresponds to the configuration of $`(3d)^6`$. Since the six electrons fully occupied the three $`d\gamma `$ bands in the low spin state, oxides with Co<sup>3+</sup> are often insulating. As expected, the power factor $`S^2/\rho `$ is improved in $`x`$0.15 by 20%. It should be emphasized that the Ca substitution changes the temperature dependence of $`\rho `$. For example, while $`\rho `$ for $`x`$=0 shows a positive curvature below 100 K, $`\rho `$ for $`x`$=0.35 shows a negative curvature. This indicates that the scattering rate depends strongly on the carrier density, which is unlikely to arise from the electron-phonon scattering.
Next we discuss the thermoeletric properties of (Bi, Pb)$`{}_{2}{}^{}M_{3}^{}`$Co<sub>2</sub>O<sub>9</sub>. In Figs. 5(a) and 5(b), $`\rho `$ and $`S`$ of Bi$`{}_{2}{}^{}M_{3}^{}`$Co<sub>2</sub>O<sub>9</sub> single crystals along the in-plane direction are plotted as a function of temperature. $`\rho `$ of the present samples reproduces the data in the literature, where the electric conduction for $`M`$=Ba is more metallic than that for $`M`$=Ba . Note that the magnitude of $`S`$ is above 100 $`\mu `$V/K for both samples, owing to the small carrier density.
In contrast to the Ba substitution for Sr, Pb not only works as an acceptor, but also modifies the electronic states of Bi<sub>2</sub>Sr<sub>3</sub>Co<sub>2</sub>O<sub>9</sub>. Figures 6(a) and 6(b) show $`\rho `$ and $`S`$ of Bi<sub>2-x</sub>Pb<sub>x</sub>Sr<sub>3</sub>Co<sub>2</sub>O<sub>9</sub> single crystals along the in-plane direction. $`\rho `$ is decreased drastically by the Pb substitution, which looks similar to Fig. 5(a). However, $`S`$ remains nearly unchanged upon the Pb substitution, which clearly indicates that the Ba and Pb substitutions affect the electronic states differently. The anomalous electronic states of Bi<sub>2-x</sub>Pb<sub>x</sub>Sr<sub>3</sub>Co<sub>2</sub>O<sub>9</sub> are also suggested by the large negative magnetoresistance at low temperatures .
## 4 Summary
In summary, we prepared polycrystalline samples of Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub> and single-crystal samples of Bi<sub>2-x</sub>Pb$`{}_{x}{}^{}M_{3}^{}`$Co<sub>2</sub>O<sub>9</sub> ($`M`$= Sr and Ba; $`x`$=0, 0.2 and 0.4). In Na<sub>1.1-x</sub>Ca<sub>x</sub>Co<sub>2</sub>O<sub>4</sub>, both the resistivity and the thermopower increase with $`x`$, which suggests that the carrier density is decreased by the Ca substitution. In Bi<sub>2-x</sub>Pb$`{}_{x}{}^{}M_{3}^{}`$Co<sub>2</sub>O<sub>9</sub>, the resistivity is decreased by the Ba and Pb substitutions, but the doping effects on the thermopower are different. While the Ba substitution decreases the thermopower, the Pb substitution hardly changes the thermopower. This is a direct example that the resistivity can be lowered while the thermopower is kept large.
## Acknowledgements
The authors would like to thank M. Takano, S. Nakamura, K. Fukuda, S. Kurihara and K. Kohn for fruitful discussions. They also appreciate H. Yakabe, K. Nakamura, K. Fujita and K. Kikuchi for collaboration. They are indebted to I. Tsukada for showing us the unpublished data of (Bi,Pb)<sub>2</sub>Sr<sub>3</sub>Co<sub>2</sub>O<sub>9</sub>.
|
no-problem/9908/cond-mat9908071.html
|
ar5iv
|
text
|
# Modified Extrapolation Length Renormalization Group Equation
## I Introduction
Since the first renormalization group (RG) analysis of surface critical behavior by Lubensky and Rubin a number of subsequent advances in the technique have enabled computation of exponents to $`O(ϵ^2)`$, critical amplitudes, and various cross-over functions. In particular, Diehl and Dietrich have systematically developed a formalism whereby the power and elegance of the field theoretic method has been fully exploited.
For the case of an $`O(1)`$ system confined to the half-space $`z>0`$ it suffices to consider the reduced Hamiltionian
$$[\varphi ]=d^dx\left[\frac{1}{2}(\varphi )^2+\frac{r}{2}\varphi ^2+\frac{u}{4!}\varphi ^4\right]+𝑑S\frac{c}{2}\varphi ^2$$
(1)
the presence of the bounding surface manifesting itself as an additional surface interaction. The parameter $`c`$ takes account of the local enhancement of the reduced temperature in the vicinity of the surface. At lowest order the surface term results in the boundary condition $`\varphi ^{}(0)=c\varphi (0)`$ and thus $`1/c`$ corresponds to the distance over which the order parameter falls to zero when extrapolated away from the surface. For $`c>0`$ the surface orders with the bulk while for $`c<0`$ there is an enhanced tendency to order at the surface. The “special” transition with $`c=0`$ divides these two regimes whereas the “ordinary” transition with $`c=\mathrm{}`$ corresponds to a state where ordering at the surface is completely supressed.
An issue of considerable interest is the manner in which various quantities behave close to the ordinary transition and to this end an expansion in the bare extrapolation length $`1/c`$ has been developed. Among the results is the finding that at the ordinary point energy related quantities involving $`\varphi ^2`$ averages exhibit behavior characterized by relations involving bulk exponents. This in turn is a direct consequence of a vanishing anomalous exponent $`\eta _c`$ associated with the extrapolation length.
The canonical scaling of $`c(l)`$ near the ordinary point is interesting in light of the fact that all analyses addressing the cross-over behavior from the special to ordinary point have utilized a linear RG equation which in dimensional regularization reads
$$\frac{dc}{dl}=(1+\eta _c)c$$
(2)
with $`e^l`$ corresponding to the block spin size. For finite $`c(l)`$ Eq. (2) results from a straightforward application of the field theoretic method to a bulk system with a planar bounding surface. Clearly equation (2) yields a flow which is independent of the proximity to the special transition and thus does not display the expected classical behavior at large $`c(l)`$. It is readily verified, however, that this disparity in scaling behavior from that inferred from the $`1/c`$ expansion is compensated for by the crossover functions exhibiting logarithmic singularities at large $`c(l)`$. When exponentiated, these singularities lead to powers of $`c(l)`$ that in effect undo the incorrect large $`c`$ behavior of Eq. (2) and in turn lead to the appropriate exponents at the ordinary point.
With the above in mind a question of immediate interest is to what extent it is possible to deduce a flow for $`c(l)`$ that correctly interpolates between the special and ordinary points. Constructing such a flow is not immediately obvious since for finite $`c`$ the linear RG Eq. (2) results from the standard program of renormalizing all relevant surface operators. It has been pointed out, however, that near the ordinary point additional care must be used in categorizing operators as relevant or irrelevant in the RG sense. In particular, in the context of the $`1/c`$ expansion it happens that insertions of the formally irrelevant interaction $`(_n\varphi )^2`$ must be considered.
In the following we outline an approach based on the physical notion of an extrapolation length that gives rise to a non-linear RG equation exhibiting the correct behavior both at the special and ordinary fixed points. As in the case of the $`1/c`$ expansion the operator $`(_n\varphi )^2`$ is found to play a important role in the analysis. In addition to yielding an RG flow with the sought-after behavior at both fixed points, the method further elucidates the connection between the extrapolation length and parameter $`c`$. This insight is of interest in its own right since conventional wisdom holds that the connection loses meaning beyond mean-field theory. Making use of an analysis based on phase shifts, we will demonstrate that there is a means of extending the notion of an extrapolation length that remains intact when fluctuations are taken into account. This approach may have the potential for further development since the use of phase shifts in many-body systems is a concept often incorporated into perturbative analyses of arbitrary order.
## II Scattering Formulation
Our analysis begins with the reduced Hamiltonian of Eq. (1) with the undestanding that all volume integrations are to be taken over the half-space $`z>0`$. Within mean field theory, straightforward variation of the Hamiltonian (1) gives rise to the boundary condition
$$c\varphi (0)=\varphi ^{}(0)$$
(3)
Beyond mean field theory there is an approach in which the connection between $`c`$ and an extrapolation length is still apparent. The key observation is that the oscillatory nature of the modes leads to the extrapolation length manifesting itself as a phase shift. In particular, taking the free Hamiltonian $`_0`$ to be
$$_0=d^dx\frac{1}{2}\left[(\varphi )^2+r\varphi ^2\right]+𝑑S\frac{c}{2}\varphi ^2$$
(4)
the modes which diagonalize $`_0`$ satisfy the boundary condition Eq. (3) and are of the form
$$\varphi _k=e^{ikz}f_ke^{ikz}$$
(5)
where the scattering amplitude $`f_k`$ and phase shifts are given by
$$f_k=\frac{c+ik}{cik}=e^{2i\delta _k},\mathrm{tan}\delta _k=k/c$$
(6)
At this level of approximation it is apparent that the presence of the surface interaction serves as an effective scattering potential characterized by phase shifts $`\delta _k`$. Conversely, given phase shifts $`\delta _k`$ the associated inverse extrapolation length satisfies
$$\underset{k0}{lim}\frac{\delta _k}{k}=\frac{1}{c}$$
(7)
The inclusion of fluctuations will alter the effective surface potential. To ascertain how fluctuations influence the extrapolation length one can appeal to the manner in which the phase shifts are modified and then use Eq. (7).
We now carry out this program by addressing the lowest order, one-loop corrections. For a given self energy $`\mathrm{\Sigma }`$ the modes satisfy
$$\frac{d^2\varphi }{dz^2}+(t+q^2E_k)\varphi =\sigma (z)\varphi $$
(8)
where $`\sigma (z)=\mathrm{\Sigma }(z)\mathrm{\Sigma }(\mathrm{})`$, and $`t=r+\mathrm{\Sigma }(\mathrm{})`$ is the suitably shifted bulk reduced temperature. Since we will ultimately be implementing the RG, let us assume that $`\sigma ϵ`$ which then permits Eq. (8) to be solved using standard perturbation theory. In the present circumstance in which results appropriate to one-loop order are sought, first order perturbation theory suffices. We are led to a modified scattering amplitude
$$f_r=f\frac{1}{2ik}𝑑z[\varphi _k^0(z)]^2\sigma (z)$$
(9)
The corresponding fluctuation corrected $`c_r`$ is obtained by considering the small $`k`$ limit of Eq. (7). Noting that $`f1+2ik/c`$ it follows that
$$\frac{1}{c_r}=\frac{1}{c}𝑑z(z+1/c)^2\sigma (z)$$
(10)
which is valid to $`O(ϵ)`$. In the event that higher order corrections are desired it is necessary to take account of additional perturbative corrections to Eq. (8).
## III Renormalization Group
The above results can now be used to determine the renormalization group equation for $`c(l)`$. We proceed with a momentum-shell approach. To this end, we note that translational invariance in directions parallel to the surface allows one to write
$$\varphi (𝐱)=\underset{q}{}\varphi _q(z)e^{i𝐪𝐲}$$
(11)
Integrating out all modes with parallel momentum in the shell $`e^{\mathrm{\Delta }l}<q<1`$ and using the result that the averages obey
$$\varphi _q(z)\varphi _q(z^{})=\frac{1}{2\kappa }\left[e^{\kappa |zz^{}|}ae^{\kappa (z+z^{})}\right]$$
(12)
with
$$a=\frac{c\kappa }{c+\kappa }$$
(13)
and $`\kappa ^2=q^2+t`$, one finds for the subtracted self energy
$$\sigma (z)=\frac{uK_{d1}}{4\kappa _1}\mathrm{\Delta }la_1e^{2\kappa _1z}$$
(14)
where the 1-subscript refers to all quantities being evaluated at $`q=1`$. Rescaling lengths so that $`ce^{\mathrm{\Delta }l}c`$, one arrives at the RG equation
$$\frac{dc}{dl}=c\frac{u^{}K_{d1}}{8}\left[\frac{c\kappa _1}{\kappa _1^3}+\frac{c^2}{2\kappa _1^4}\frac{(c\kappa _1)}{(c+\kappa _1)}\right]$$
(15)
where $`u`$ has now been set to its fixed point value $`u^{}K_{d1}=8ϵ/3`$. For comparison we note that the corresponding equation resulting from a standard momentum shell approach in which only the interactions $`\varphi ^2,\varphi _n\varphi `$ are considered reads
$$\frac{dc}{dl}=c\frac{u^{}K_{d1}}{8}\left[\frac{c\kappa _1}{\kappa _1^3}\right]$$
(16)
Equation (refeq:rc1) is the hard cut-off version of the dimensionally regularized result (2). This latter equation being linear in $`c`$ implies a flow
$$c(l)=e^{l\varphi /\nu }$$
(17)
that is independent of the proximity to special transition. Deviations between equations (16, 15) begin to appear when $`c(l)1`$. The third non-linear term appearing in Eq. (15) for finite $`c`$ is ultraviolet convergent and corresponds to the inclusion of contributions from the formally irrelevant $`(_n\varphi )^2`$ vertex. However, if this last term is expanded in $`1/c`$, it is clear that corrections to the shift and exponent of $`c`$ occur, and that successive terms become increasingly ultraviolet divergent.
Equation (15) leads to some interesting results, which we now address. For the sake of illustration assume that the system is close enough to criticality so that cross-over to the ordinary point has already occurred while $`r(l)1`$. In this case Eq. (15) reduces to
$$\frac{dc}{dl}=c\frac{u^{}K_{d1}}{8}\left[c1+\frac{c^2}{2}\frac{(c1)}{(c+1)}\right]$$
(18)
Although it is possible to solve Eq. (18) exactly, only results accurate to $`O(ϵ)`$ will be considered. There are several ways to go about solving Eq. (18), one of which proceeds by iteratively solving the differential equation to $`O(ϵ)`$. This leads to the explicit solution
$`c(l)`$ $`=`$ $`b(l){\displaystyle \frac{u^{}K_{d1}}{8}}\left[1+{\displaystyle \frac{b(l)^2}{2}}b(l)\mathrm{ln}(1+b(l))\right].`$ (19)
$`b(l)`$ $`=`$ $`b(0)e^{(1+\eta _c)l}`$ (20)
with $`b(0)=c(0)+u^{}K_{d1}/8`$, and $`\eta _c=ϵ/3`$. Another method approximates the roots to the resulting cubic on the right hand side of Eq. (18) and leads directly to the implicit form
$$\left[\frac{c(l)\eta _c}{c(0)\eta _c}\right]^{1\eta _c}\left[\frac{1+c(l)+\eta _c}{1+c(0)+\eta _c}\right]^{\eta _\mathrm{c}}\left[\frac{2/\eta _c+c(0)}{2/\eta _c+c(l)}\right]=e^l$$
(22)
which can also be shown to follow from exponentiation of (20). Inspection of the above results reveals that for $`c(l)1`$ the flow is characterized by $`\eta _c0`$ while close to the ordinary point $`c(l)e^l`$ thus implying a vanishing $`\eta _c`$. Stated differently, the cross-over exponent for the extrapolation length $`\lambda (l)=1/c(l)`$ at the ordinary point is $`\varphi _{ord}=\nu `$. Another interesting feature of Eq. (22) is that it yields an ordinary fixed point of order $`c(\mathrm{})1/ϵ`$.
Past analyses, employing the momentum-shell technique to surface related phenomena, have encountered various technical difficulties. We, therefore, first consider how this method leads to the standard linear equation (2) before attempting an alternate derivation of the modified RG equation (15). As degrees of freedom are integrated out, additional interactions are generated. This is accommodated by taking the surface interaction to be of the form
$$V(z)=\underset{m}{}v_m\delta ^{(m)}(z)$$
(23)
with $`\delta ^{(m)}(z)`$ referring to a $`m^{\mathrm{th}}`$ derivative. For given $`V(z)`$ the coefficients $`v_m`$ are determined by
$$v_m=\frac{()^m}{m!}_0^{\mathrm{}}z^mV(z)𝑑z$$
(24)
Consider the one-loop contribution, which results in the surface interaction $`V(z)=\sigma (z)`$ given by Eq. (14). After rescaling the surface spins by a factor $`e^{\mathrm{\Delta }l(1\eta _1)/2}`$ one finds that the coefficients $`v_m`$ satisfy the recursion relations:
$$\frac{dv_m}{dl}=(1m\eta _1)v_m\frac{uK_{d1}}{2}\frac{()^ma_1}{(2\kappa _1)^{m+2}}$$
(25)
The vertex involving $`\varphi ^2\delta ^{}(z)`$, or equivalently $`\delta (z)\varphi _n\varphi `$ results from the boundary term associated with $`(\varphi )^2`$. Analogous to what is done in bulk phenomena the factor $`\eta _1`$ is chosen so that $`v_1=1/2`$ remains fixed. This leads to the result
$$\eta _1=\frac{u^{}K_{d1}}{8\kappa _1^3}a_1$$
(26)
When this value for $`\eta _1`$ is inserted into Eq. (25) for $`v_0`$, one ends up with the linear RG equation (2). It is interesting that the non-linearity associated with the factor $`a_1`$ is entirely cancelled. Inspection of Eq. (25) governing $`v_m`$ reveals that all interactions with $`m2`$ are irrelevant. However, in the context of calculating various scaling functions, these interactions with $`m2`$ must, in fact, be considered to account for all $`O(ϵ)`$ contributions.
It is possible under certain circumstances to interpret the contribution to $`c`$ from $`\eta _1`$ as feeding in from the $`v_1`$ vertex. This becomes evident upon considering the contribution each surface term makes when inserted into a propagator with legs off the surface. Recall that the $`m^{\mathrm{th}}`$ vertex involves a factor $`\delta ^{(m)}(z)`$, which leads, after an integration by parts, to an interaction $`\delta (z)_z^m[\varphi ^2]`$. The boundary condition then effectively relates this to a term proportional to $`\delta (z)\varphi ^2`$ and leads to a contribution from $`v_m`$ feeding into the recursion for $`c`$. In such case it is possible that the higher order surface interactions will influence the behavior of $`c`$.
To see how the above reasoning leads to the modified RG Eq. (15) assume that all two point interactions ultimately will modify a propagator with legs off the surface and that these legs each carry a transverse momentum q with associated factor $`\kappa _0=\sqrt{q^2+r(l)}`$. The assumption that the legs are off the surface effectively leads to the replacements:
$`_n^{2m+1}\varphi `$ $``$ $`c\kappa _0^{2m}\varphi `$ (27)
$`_n^{2m}\varphi `$ $``$ $`\kappa _0^{2m}\varphi `$ (28)
in all surface interactions. Note that the $`\delta `$ function singularity associated with two or more derivatives makes no contribution because of the fact that the legs are off the surface. When momenta in a thin shell are integrated out, each $`v_m`$ receives a contribution
$$\mathrm{\Delta }v_m=\frac{\mathrm{\Delta }l}{2}uK_{d1}\frac{()^ma_1}{(2\kappa _1)^{m+2}}$$
(29)
For the moment, we will ignore any contribution from an anomalous surface spin rescaling factor $`\eta _1`$. Integrating by parts and invoking the correspondence (28), the $`v_1`$ interaction leads to a term $`\mathrm{\Delta }v_1\delta (z)c\varphi ^2`$. Similarly, the $`v_2`$ vertex involves $`^2[\varphi ^2]`$ and thus leads to a term $`2\mathrm{\Delta }v_2\delta (z)(c^2+\kappa _0^2)\varphi ^2`$. It follows that the total effective contribution from the $`v_1,v_2`$ interactions to $`v_0`$ is
$$\mathrm{\Delta }v_0=\mathrm{\Delta }v_1c+2\mathrm{\Delta }v_2(c^2+\kappa _0^2)$$
(30)
Rescaling spins and lengths, using Eq. (29), and for the moment ignoring the last term involving $`\kappa _0`$, one arrives at the modified RG Eq. (15). Alternatively, identifying $`\mathrm{\Delta }v_m`$ with the moments of $`V(z)`$ using Eq. (24) one recovers Eq. (10) derived from the scattering theory approach. Though $`\eta _1`$ was ignored, the final result is the same when anomalous spin rescaling is included. If spins are rescaled so that $`v_1`$ remains fixed, while there is no contribution to $`v_0`$ in the form of $`\mathrm{\Delta }v_1`$, there is a contribution from $`\eta _1`$ which, because of Eq. (26), yields an identical result.
The above analysis arbitrarily neglected the contributions from the $`v_2`$ vertex in addition to all interactions with $`m>2`$. To determine under what circumstances this is justified note that the vertex $`v_m`$ makes a contribution to $`\mathrm{\Delta }v_0`$ of order
$$\frac{\kappa _0^{m2}}{\kappa _1^{m+2}}\left[(c+\kappa _0)^2+()^m(c\kappa _0)^2\right]a_1$$
(31)
and becomes increasingly negligible for $`\kappa _0\kappa _11`$. This latter condition is satisfied sufficiently close to the critical point when leg momenta $`\text{q}1`$. For the current situation of interest here this condition is well satisfied. However, in view of this assumption, our derivation strictly applies only to the RG Equation (18), in which $`r(l)`$ was neglected.
Generally, when $`r(l)`$ is not negligible it is possible to sum the higher order corrections that were neglected in the above analysis. However, the resulting equation differs from that found using phase shifts(15). The differences arise from the two methods reflecting different conventions on the finite part of $`c(l)`$. This is, of course, compensated for by making a correspondingly different subtraction, depending on which flow is used.
## IV Concluding Remarks
We have presented a method for identifying an effective surface enhancement $`c(l)`$ which utilizes the scattering phase shifts of the localized part of the self energy $`\sigma (z)`$. The resulting cross-over behavior in $`c(l)`$ is found to arise from the inclusion of contributions of various higher order surface interactions, in particular $`(_n\varphi )^2`$. It is interesting that the relatively simple connection involving phase shifts implicitly includes such higher order corrections. Furthermore, the method appeals to characteristics of the entire (smooth) surface interaction rather that its constituent localized (delta function) pieces and thus may lead to further insights into surface phenomena. Indeed, though often convenient, the use of hyper-localized surface distributions is somewhat unphysical and occasionally leads to pathological quantities requiring special limiting procedures and interpretations.
The task of determining a scaling field with the correct scaling behavior at both fixed points is important in its own right. We have performed a preliminary analysis of the scaling functions for the surface susceptibility and surface free energy and find, as expected, that the use of a modified flow similar to Eq. (20) eliminates the logarithmic singularities otherwise found in these quantities. This in turn suggests that the logarithmic singularities in these two quantities are due entirely to the cross-over in $`c(l)`$.
Within his calculation of the local susceptibility Goldschmidt also addressed the exponentiation of the logarithmic singularities appearing in this quantity. In this particular scaling function, however, the introduction of our modified flow is not sufficient to eliminate the singularity. We have also verified this is also the case for the layer susceptibility. This is to be expected, however, since both these quantities involve at least one external point on the surface.
|
no-problem/9908/cond-mat9908311.html
|
ar5iv
|
text
|
# Field-Theoretical Analysis of Singularities at Critical End Points
## 1 Introduction
Critical end points are ubiquitous in nature. They occur when a line of critical temperatures $`T_\mathrm{c}(g)`$, depending on a nonordering field $`g`$ such as chemical potential or pressure, terminates at a line $`g_\sigma (T)`$ of discontinuous phase transitions . Two familiar examples are: (i) the critical end point (CEP) of a binary fluid mixture where the critical line of demixing ends on the liquid-gas coexistence curve; (ii) the CEP of $`{}_{}{}^{H}4`$, the terminus of the lambda line on the gas-phase boundary. On the critical (or lambda) line the disordered and ordered phases separated by it become *identical* critical phases; in the case of a binary fluid with components A and B, the disordered phase corresponds to a homogeneously mixed fluid $`\alpha \beta `$, and the ordered ones to an A-rich phase $`\alpha `$ and a B-rich phase $`\beta `$; in the case of $`{}_{}{}^{H}4`$, the disordered and ordered phases are normalfluid and superfluid, respectively. A crucial feature of a CEP is that a *critical* phase *coexists* with a *noncritical* (‘spectator’) phase $`\gamma `$ there.
Although CEPs were encountered in numerous studies of bulk and interfacial critical phenomena \[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and their references\] in the past decades, they have rarely been investigated for their own sake. This may be due to the expectation that the critical phenomena at a CEP should not differ in any significant way from critical phenomena along the critical line $`T_\mathrm{c}(g)`$ . However, recently it has been pointed out that even the *bulk thermodynamics* of a CEP should exhibit new critical singularities, not observable on the critical line. On the basis of the phenomenological theory of scaling it was predicted that the first-order phase boundary $`g_\sigma (T)`$ should vary near the CEP $`(T=T_\mathrm{e},g=g_\mathrm{e}g_\sigma (T_\mathrm{e}))`$ as
$$g_\sigma (T)g_\sigma ^{\mathrm{reg}}(T)\frac{X_\pm ^0}{(2\alpha )(1\alpha )}\left|t\right|^{2\alpha }$$
(1)
in the limit $`t(TT_\mathrm{e})/T_\mathrm{e}0\pm `$, where $`g_\sigma ^{\mathrm{reg}}(T)`$ is *regular* in $`T`$. Furthermore, the amplitude ratio $`X_+^0/X_{}^0`$ should be equal to the usual *universal* (and hence $`g`$ *independent*) ratio $`A_+/A_{}`$ of specific heat amplitudes $`A_\pm `$. These are defined by writing the specific heat singularity at constant $`gg_e`$ on the critical line as $`A_\pm (g)\left|TT_\mathrm{c}(g)\right|^\alpha `$. In other words, the singularities displayed by $`g_\sigma (T)`$ should be of the same form as those of the bulk free energy of the disordered ($`\alpha \beta `$) and ordered ($`\alpha +\beta `$) phases near $`T_\mathrm{c}(g)`$.
The phenomenological scaling arguments leading to (1) can be extended in a straightforward fashion to determine the singularities $`\rho _g`$, the thermodynamic density conjugate to the nonordering field $`g`$, should display as the CEP is approached along the coexistence boundary . They yield the singular part
$$\rho _g^{\mathrm{sing}}\rho _g\rho _g^{\mathrm{reg}}(T)U_\pm ^0\left|t\right|^\beta +V_\pm ^0\left|t\right|^{1\alpha }.$$
(2)
Having in mind binary fluid mixtures, we take the CEP to lie on the liquid (rather than the gas) side of the coexistence boundary. The quantity $`\rho _g`$ may be identified as the total density of the fluid. For a hypothetical *symmetric* binary fluid whose properties are invariant with regard to simultaneous interchange of its two constituents A and B and their respective chemical potentials $`\mu _\mathrm{A}`$ and $`\mu _\mathrm{B}`$, the amplitudes $`U_\pm ^0`$ would vanish. More generally, this would be true for systems that are describable by a continuum Hamiltonian which is *even* in the order parameter field $`\varphi `$. Just as $`X_+^0/X_{}^0`$, the ratios $`V_+^0/V_{}^0`$ as well as $`U_+^0/U_{}^0`$ (if $`U_\pm ^00`$) are *universal* and can be expressed in terms of standard universal amplitude combinations .
The $`|t|^{2\alpha }`$ singularity of (1) has been checked by Monte Carlo calculations and verified for exactly solvable spherical models ; the $`|t|^{1\alpha }`$ singularity of (2) is consistent with the jump in the slope of $`\rho _g(T)`$ found in mean field and density functional calculations and has also been seen in Monte Carlo simulations .
Here we will address the issue of CEP singularities via the *field-theoretic renormalization group (RG) method*. This approach is known to provide both a conceptually reliable basis of the modern theory of critical phenomena as well as powerful calculational tools (see, e.g., ). Surprisingly, it has not yet been applied with much success to the study of CEPs. We are aware of only one such work that goes beyond the Landau approximation, an $`(ϵ=4d)`$-expansion study of a scalar $`\varphi ^8`$ model with negative $`\varphi ^6`$ term . Its one-loop result is that the critical line and the CEP are controlled by the same, standard $`O(ϵ)`$ fixed point. Unfortunately, the model investigated has *rather special features*: its first-order line does not extend into the disordered phase; as its CEP is approached from the disordered phase, the order parameter $`\varphi `$ becomes critical *and* exhibits a jump to a nonvanishing value upon entering the ordered phase; and no critical fluctuations occur in its ordered phase. Hence it clearly *does not reflect* the typical CEP situation in which the two-phase coexistence surface bounded by the critical line $`T_\mathrm{c}(g)`$ meets the spectator phase boundary in a triple line; its applicability appears to be quite limited.
One should also note that the above RG scenario *differs* from the one found in position-space RG calculations of lattice models with conventional CEPs. In the latter scenario the critical line and the CEP are mapped onto *separate* fixed points, where the CEP fixed point has two relevant RG eigenexponents that are identical to those of the former, plus the additional one $`y=d`$, characteristic of discontinuity fixed points , but absent in .
We conclude that systematic field-theoretic RG studies of appropriate models are urgently needed. A first obvious goal one would hope to achieve is a systematic derivation of the singularities in (1) and (2). This involves showing the equivalence of critical behavior at the CEP and on the critical line.<sup>1</sup><sup>1</sup>1In their excellent survey of the present state of the theory of CEP singularities, Fisher and Barbosa warn that this equivalence, with matching critical spectra of the corresponding two fixed points, need not be an invariable rule, even though their work confirms it, just as our own. Provided the above RG scenario with two separate fixed points prevails, one must prove that the associated critical spectra match, demonstrate the existence of the discontinuity eigenexponent $`y=d`$ and clarify its significance.
We have recently carried out such an investigation. In the sequel, we will briefly describe the main steps of our procedure and our findings. A more detailed exposition of our work will be given elsewhere .
## 2 Models
First, we must choose an appropriate continuum model. Natural candidates are models whose Hamiltonian $`[\varphi ,\psi ]`$ depends on *two* fluctuating densities: a (primary) order parameter field $`\varphi (𝒙)`$ and a secondary (noncritical) density $`\psi (𝒙)`$. The form of $``$ can be guessed on purely phenomenological grounds, but can also be derived by starting from an appropriate lattice model, such as the Blume-Emery-Griffiths model on a $`d`$-dimensional simple cubic lattice. This is a classical spin $`S=1`$ model with Hamiltonian
$`_{\mathrm{BEG}}[𝑺]`$ $`=`$ $`{\displaystyle \underset{𝒊,𝒋}{}}\left[JS_𝒊S_𝒋+KS_𝒊^2S_𝒋^2+L\left(S_𝒊^2S_𝒋+S_𝒊S_𝒋^2\right)\right]`$ (3)
$`{\displaystyle \underset{𝒊}{}}\left(HS_𝒊+DS_𝒊^2\right),S_𝒊=0,\pm 1,`$
where $`𝒊,𝒋`$ indicates summation over nearest-neighbor pairs of sites. We presume the interaction constants $`K`$ and $`J`$ to be positive (‘ferromagnetic’), and $`L0`$. The quantities $`H`$ and $`D`$ correspond respectively to even and odd linear combinations of the chemical potentials $`\mu _\mathrm{A}`$ and $`\mu _\mathrm{B}`$ .
Performing a Gaussian (‘Kac-Hubbard-Stratonovich’) transformation with respect to both $`\{S_𝒊\}`$ and $`\{S_𝒊^2\}`$, one can map the model (3) exactly on a lattice field theory with fields $`\varphi _𝒊\mathrm{}`$ and $`\psi _𝒊\mathrm{}`$ . To make a continuum approximation, we replace these by smoothly interpolating fields $`\varphi (𝒙)`$ and $`\psi (𝒙)`$, and Taylor expand nearby differences $`\varphi _𝒊\varphi _𝒋`$ about their midpoint $`(𝒊+𝒋)/2`$. We thus arrive at a continuum model with the Hamiltonian
$$[\varphi ,\psi ]=_1[\varphi ]+_2[\psi ]+_{12}[\varphi ,\psi ],$$
(4)
$$_1[\varphi ]=d^dx\left[\frac{A}{2}\left(\varphi \right)^2+\frac{a_2}{2}\varphi ^2+\frac{a_4}{4}\varphi ^4h\varphi \right],$$
(5)
$$_2[\psi ]=d^dx\left[\frac{B}{2}\left(\psi \right)^2+\frac{b_2}{2}\psi ^2+\frac{b_4}{4}\psi ^4g\psi \right],$$
(6)
$$_{12}[\varphi ,\psi ]=d^dx\left[\psi \left(d_{11}\varphi +\frac{d_{21}}{2}\varphi ^2\right)+\mathrm{}\psi \left(e_{11}\varphi +\frac{e_{21}}{2}\varphi ^2\right)\right].$$
(7)
The $`\varphi ^3`$ and $`\psi ^3`$ terms have been eliminated by shifts $`\varphi (𝒙)\varphi (𝒙)+\varphi _0`$ and $`\psi (𝒙)\psi (𝒙)+\psi _0`$. Monomials of higher order and higher-order gradient terms have been dropped.
The terms retained in $``$ require explanation. Consider, first, the case of a *symmetric* CEP, in which $`[\varphi ,\psi ]=[\varphi ,\psi ]`$, i.e., $`h=d_{11}=e_{11}=0`$. Owing to this symmetry, $`\varphi `$ and $`\psi `$ do not ‘mix’ and hence may be chosen as the fields that become critical or remain noncritical at the CEP, respectively. To assess the relevance of contributions to $``$ via power counting, the coefficients $`A`$ and $`b_2`$ should be taken dimensionless. Thus $`\varphi `$ has the usual momentum dimension $`[\varphi ]=(d2)/2`$, while $`[\psi ]=d/2`$. In $`_1[\varphi ]`$, we have kept all monomials of the standard $`\varphi ^4`$ Hamiltonian, namely those (except $`\varphi ^3`$) having coefficients with nonnegative momentum dimensions for $`ϵ0`$. For the remaining interaction constants, one finds $`[g]=[e_{21}]=2ϵ/2`$, $`[d_{21}]=ϵ/2`$, $`[B]=2`$, and $`[b_4]=ϵ4`$. This suggests that $`B`$, $`e_{21}`$, and $`b_4`$ may be expected to be irrelevant in the RG sense and hence can be set to zero. If we did this, $``$ would reduce to the Hamiltonian of the dynamic model C ; it would be quadratic in $`\psi `$, so $`\psi `$ could be integrated out exactly. The resulting effective Hamiltonian would be identical to $`_1[\varphi ]`$, up to a change of its parameters $`a_2`$ and $`a_4`$, and an overall constant.
The terms $`B`$, $`e_{11}`$, and $`e_{21}`$ have been introduced because they play a role in the analysis of inhomogeneous states with a liquid-gas interface<sup>2</sup><sup>2</sup>2For $`B>0`$, the mean-field correlation length of $`\psi `$ and hence the width of the interface region of classical kink solutions for $`\psi `$ are nonzero. The terms $`e_{11}`$ and $`e_{21}`$ are significant for relating the problem of critical adsorption of the $`\alpha \beta `$-phase at the $`\alpha \beta |\gamma `$-interface to a wall problem. . Since our main focus here is on bulk critical behavior, we can indeed set $`B=e_{21}=0`$ in the sequel. However, $`b_4`$ must *not* be set to zero because, then, we would *not* be able to describe $`\alpha \beta `$-$`\gamma `$ coexistence, nor would the model have a CEP.
## 3 Landau theory and beyond
Application of the Landau approximation to the model (4)–(7) yields a phase diagram with a CEP and the correct topology, provided its parameter values are in the appropriate range . With the choices $`a_2<0`$, $`d_{21}>0`$ (aside from $`a_4>0`$ and $`b_4>0`$), one finds a critical line with
$$\psi =\psi _\mathrm{c}a_2/d_{21},b_2>b_{2\mathrm{e}}\frac{d_{21}^2}{2a_4}\frac{b_4a_2^2}{d_{21}^2},$$
(8)
and $`g=g_\mathrm{c}=b_2\psi _\mathrm{c}+b_4\psi _\mathrm{c}^3`$ that is truncated by the liquid-gas coexistence boundary at the CEP $`b_2=b_{2\mathrm{e}}`$, $`\psi =\psi _\mathrm{c}`$ (cf. case (a) in Figs. 8 and 9 of ). A detailed exposition of the Landau theory, with results for the phase boundaries and equilibrium values of $`\varphi `$ and $`\psi `$ in the various phases will be given in .
To go beyond Landau theory, we use perturbation theory in combination with the RG.<sup>3</sup><sup>3</sup>3Implicit in our analysis is the well-founded assumption that the CEP, the critical line, and the first-order line $`g_\sigma `$ will survive the inclusion of fluctuation corrections. Writing $`\psi =\psi _{\mathrm{ref}}+\stackrel{ˇ}{\psi }`$, we expand $``$ about a reference value $`\psi _{\mathrm{ref}}`$, which we take as the mean-field value of $`\psi `$ at a reference point in the $`\alpha \beta `$ phase away from the critical line. This gives
$$[\varphi ,\psi ]=[0,\psi _{\mathrm{ref}}]+^{}[\varphi ,\stackrel{ˇ}{\psi };\psi _{\mathrm{ref}}],$$
(9)
$$^{}[\varphi ,\stackrel{ˇ}{\psi }]=d^dx\left[\frac{A}{2}\left(\varphi \right)^2+\underset{k=2,4}{}\frac{\stackrel{ˇ}{a}_k}{k}\varphi ^k+\underset{l=1}{\overset{4}{}}\frac{\stackrel{ˇ}{b}_l}{l}\stackrel{ˇ}{\psi }^l+\frac{1}{2}d_{21}\varphi ^2\stackrel{ˇ}{\psi }\right],$$
(10)
with $`\stackrel{ˇ}{a}_2=a_2+d_{21}\psi _{\mathrm{ref}}`$, $`\stackrel{ˇ}{a}_4=a_4`$, $`\stackrel{ˇ}{b}_1=g+b_2\psi _{\mathrm{ref}}+b_4(\psi _{\mathrm{ref}})^3`$, $`\stackrel{ˇ}{b}_2=b_2+3b_4(\psi _{\mathrm{ref}})^2`$, $`\stackrel{ˇ}{b}_3=3b_4\psi _{\mathrm{ref}}`$, and $`\stackrel{ˇ}{b}_4=b_4`$, where $`\stackrel{ˇ}{b}_2>0`$.
If $`^{}`$ is taken into account by perturbation theory, the critical and first-order lines, and hence the CEP, get shifted. When studying the behavior near these lines, one must use their corresponding new locations that are compatible with the level of approximation. Suppose that a point $`(T_\mathrm{c}(g_\mathrm{c}),g_\mathrm{c})`$ on the critical line is approached, which may be the CEP ($`g_\mathrm{c}=g_\mathrm{e}`$). Let us ignore the $`\stackrel{ˇ}{\psi }^3`$ and $`\stackrel{ˇ}{\psi }^4`$ terms for the present. Then $``$ becomes quadratic in $`\stackrel{ˇ}{\psi }`$, so that $`\stackrel{ˇ}{\psi }`$ can be integrated out exactly. The resulting effective Hamiltonian $`_{\mathrm{eff}}[\varphi ]`$ is given by $`_1[\varphi ]`$, with the replacements $`a_2\stackrel{ˇ}{a}_2(\stackrel{ˇ}{b}_1d_{21}/\stackrel{ˇ}{b}_2)`$ and $`a_4\stackrel{ˇ}{a}_4d_{21}^2/2\stackrel{ˇ}{b}_2`$, plus a $`\varphi `$-independent term $`d^dxf_\mathrm{G}(\stackrel{ˇ}{b}_2,\stackrel{ˇ}{b}_1)`$ corresponding to the free energy of the Hamiltonian $`_\mathrm{G}[\stackrel{ˇ}{\psi }]=d^dx[(\stackrel{ˇ}{b}_2/2)\stackrel{ˇ}{\psi }^2+\stackrel{ˇ}{b}_1\stackrel{ˇ}{\psi }]`$. As usual, we may presume that parameters such as $`\stackrel{ˇ}{a}_2`$, $`\stackrel{ˇ}{a}_4`$ have a Taylor expansion in $`t=(TT_\mathrm{c}(g_\mathrm{c}))/T_\mathrm{c}(g_\mathrm{c})`$ and $`\delta g=(gg_\mathrm{c})/g_\mathrm{c}`$ near $`(T_c,g_c)`$. Both $`[0,\psi _{\mathrm{ref}}]`$ as well as $`f_\mathrm{G}`$ are regular at $`(T_c,g_c)`$. Hence the singular part of the total free energy results solely from $`_{\mathrm{eff}}`$. That it has the usual scaling form near criticality can be demonstrated via a standard field-theoretic RG analysis, either of $`_{\mathrm{eff}}`$ directly, or of the equivalent model-C-type Hamiltonian from which it originated. Thus the critical behavior on the critical line is the same as at the CEP, provided our omission of the $`\stackrel{ˇ}{\psi }^3`$ and $`\stackrel{ˇ}{\psi }^4`$ terms was justified.
It is instructive to consider these nonlinearities first in the case $`d_{21}=0`$ of a massive field $`\stackrel{ˇ}{\psi }`$ decoupled from $`\varphi `$. Let $`𝔊_{\mathrm{tra}}[\mathrm{\Psi }]`$ be the generator of transformations of Hamiltonians $`\mathcal{H}[\stackrel{ˇ}{\psi }]`$ induced by the change of variable $`\stackrel{ˇ}{\psi }\to \stackrel{ˇ}{\psi }+\mathrm{\Psi }`$:
$$𝔊_{\mathrm{tra}}[\mathrm{\Psi }]\mathcal{H}[\stackrel{ˇ}{\psi }]\equiv \int d^dx\left[\mathrm{\Psi }(𝒙)\frac{\delta \mathcal{H}[\stackrel{ˇ}{\psi }]}{\delta \stackrel{ˇ}{\psi }(𝒙)}-\frac{\delta \mathrm{\Psi }(𝒙)}{\delta \stackrel{ˇ}{\psi }(𝒙)}\right].$$
(11)
Choosing
$$\mathrm{\Psi }_k=\frac{\stackrel{ˇ}{b}_k}{k\stackrel{ˇ}{b}_2}\left[\stackrel{ˇ}{\psi }^{k-1}+\delta (\mathrm{𝟎})\frac{k-1}{\stackrel{ˇ}{b}_2}\stackrel{ˇ}{\psi }^{k-3}\right],\qquad k=3,4,$$
(12)
gives
$$\frac{\stackrel{ˇ}{b}_k}{k}\int d^dx\,\stackrel{ˇ}{\psi }^k=𝔊_{\mathrm{tra}}\left[\mathrm{\Psi }_k\right]\mathcal{H}_\mathrm{G}[\stackrel{ˇ}{\psi }]+\delta _{k,4}c_4\int d^dx,\qquad k=3,4,$$
(13)
with $`c_4=3\stackrel{ˇ}{b}_4[\delta (\mathrm{𝟎})/2\stackrel{ˇ}{b}_2]^2`$, where $`\delta (\mathrm{𝟎})=\int d^dq/(2\pi )^d`$ is a cutoff-dependent constant. Hence, at the *Gaussian* fixed point $`\mathcal{H}_\mathrm{G}[\stackrel{ˇ}{\psi }]`$, the $`\stackrel{ˇ}{\psi }^3`$ and $`\stackrel{ˇ}{\psi }^4`$ terms correspond to a *redundant* operator, and a *redundant* operator plus a constant, respectively. More generally, Wegner \[22, Sec. III.G.2\] has shown that *any translationally invariant local operator* can be represented as a redundant operator plus a constant at such a noncritical Gaussian fixed point.
To generalize these considerations to the case $`d_{21}\ne 0`$ with $`\stackrel{ˇ}{b}_1\ne 0`$, we insert
$$𝔊_{\mathrm{tra}}\left[\mathrm{\Psi }_k\right]\mathcal{H}_\mathrm{G}[\stackrel{ˇ}{\psi }]=𝔊_{\mathrm{tra}}\left[\mathrm{\Psi }_k\right]\mathcal{H}^{}[\varphi ,\stackrel{ˇ}{\psi }]-\int d^dx\left(\frac{d_{21}}{2}\varphi ^2+\stackrel{ˇ}{b}_1\right)\mathrm{\Psi }_k$$
(14)
into (13). The result tells us that, to first order in $`\stackrel{ˇ}{b}_3`$, the $`\stackrel{ˇ}{\psi }^3`$ term is equivalent to shifts of $`\stackrel{ˇ}{a}_2`$, $`\stackrel{ˇ}{b}_2`$, and the constant part of $`\mathcal{H}`$, plus a generated $`\varphi ^2\stackrel{ˇ}{\psi }^2`$ contribution. Likewise, the $`\stackrel{ˇ}{\psi }^4`$ term corresponds (to order $`\stackrel{ˇ}{b}_4`$) to the generation of $`\varphi ^2\stackrel{ˇ}{\psi }^3`$ and $`\stackrel{ˇ}{\psi }^3`$ contributions, and shifts of $`d_{21}`$ and $`\stackrel{ˇ}{b}_1`$. Owing to the high naive dimensions $`2(d-1)`$ and $`(5d-4)/2`$ of the operators $`\varphi ^2\stackrel{ˇ}{\psi }^2`$ and $`\varphi ^2\stackrel{ˇ}{\psi }^3`$, we may trust that both produce only irrelevant corrections and hence may be dropped. Consequently, the effects of the $`\stackrel{ˇ}{\psi }^3`$ and $`\stackrel{ˇ}{\psi }^4`$ terms can be absorbed through shifts of the parameters $`\stackrel{ˇ}{a}_2,\ldots ,\stackrel{ˇ}{b}_1`$ of the Hamiltonian with $`\stackrel{ˇ}{b}_3=\stackrel{ˇ}{b}_4=0`$, apart from irrelevant corrections. This means a change of the locations of the critical and first-order lines, and a corresponding adjustment of, e.g., the temperature scaling field. In summary, we arrive at an effective $`\varphi ^4`$ Hamiltonian $`\mathcal{H}_{\mathrm{eff}}[\varphi ]`$, *irrespective of whether the critical line or the CEP is approached*.
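As a side remark (our own power counting, assuming the standard Gaussian dimension $`(d-2)/2`$ for $`\varphi `$ and the dimension $`d/2`$ for the noncritical field $`\stackrel{ˇ}{\psi }`$, which has no gradient term in (10)), the quoted values follow from

$$[\varphi ^2\stackrel{ˇ}{\psi }^2]=2\cdot \frac{d-2}{2}+2\cdot \frac{d}{2}=2(d-1),\qquad [\varphi ^2\stackrel{ˇ}{\psi }^3]=(d-2)+\frac{3d}{2}=\frac{5d-4}{2},$$

both of which exceed $`d`$ for $`d>2`$, so that these operators are indeed irrelevant by naive power counting.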
Next, we turn to the issue of the *discontinuity eigenexponent* $`y=d`$. Consider changes $`g_\mathrm{e}\to g_\mathrm{c}=g_\mathrm{e}+\delta g`$, $`\stackrel{ˇ}{a}_{2\mathrm{e}}\to \stackrel{ˇ}{a}_{2\mathrm{c}}=\stackrel{ˇ}{a}_{2\mathrm{e}}+\delta a_2`$, away from the CEP such that the theory *remains critical*. As we have seen above, varying $`g`$ (i.e., $`\stackrel{ˇ}{b}_1`$) alters the configuration-independent part of $`\mathcal{H}`$, i.e., the coefficient, $`\mu _0`$, of the ‘volume operator’ $`\int d^dx`$. Since $`\mu _0`$ trivially scales with the exponent $`d`$ under RG transformations, it is formally relevant; but it does not contribute to the critical behavior at continuous phase transitions. Wegner therefore proposed to call it a ‘special’ scaling field. The above variation within the critical manifold implies a change $`\delta \mu _0\propto (T_\mathrm{c}-T_\mathrm{e})`$ of $`\mu _0`$. This is the analog of the eigenperturbation with eigenexponent $`y=d`$ found at the CEP fixed point in position-space RG calculations .
The above analysis of the symmetric CEP can be extended to the *nonsymmetric* case, in which the $`\varphi \to -\varphi `$ symmetry of $`\mathcal{H}[\varphi ,\psi ]`$ is broken . As expected, the principal modifications are of a geometrical nature: the first-order surface bounded by the critical line is no longer confined to the $`h=0`$ plane, and the ordering field $`h`$ and the temperature variable $`t`$ mix in the scaling fields.
Our analysis confirms that the singular part of the free energy of the $`\alpha \beta `$ and $`\alpha +\beta `$ phases has the usual scaling form anticipated in the phenomenological scaling theory, both on the critical line and at the CEP. Hence it is clear that the limiting forms (1) and (2) must hold and can be derived in a similar fashion as in phenomenological investigations . We will therefore restrict ourselves here to a few remarks. The singularity (1) in $`g_\sigma (T)`$ follows by exploiting the equality of the free energies (grand potentials) of the liquid ($`\alpha \beta `$, $`\alpha `$, $`\beta `$) and the spectator ($`\gamma `$) phases at coexistence. To derive the behavior of the nonordering density $`\rho _g=\langle \psi \rangle `$ near the CEP, we have generalized the field-theoretic RG analysis to the $`g`$-dependent model-C-type Hamiltonians that result in the symmetric and nonsymmetric cases. In the nonsymmetric case $`\psi `$ couples not only to the energy density ($`|t|^{1-\alpha }`$) but also to the order parameter, which produces the additional $`|t|^\beta `$ singularity in (2).
We gratefully acknowledge enjoyable discussions with B. N. Shalaev, helpful correspondence with R. K. P. Zia, and partial support by the Deutsche Forschungsgemeinschaft (DFG) via Sonderforschungsbereich 237 and the Leibniz program.
# Deep spectroscopy of distant 3CR radio galaxies: the data
## 1 Introduction
The emission line properties of powerful distant ($`z\mathrm{}>0.5`$) radio galaxies are striking. Their emission line luminosities are large, with the rest–frame equivalent width of the \[OII\] 3727 line frequently exceeding 100Å (e.g. Spinrad 1982). Indeed, the strong correlation between emission line luminosity and radio power (e.g. Rawlings & Saunders 1991) was the key factor in enabling spectroscopic completeness to be achieved for a large sample of powerful radio galaxies (the revised 3CR catalogue; Laing et al. 1983). The line emission of the distant 3CR radio galaxies is also seen to be spatially extended over regions that can be as large as 100 kpc and is frequently elongated along the direction of the radio axis (e.g. McCarthy 1988, McCarthy et al. 1995).
The source of ionisation of this gas has been a long standing question. Robinson et al. \[Robinson et al. 1987\] found that optical emission line spectra of most low redshift ($`z\mathrm{}<0.1`$) radio galaxies are well explained using photoionisation models, and a similar result was found for a composite spectrum of radio galaxies with redshifts $`0.1<z<3`$ \[McCarthy 1993\]. Photoionisation models are also supported by orientation–based unification schemes of radio galaxies and radio–loud quasars (e.g. Barthel 1989), in which all radio galaxies host an obscured quasar nucleus: the flux of ionising photons required to produce the observed luminosities of the emission line regions can be shown to be comparable to that produced by radio–loud quasars at the same redshift (e.g. see McCarthy 1993). On the other hand, detailed studies of individual sources (e.g. 3C277.3; van Breugel et al 1985; 3C171; Clark et al. 1998) have revealed features such as enhanced nebular line emission, high velocity gas components, and large velocity dispersions coincident with the radio hotspots or with bends in the radio jets, indicating that the morphology and kinematics of the gas in some sources are dominated by shocks associated with the radio source. The ionisation state of the gas in these regions is also consistent with that expected from shock ionisation (e.g. Villar–Martín et al. 1999). Bicknell et al. \[Bicknell et al. 1997\] considered the energy input to the emission line regions of Gigahertz-Peaked Spectrum (GPS) and Compact Steep Spectrum (CSS) sources from the shocks associated with the advance of the radio jet and cocoon, and showed that the energy supplied by the shocks to the interstellar medium is sufficient to account for the observed line emission. The relative importance of shocks and photoionisation in producing the emission line properties of the general radio galaxy population therefore remains an open question.
Another important issue is the varied kinematics seen in the emission line regions. At low redshifts the emission line properties of the 3CR radio galaxies have been intensively studied (e.g. Baum et al. 1992 and references therein); a variety of kinematics are seen, from galaxies consistent with simple rotation through to those classified as ‘violent non-rotators’ with large turbulent velocities. At higher redshifts, McCarthy et al. \[McCarthy et al. 1995, McCarthy et al. 1996\] have studied a large sample of 3CR radio galaxies with low spectral and spatial resolution, and find that the velocity full–width–half–maxima (FWHM) are significantly higher than those at low redshifts (see also Baum et al. 1998), often exceeding 1000 km s <sup>-1</sup>, and large velocity shears are seen. The exceptional nature of the kinematics has been reinforced by more detailed studies of individual sources (e.g. Spinrad & Djorgovski 1984, Tadhunter 1991, Meisenheimer & Hippelein 1992, Hippelein & Meisenheimer 1992, Stockton et al. 1996, Neeser et al. 1997). The emission line properties of these high redshift radio galaxies are evidently more extreme than those at low redshift (and hence of lower radio power) in more than just their luminosities.
The origin of the emission line gas itself is another unresolved issue. Typically $`10^8`$ to $`10^9M_{\odot }`$ of ionised gas are estimated to be present around these objects (McCarthy 1993 and references therein), significantly more than found in quiescent low redshift ellipticals. The gas may have an origin external to the radio galaxy, being either associated with the remnants of a galaxy merger \[Heckman et al. 1986, Baum & Heckman 1989\], or gas brought in by a massive cooling flow in a surrounding intracluster medium; some support for the latter hypothesis is given by the detection of extended X–ray emission around a number of powerful distant radio galaxies (Crawford and Fabian 1996 and references therein), although the higher than primordial metallicity of the gas (as indicated by the strong emission lines of, for example, oxygen, neon, magnesium and sulphur) dictates that the gas must have been processed within stars at some point in its past. Alternatively, the gas may be left over from the formation phase of these massive galaxies, perhaps expelled from the galaxy either in a wind following an earlier starburst phase or more recently by the shocks associated with the radio source. If the gas has an origin external to the host galaxy, then it is important to know what the connection is, if any, between the origin of this gas and the onset of the radio source activity.
The properties of the continuum emission of powerful distant radio galaxies are equally interesting. At near infrared wavelengths the galaxies follow a tight K$`z`$ Hubble relation \[Lilly & Longair 1984\] and their host galaxies have colours and radial light profiles consistent with being giant elliptical galaxies which formed at large redshifts \[Best et al. 1998b\]. At optical wavelengths, however, powerful radio galaxies beyond redshift $`z0.6`$ show a strong, but variable, excess of blue emission, generally aligned along the radio axis \[McCarthy et al. 1987, Chambers et al. 1987\]. Using Hubble Space Telescope (HST) images of a sample of 3CR radio galaxies with redshifts $`z1`$, we have shown that the nature of this alignment differs greatly from galaxy to galaxy, in particular becoming weaker as the linear size of the radio source increases \[Best et al. 1996, Best et al. 1997\]. It is clear that a number of different physical processes contribute to the continuum alignment effect, but less clear which processes are the most important (for reviews see e.g. McCarthy 1993, Röttgering & Miley 1996).
To study the emission line gas properties of these galaxies, our multi–waveband imaging project on the redshift one 3CR galaxies has been expanded to include deep spectroscopic observations, producing a combined dataset of unparalleled quality. In the current paper the basic results of the spectroscopic program are presented. The layout is as follows. In Section 2, details concerning the sample selection, the observations and the data reduction are presented. Section 3 contains the direct results of these observations, in the form of extracted one–dimensional spectra, two–dimensional studies of the \[OII\] 3727 emission line structures, tables of spectral properties, and a brief description of the individual sources. The results are summarised in Section 4. An accompanying paper (Best et al 1999; hereafter Paper 2) investigates the emission line ratios and velocity structures of the sample as a whole, and the consequences of these for the origin of the ionisation and kinematics of these galaxies. A later paper will address the nature of the continuum emission.
Throughout the paper, values of the cosmological parameters of $`\mathrm{\Omega }=1`$ and $`H_0=50`$ km s <sup>-1</sup> Mpc<sup>-1</sup> are assumed. For this cosmology, 1 arcsec corresponds to 8.5 kpc at redshift $`z=1`$.
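As an aside (our own illustration, not part of the original text), the quoted angular scale for this $`\mathrm{\Omega }=1`$, $`H_0=50`$ km s <sup>-1</sup> Mpc<sup>-1</sup> cosmology can be reproduced with a few lines of Python, using the closed-form Einstein–de Sitter expression for the angular diameter distance:

```python
import math

C_KM_S = 2.998e5                 # speed of light [km/s]
H0 = 50.0                        # Hubble constant [km/s/Mpc]
ARCSEC_IN_RAD = math.pi / (180.0 * 3600.0)

def angular_scale_kpc_per_arcsec(z):
    """kpc per arcsec for an Omega=1 (Einstein-de Sitter) universe."""
    # comoving distance: D_C = (2c/H0) * (1 - 1/sqrt(1+z))
    d_c = 2.0 * C_KM_S / H0 * (1.0 - 1.0 / math.sqrt(1.0 + z))   # [Mpc]
    d_a = d_c / (1.0 + z)                                        # angular diameter distance [Mpc]
    return d_a * ARCSEC_IN_RAD * 1.0e3                           # [kpc/arcsec]

print(round(angular_scale_kpc_per_arcsec(1.0), 1))   # -> 8.5, as quoted above
```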
## 2 Observational Details
### 2.1 Sample selection and observational set-up
The galaxies were drawn from the essentially complete sample of 28 3CR radio galaxies with redshifts $`0.6<z<1.8`$ which we have intensively studied using the HST, the VLA and UKIRT (e.g. Best et al. 1997). From this sample, spectroscopic studies were restricted initially to those 18 galaxies with redshifts $`0.7<z<1.25`$, the upper redshift cut–off corresponding to that at which the 4000Å break is redshifted beyond an observed wavelength of 9000Å and the lower redshift cut-off being set by telescope time limitations. Of these 18 galaxies, 3C41 ($`z=0.795`$), 3C65 ($`z=1.176`$), 3C267 ($`z=1.144`$) and 3C277.2 ($`z=0.766`$) were subsequently excluded. In the cases of 3C65 and 3C267, their exclusion was due to partially cloudy conditions during one observing night resulting in a poor data quality for these two observations. The omission of 3C41 and 3C277.2 was due to constraints of telescope time at the relevant right ascensions: the decision not to observe these particular two galaxies was based solely upon them having the lowest redshifts in the sample at those right ascensions. The exclusion of these four galaxies should not introduce any significant selection effects.
The remaining 14 galaxies were observed on July 7-8 1997 and February 23-24 1998, using the dual–beam ISIS spectrograph on the William Herschel Telescope (WHT). The 5400Å dichroic was selected since it provided the highest throughput at short wavelengths. The R158B grating was used in combination with a Loral CCD in the blue arm of the spectrograph. This low spectral resolution ($`12`$Å) grating provided the largest wavelength coverage (3700Å) and maximized the signal–to–noise at short wavelengths by decreasing the importance of the CCD read-out noise, which nevertheless remained the dominant source of noise at wavelengths below about 3400Å. This set–up enabled accurate measurement of many emission line strengths and a determination of the slope of the continuum emission at sufficiently short wavelengths that any contribution of the evolved stellar population will be negligible; at such wavelengths the spectral characteristics of the aligned emission alone are measured and can be used to pin down the physical processes contributing to the alignment effect. During the July run, the wavelength range sampled by the CCD was set to span from below the minimum useful wavelength ($`3250`$Å) to longward of the dichroic. During the February run, the (different) Loral CCD had a charge trap in the dispersion direction at about pixel 1000, reducing the reliability of data at longer wavelengths; the wavelength range was tuned to sample from 3275Å up to the charge trap at about 5100Å.
In the red arm of the spectrograph the R316R grating was used in combination with TEK CCD, providing a spatial scale of 0.36 arcsec per pixel, a dispersion of 1.49Å per pixel and a spectral resolution of about 5Å. The wavelength range of about 1500Å was centred on the wavelength given in Table 1 for each galaxy, tuned to cover as much as possible of the range from approximately 3550Å to 4300Å in the rest–frame of the galaxy whilst remaining below a maximum observed wavelength of 9000Å. This higher spectral resolution set-up in the red arm allows a much more detailed investigation of the velocity structures of the emission line gas as seen in the very luminous \[OII\] 3727 emission line, but still provides sufficient wavelength coverage to include the 4000Å break, the Balmer continuum break at 3645Å and a number of Balmer emission lines.
### 2.2 Observations and data reduction
Long–slit spectra of the 14 galaxies were taken with total integration times of between 1.5 and 2 hours per galaxy; the observations were split between 3 or 4 separate exposures in the red arm to assist in the removal of cosmic rays; the blue arm observations were split between only 2 exposures since shorter exposures would have had a more significant read–noise contribution. The slit was orientated either along the radio axis or along the axis of elongation of the optical–UV emission. Full details of the observations are provided in Table 1.
The seeing was typically 0.8 to 1 arcsec during the July run, and between $`1`$ and $`1.25`$ arcsec during the February observations. The first half of the February 23 night was partially cloudy, hampering the observations of 3C65 and 3C267 as discussed above. The observations of 3C217 may have suffered partial cloud interference and be non–photometric, although the approximate agreement between the 7500Å flux density determined from the spectrum and that extracted from the equivalent region of an HST image of this galaxy, convolved to the same angular resolution, suggest that this was not significant. Conditions during the second half of that night and the other three nights were photometric.
The data were reduced using standard packages within the iraf noao reduction software. The raw data frames were corrected for overscan bias, and flat–fielded using observations of internal calibration lamps with the same instrumental set-up as the object exposures: ie. in the red arm, separate flat fields were constructed for each galaxy, since each was observed with a different observed wavelength range; these flat field observations were interspersed with the series of on–source exposures to minimise fringing effects. The sky background was removed, taking care not to include extended line emission in the sky bands. The different exposures of each galaxy were then combined, removing the cosmic ray events, and one dimensional spectra were extracted. The data were wavelength calibrated using observations of CuNe and CuAr arc lamps, and accurate flux calibration was achieved using observations of the spectrophotometric standard stars GD190, EG79, G9937 and LDS749b, again observed using exactly the same instrumental setup as each galaxy and corrected for airmass extinction.
### 2.3 The non-linearity of the Loral CCD
The far greater efficiency of the Loral CCD at wavelengths below 4000Å than any other CCD available at the time of these observations ($`75`$% as compared with $`35`$% quantum efficiency at 3500Å) offered an unrivalled opportunity for study at these wavelengths. However, the Loral CCD used during the July run had a slightly non–linear response curve, giving rise to a minor problem concerning flux calibration of the blue arm data from this run. In order to assess the extent of this problem, a sequence of flat field observations of the internal calibration lamps were taken, with exposures of 0, 1, 2, 3, 5, 7, 10, 15, 20, 30, 60, 120, 300, 120, 60, 30, 20, 15, 10, 7, 5, 3, 2, 1, 0 seconds, the average of the frames for each exposure time on the increasing and decreasing exposure time branches being taken to account for any systematic time variation in the intensity of the calibration lamp. A small (relatively uniform intensity) region of the CCD was selected and, after subtraction of the 0 second bias frame, the mean counts per pixel in that region was measured for each different exposure time frame. The frame providing on average just over 100 counts per pixel was arbitrarily declared to be ‘correct’; the ‘expected’ count level for this CCD region in the other frames was then calculated by scaling by the exposure time, and compared to the observed counts. This process was repeated for a large number of different regions on the CCD, and also with a smaller sample of flat–field observations taken the following night.
The results of this analysis are presented in Figure 1, which shows that the Loral CCD is non–linear at the $`\mathrm{}<10`$% level at count levels below $`800`$ counts per pixel, but above that level the non–linearity increases sharply. The scatter around a parameterised fit to the non–linearity curve in the 300 to 500 counts range can be explained by Poisson noise statistics alone, indicating that the non–linearity was highly repeatable, varying neither with time nor with position on the CCD, and thus allowing the small non–linearity at these count levels to be accurately calibrated out.
The faintness of the radio galaxies being studied meant that the detected counts per pixel in the on–source exposures, including both sky and object counts, fell automatically within the range 50 to 800 counts per pixel. Both the flat field and the standard star exposures in the blue arm during the July run were built up by summing a series of images, each of which was kept short to have maximum count levels below 1000 counts per pixel. The parameterised curve shown in Figure 1 was then divided into the scientific exposures, after bias subtraction but before flat fielding and other calibration. Such a correction was also applied to the flat–field exposures and to the observations of standard stars. In this way, any systematic offset introduced by the application of the non-linearity curve to the object will be roughly cancelled by its application to the calibrator, reducing any errors to $`\mathrm{}<2`$%, far below the other uncertainties related to the calibration procedure.
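As an illustrative sketch only (the function names and the parameterised form of the response curve are our own placeholders, not the actual reduction scripts), the correction step described above could be coded along the following lines:

```python
import numpy as np

def response_ratio(mean_counts, exp_times, ref_index):
    """Observed/expected count ratio from a sequence of calibration-lamp exposures.

    mean_counts : mean bias-subtracted counts per pixel in a CCD sub-region,
                  one value per exposure
    exp_times   : corresponding exposure times
    ref_index   : index of the frame declared 'correct' (~100 counts per pixel)
    """
    mean_counts = np.asarray(mean_counts, dtype=float)
    exp_times = np.asarray(exp_times, dtype=float)
    expected = mean_counts[ref_index] * exp_times / exp_times[ref_index]
    return mean_counts / expected   # ratio to be parameterised versus observed counts

def linearize(frame, bias, correction):
    """Divide the parameterised response curve into a bias-subtracted frame."""
    data = frame - bias
    return data / correction(data)
```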
## 3 Results
The resulting one dimensional spectra, extracted from the central 4.3 arcsec ($`35`$ kpc) region along the slit direction in each of the blue and red arms, are shown in Figures 2 to 15 (a & b). In Table 2 are tabulated the fluxes of the various emission lines relative to \[OII\] 3727 and their equivalent widths, together with the mean flux density of the continuum in various wavelength regions. These flux ratios and flux densities (although not the plotted one-dimensional spectra, to allow comparison with previously published data) have been corrected for galactic extinction using the Milky Way HI column density data of Burstein and Heiles \[Burstein & Heiles 1982\], quoted in Table 2, and the parameterised galactic extinction law of Howarth \[Howarth 1983\]. These extinction corrections are $`\mathrm{}<10`$% for most sources, but exceed a factor of 2 at the shortest wavelengths for the low galactic latitude source 3C22.
The emission line flux ratios and continuum flux densities are tabulated only for the single extracted spectrum. Even these very deep spectra do not have high enough signal–to–noise in the blue continuum of most of the galaxies to investigate in detail variations in the continuum colour along the spatial direction of the slit. Variations in the intensity, velocity and FWHM of the emission lines along the spatial direction of the slit are readily apparent, and are considered below in the study of the \[OII\] 3727 emission line.
Also presented in Table 2 are the observed strengths of the 4000Å break, as determined by the ratio of the mean continuum flux between 4050Å and 4250Å to that between 3750Å and 3950Å \[Bruzual 1983\]; note that due to the presence of the excess aligned optical–UV emission, the strength of this break cannot be used directly to age the stellar populations of these galaxies, although the galaxies with only weak alignment effects (e.g. 3C441) do show fairly strong breaks indicative of evolved stellar populations.
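For reference, the break amplitude quoted here is just the ratio of two mean continuum levels; a minimal sketch (assuming a rest–frame wavelength grid and ignoring any masking of emission lines) is:

```python
import numpy as np

def d4000(wavelength, flux):
    """4000A break strength: mean flux in 4050-4250A divided by that in 3750-3950A."""
    red = (wavelength >= 4050.0) & (wavelength <= 4250.0)
    blue = (wavelength >= 3750.0) & (wavelength <= 3950.0)
    return float(np.mean(flux[red]) / np.mean(flux[blue]))
```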
To study the velocity structure of the \[OII\] 3727 emission line, a two–dimensional region around this emission line was extracted \[Figures 2 to 15(c)\], and from this a series of one dimensional spectra were extracted from spatial regions of width 4 pixels (1.44 arcsec), with the extraction centre stepped in units of 2 pixels (0.72 arcsec, $`\frac{2}{3}`$ of a seeing profile). Each extracted spectrum was then analysed using the following automated procedure.
1. The extracted spectrum was fitted to find the best–fitting Gaussian, allowing for continuum subtraction. If this had a velocity FWHM greater than the instrumental resolution (determined by measuring the FWHM of unblended sky lines) and had an integrated signal–to–noise ratio greater than 5, then it was accepted; otherwise no fit was made at this spatial position. (The exclusion on the basis of FWHM was necessary to avoid selection of single pixel spikes, but it should be noted that real features may have a measured FWHM less than the velocity resolution, and thus be excluded, if they are detected with only low signal–to–noise. However, all of the extracted Gaussian profiles are found to have deconvolved FWHM in excess of 200 km s <sup>-1</sup> (Figures 2 to 15f), and so it is extremely unlikely that any real features have been excluded by this method.)
2. The spectrum was then fitted using a combination of two Gaussians. This fit was preferred to the single Gaussian fit only if both fitted Gaussians were wider than the velocity resolution, had an integrated signal–to–noise in excess of five, and the reduced chi–squared of the two–Gaussian fit was below that of the single Gaussian fit. If these requirements were not satisfied, the single Gaussian fit was adopted. Note that the amplitude of a fitted Gaussian was allowed to be negative to detect absorption features (although none were observed).
3. This process was repeated using 3, 4, 5, etc. Gaussians.
In this way, it was possible to search for high velocity gas components, and structures in the emission line gas inconsistent with being a single velocity component (e.g. see 3C324; Figure 10). For each extracted Gaussian, the integrated emission line flux was calculated, as was the velocity relative to that at the centre of the galaxy and the FWHM of the emission line. The last of these was deconvolved by subtracting in quadrature the instrumental FWHM, as determined from unblended sky lines. The errors on each of these three parameters were also determined. It should be noted that a Gaussian did not always provide an ideal fit to the velocity profile, for example with profiles showing slight wings in either the blue or red direction, perhaps associated with a weaker emission component at a different velocity that was too faint to be individually distinguished.
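A minimal Python sketch of such an iterative fitting loop, loosely mirroring the acceptance criteria described above, is given below; the function, its starting guesses and the rough signal–to–noise estimate are our own illustration, not the routines actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(x, *p):
    """Constant continuum plus a sum of Gaussians; p = (cont, amp1, cen1, sig1, amp2, ...)."""
    y = np.full_like(x, p[0], dtype=float)
    for amp, cen, sig in zip(p[1::3], p[2::3], p[3::3]):
        y = y + amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)
    return y

def fit_line_profile(wave, flux, err, fwhm_instr, max_comp=5, snr_min=5.0):
    """Fit 1..max_comp Gaussians (1-D numpy arrays), keeping the most complex fit
    in which every component is broader than the instrumental FWHM, every component
    has integrated S/N above snr_min, and the reduced chi-squared keeps improving."""
    dlam = wave[1] - wave[0]
    best = None
    for n in range(1, max_comp + 1):
        p0 = [np.median(flux)]
        for i in range(n):
            p0 += [flux.max(), wave[0] + (i + 1) * (wave[-1] - wave[0]) / (n + 1),
                   fwhm_instr / 2.3548]
        try:
            popt, _ = curve_fit(multi_gauss, wave, flux, p0=p0, sigma=err,
                                absolute_sigma=True, maxfev=10000)
        except RuntimeError:
            break
        chi2red = np.sum(((flux - multi_gauss(wave, *popt)) / err) ** 2) / (wave.size - len(popt))
        noise_int = np.sqrt(np.sum(err ** 2)) * dlam        # rough error on an integrated flux
        ok = True
        for amp, cen, sig in zip(popt[1::3], popt[2::3], popt[3::3]):
            flux_int = amp * abs(sig) * np.sqrt(2.0 * np.pi)
            if abs(sig) * 2.3548 <= fwhm_instr or abs(flux_int) / noise_int <= snr_min:
                ok = False
        if not ok or (best is not None and chi2red >= best[1]):
            break
        best = (popt, chi2red)
    if best is None:
        return None
    # deconvolve the instrumental resolution from each FWHM in quadrature
    comps = []
    for amp, cen, sig in zip(best[0][1::3], best[0][2::3], best[0][3::3]):
        fwhm = np.sqrt(max((2.3548 * abs(sig)) ** 2 - fwhm_instr ** 2, 0.0))
        comps.append((amp * abs(sig) * np.sqrt(2.0 * np.pi), cen, fwhm))
    return comps   # (integrated flux, centre, deconvolved FWHM) per component
```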
The variation of the intensity, velocity and FWHM of the \[OII\] 3727 line emission with spatial location along the slit are presented in Figures 2 to 15(d to f). The large–scale variations in these three parameters measured here agree extremely well with those determined from lower spatial and spectral resolution data by McCarthy et al. \[McCarthy et al. 1996\] for the seven galaxies in common between the two samples; the only significant exception is 3C324, where the higher spectral resolution of the current data has shown that a single Gaussian component is clearly not sufficient to describe the velocity structure. Note also that the surface brightnesses of the \[OII\] emission line determined from these spectra are comparable to those measured from the same region in narrow–band imaging of this emission line by McCarthy et al. \[McCarthy et al. 1995\].
Important features of the emission line properties of individual galaxies are discussed briefly below. A full discussion of the continuum morphologies of these sources can be found in Best et al. \[Best et al. 1997\], and is not repeated here except where of direct relevance.
3C22 has been identified as possessing a significant quasar component on the basis of a broad H$`\alpha `$ emission line and its high luminosity and nucleated appearance in the K–band \[Dunlop & Peacock 1993, Rawlings et al. 1995, Economou et al. 1995\]. The emission line properties observed here, however, are by no means extreme (Figure 2). The \[OII\] line emission is confined to approximately the inner 2 arcsecond ($`17`$ kpc) radial distance along the slit (see also McCarthy et al. 1995) and is consistent with velocity variations $`\mathrm{}<100`$ km s <sup>-1</sup>. The FWHM seen for this line is high (700 to 800 km s <sup>-1</sup>) but not exceptional with respect to the rest of the sample. The ratio of emission line fluxes seen from this galaxy are intermediate within the sample, and similar to the combined radio galaxy spectrum of McCarthy \[McCarthy 1993\]. The continuum emission at rest–frame wavelengths $`\mathrm{}<3000`$Å is somewhat bluer than average.
3C217 possesses by far the highest equivalent width \[OII\] 3727 line emission of all of the galaxies in this sample. This intense line emission is relatively compact (Figure 3; see also the narrow band \[OII\] 3727 image of Rigler et al. 1992) and confined to the inner 2–3 arcsec radius, in the region in which the HST images also show very luminous and blue rest–frame ultra–violet emission \[Best et al. 1997\]. The \[OII\] line shows a large velocity dispersion and a complex velocity profile, but with only small ($`\mathrm{}<200`$ km s <sup>-1</sup>) variations along the slit in the mean velocity. Relative to the other galaxies in the sample, the lower ionisation lines are strong in the spectrum of this object.
3C226 shows a smooth, regular emission line gas profile (Figure 4). The \[OII\] 3727 emission line shows a clear intensity peak at the centre of the galaxy, extended slightly to the north–west (see also the narrow–band image of McCarthy et al. 1995) where the HST image shows a faint blue knot of emission \[Best et al. 1997\]. The relative velocity plot is consistent with a simple rotating halo; it may instead represent material infalling or outflowing along the radio axis, although the smooth slope of the velocity profile would be surprising in that case. The FWHM of the line profile is low relative to the other sources in the sample and fairly constant along the slit. At any given location along the slit, however, the dispersion in the line velocities is far greater than the mean offset velocity of the emission line at that location, indicating that whether the relative velocity plot represents a mean rotational motion or if it arises through inflow or outflow of material, then there is considerable scatter in the emission line cloud velocities relative to these mean motions. The emission line ratios of 3C226 are fairly typical for the sample; the continuum emission in the blue–arm spectrum is redder than the average.
3C247 has line emission extending for over 10 arcseconds along the radio axis (Figure 5; see also McCarthy et al. 1995). The inner approximately 2 arcsec radius of the line emission is almost symmetrical, with an intermediate velocity FWHM and a velocity profile which again may be consistent with a mean rotational motion or with infall / outflow of material. Further to the north–east there is a smooth transition into a region of \[OII\] emission redshifted by 150 km s <sup>-1</sup>and with a lower velocity width. This second region has an associated continuum object, and it seems likely that what is seen here is an interaction of the radio galaxy with a companion. The radio galaxy itself shows a significant 4000Å break of strength $`1.61\pm 0.06`$; bearing in mind that the true strength of this break is diluted by aligned continuum emission, the host galaxy must contain a well–evolved stellar population. A strong CaK 3933Å absorption feature is readily apparent in the red–arm spectrum (Figure 5b).
3C252 shows line emission extended over only a few arcseconds, with a smooth velocity profile again representing simple rotation or infall / outflow (Figure 6). The galaxy has the lowest velocity FWHM of any source in the sample, although still with a velocity dispersion significantly greater than the mean relative velocities. The integrated \[OII\] 3727 flux is amongst the lowest in the sample, but many of the other emission lines are strong by comparison.
3C265 is an extreme radio galaxy in both its continuum and emission line properties. More than a magnitude brighter at optical wavelengths than other radio galaxies at the same redshift, its continuum emission is composed of a large number of components extending over 80 kpc (10 arcsec) with a remarkably blue colour (Figure 7a); its \[OII\] 3727 emission shows a similar, or even greater, extent (Figure 7; see also Tadhunter 1991; Rigler et al. 1992; McCarthy et al. 1995, 1996; Dey & Spinrad 1996). From the galaxy centre the emission extends a considerable distance to the north–west with a fairly flat velocity profile and decreasing velocity width. The ‘blob’ of line emission offset 9 arcsec to the north–west of the galaxy centre is associated with a continuum emission region (e.g. see Best et al. 1997). The properties of the \[OII\] 3727 emission line are also observed with lower signal–to–noise in weaker emission lines.
Tadhunter \[Tadhunter 1991\] reported the presence of high velocity gas components to the south–east of the nucleus, with velocities of +750 and +1550 km s <sup>-1</sup> with respect to the velocity at the continuum centroid, although these were not obvious in the data of Dey & Spinrad \[Dey & Spinrad 1996\] nor of McCarthy et al. \[McCarthy et al. 1996\]. The current data confirm the presence of the +750 km s <sup>-1</sup>component, the slightly higher velocity measured here being due to a small offset between the continuum centroid position determined here and that of Tadhunter. This component is also detected clearly in the \[NeIII\] 3869 line. The +1550 km s <sup>-1</sup> component is, however, not detected; this may be due to the difference in slit position angle of the two observations (136 versus 145) and/or the use of a narrower slit in the current observations. The origin of the high velocity component is almost certainly related to the radio source activity \[Tadhunter 1991\].
3C280 has a complex emission line structure extending over 11 arcsec (90 kpc; Figure 8). The emission shows a strong central peak together with a large extension to the east where it forms a loop around the eastern radio lobe \[Rigler et al. 1992, McCarthy et al. 1995\]. This loop of emission is redshifted with respect to the velocity at the continuum centroid by about 500 km s <sup>-1</sup>. The FWHM of the \[OII\] 3727 emission is moderately high and almost constant throughout the entire extent of the emission.
3C289 shows a central peak of line emission, with a secondary emission region a couple of arcseconds to the south–east (Figure 9; see also Rigler et al. 1992), corresponding to a faint emission region on the HST image of Best et al. \[Best et al. 1997\]. Both the integrated \[OII\] 3727 emission line intensity and the FWHM of the emission line are relatively low for the sample. The velocity profile could represent rotation or infall / outflow of material. A weak CaK 3933Å absorption line may be present in the red–arm spectrum.
3C324 has previously been described as showing a velocity shear of 700 km s <sup>-1</sup>along the radio axis \[Spinrad & Djorgovski 1984, McCarthy et al. 1996\], but the higher spectral and spatial resolution data presented in Figure 10 clearly indicate that that is not the case. The emission line gas is composed of two distinct components, with velocities separated by $`800`$ km s <sup>-1</sup>. At the position corresponding to the continuum centroid, the two velocity components overlap; the adoption of the mean of these two as the true redshift of the system is necessarily uncertain, and the possibility that the true centre of 3C324 lies coincident with either of the components determined here to be at +400 and $``$400 km s <sup>-1</sup> cannot be excluded. The western emission line component is slightly more luminous and has the higher FWHM, reaching over 1000 km s <sup>-1</sup>; note that the dissociation of the central emission into two separate components means that a FWHM as high as 1500 km s <sup>-1</sup>, determined by McCarthy et al. \[McCarthy et al. 1996\] for the blended pair, is not measured here. It is unclear whether these two emission line regions represent different physical systems, perhaps undergoing a merger, or whether radial acceleration by radio jet shocks is responsible for the bimodality of the emission line velocities.
This two component structure of the emission line properties of 3C324 reflects the structure of its optical–UV continuum emission \[Longair et al. 1995, Dickinson et al. 1996\]. The HST images show bright emission regions to the east and west, but a central minimum corresponding to the radio core position and interpreted as extinction by a central dust lane. Narrow–band images of the \[OII\] 3727 emission line also show an elongated clumpy morphology \[Hammer & Le Fèvre 1990, Rigler et al. 1992\]. Cimatti et al. \[Cimatti et al. 1996\] showed that the polarisation properties of the emission to the east and the west of the nucleus also differ strongly.
3C340 is another radio galaxy whose emission line structure is smooth and well–ordered (Figure 11). The relative velocity plot is consistent with simple rotation or infall or outflow of material, and the line widths are the second lowest in the sample. The emission is centrally concentrated, with a small (2 to 3 arcsec) extension along the radio axis to the west (see also the narrow–band image of McCarthy et al. 1995). The integrated \[OII\] 3727 intensity is relatively low, with the emission line ratios in the spectrum indicating a very high ionisation state. The galaxy shows a significant 4000Å break ($`1.52\pm 0.07`$), a broad CaK 3933Å absorption feature, and a red colour for its short wavelength continuum emission.
3C352 shows an elongated \[OII\] 3727 emission region extending for 10 arcseconds, and possibly further since the presence of a bright star to the north prohibits the detection of any further line emission in that direction. The velocity profile is smooth throughout the central regions of the source with a velocity shear exceeding 700 km s <sup>-1</sup>, but distorts somewhat at larger distances (Figure 12). The FWHM is large, reaching over 1000 km s <sup>-1</sup>. These results are consistent with those of Hippelein and Meisenheimer \[Hippelein & Meisenheimer 1992\] from Fabry–Perot imaging. Relative to the rest of the sample, the lower ionisation lines are strong in the spectrum. A broad CaK absorption feature can be seen at 3933Å.
3C356 has long been a puzzle, with two equally bright infrared galaxies separated by about 5 arcsec corresponding to the location of two radio core–like features. The identification of the true nucleus has been a matter of some debate, with different authors favouring the northern or the southern galaxy for different reasons (see Best et al 1997 for a more complete discussion). For the current data, the slit was placed to include both components, with zero offset corresponding to the location of the northern galaxy (see Figure 13). As is observed for the continuum emission (e.g. Rigler et al. 1992, Best et al. 1997), the line emission from the northern region is compact whilst that from the southern region is more extended but gives a comparable integrated intensity (see also Lacy & Rawlings 1994, McCarthy et al. 1996). The northern region shows virtually no variation in its velocity with position, and a low velocity width; the southern region, redshifted by about 1200 km s <sup>-1</sup>, shows a steep velocity shear of 500 km s <sup>-1</sup> in 3 arcsec and a slightly broader FWHM.
3C368 is easily the best–studied galaxy in this sample (e.g. Hammer et al. 1991, Meisenheimer & Hippelein 1992, Rigler et al. 1992, Dickson et al. 1995, Longair et al. 1995, McCarthy et al. 1995, 1996, Stockton et al. 1996, Best et al. 1997, 1998a to name only the most recent), showing a highly elongated morphology in both its continuum and line emission extending about 10 arcsec (Figure 14), comparable to the extent of the radio source. The velocity structure of the line emission determined here is in full agreement with the previous lower spectral resolution measurements and the Fabry–Perot imaging of Meisenheimer & Hippelein \[Meisenheimer & Hippelein 1992\], the northern knots showing a velocity offset of order 600 km s <sup>-1</sup> and an extreme FWHM (up to 1350 km s <sup>-1</sup>). Many of the lower ionisation lines in the spectrum appear strong relative to the other galaxies in the sample. Study of the continuum emission is hampered by the presence of a galactic M–dwarf star lying within a couple of arcseconds of the centre of 3C368. \[Hammer et al. 1991\].
3C441 shows, in addition to the \[OII\] 3727 emission from the central galaxy, a secondary region of emission offset 12 arcseconds to the north–west and about 800 km s <sup>-1</sup> redwards in velocity (Figure 15; cf. McCarthy et al. 1996, Lacy et al. 1998). This emission region lies close to the radio hotspot \[McCarthy et al. 1995\] and has been associated with an interaction between the radio jet and a companion galaxy to 3C441 \[Lacy et al. 1998\]. The \[OII\] 3727 emission associated with the host galaxy itself has a low integrated intensity and a smooth velocity gradient of nearly 400 km s <sup>-1</sup> in 5 arcsec, consistent with rotation or with infalling / outflowing gas. A relatively large 4000Å break is observed in the spectrum ($`1.64\pm 0.04`$), together with strong CaK 3933Å absorption, consistent with the fact that this galaxy also shows only a very weak alignment effect at optical–UV wavelengths \[Best et al. 1997\].
Composite spectra have been produced for each of the red and blue arms of the spectrograph, by combining all of the presented spectra at the same rest–frame wavelengths, giving each individual spectrum an equal weighting. 3C368 was excluded from this combined spectrum due to the contribution of the foreground M–star to its emission. The resulting total spectra, shown in Figure 16, are equivalent to single spectra of over 20 hours in duration. In Table 3 are tabulated the relative strengths of the emission lines in this composite spectrum. These are quoted relative to the commonly adopted scale of H$`\beta `$$`=100`$ by assuming H$`\gamma `$/H$`\beta `$$`\approx 0.47`$, appropriate for Case B recombination at T=10000 K \[Osterbrock 1989\]; this value is also consistent with that obtained from the H$`\delta `$ line assuming H$`\delta `$/H$`\beta `$$`\approx 0.26`$.
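Written out, the rescaling to the H$`\beta `$$`=100`$ scale is simply (a restatement of the quoted recombination ratio, shown here only for clarity):

$$\frac{F_{\mathrm{line}}}{F_{\mathrm{H}\beta }}\times 100=\frac{F_{\mathrm{line}}}{F_{\mathrm{H}\gamma }}\cdot \frac{F_{\mathrm{H}\gamma }}{F_{\mathrm{H}\beta }}\times 100\approx 47\cdot \frac{F_{\mathrm{line}}}{F_{\mathrm{H}\gamma }}.$$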
One feature is immediately apparent when comparing these relative line fluxes with those from the composite spectrum of radio galaxies with redshifts $`0.1<z<3`$ constructed by McCarthy \[McCarthy 1993\]: the emission lines at short wavelengths are less luminous by factors of 2 to 4, relative to H$`\beta `$, than those of McCarthy’s spectrum. This may be due to the wide range of redshifts of the radio galaxies making up McCarthy’s composite and the strong correlation between emission line flux and redshift \[Rawlings & Saunders 1991\]; the shortest wavelength lines in his composite spectrum are only observed in the highest redshift sources (with powerful line emission) whilst the H$`\beta `$ line is seen in the lower redshift sources, introducing a bias towards lines at shorter rest–frame wavelengths appearing more luminous. The composite spectra presented in Figure 16 and Table 3 are much less prone to this bias, and so provide a fairly accurate measure of relative line fluxes at redshift $`z1`$.
Besides the emission lines, other features visible in the spectra include the broad CaK absorption feature at 3933Å, with an equivalent width of $`10\pm 2`$Å, and a weaker G–band absorption at 4300Å with an equivalent width of $`7\pm 3`$Å. A 4000Å break is marginally visible, but there is little evidence for the spectral breaks at 2640 and 2900Å (cf. Spinrad et al. 1997) expected from an old stellar population. This is not too surprising since the contribution from the old stars to the total flux density at these wavelengths, and indeed throughout all of the combined blue arm spectrum, is small compared to that of the aligned emission.
## 4 Conclusions
Extremely deep spectroscopic observations have been presented of an unbiased sample of the most powerful radio galaxies with redshifts $`z1`$. A broad range of emission lines is seen and a study at intermediate spectral resolution of the two–dimensional velocity structures of the emission line gas are presented. The enhanced sensitivity of new CCDs at short wavelengths has enabled the measurement of emission line ratios and continuum flux densities at unprecedentedly short wavelengths, $`\lambda \mathrm{}<3500`$Å, corresponding to the near–UV in the rest–frame of the sources where any continuum contribution from an evolved stellar population will be negligible.
The main results can be summarised as follows:
* Analysis of the velocity structures of these galaxies shows them to exhibit a wide range of kinematics. Some sources have highly distorted velocity profiles and velocity FWHM exceeding 1000 km s <sup>-1</sup>. Other sources have lower velocity dispersions and more ordered emission line profiles, with the variation of mean velocity along the slit being consistent with simple rotation. Even in these latter sources, however, the velocity FWHM are still a few hundred km s <sup>-1</sup>, significantly larger than the variations in mean velocities, indicating that there is considerable scatter in the emission line cloud velocities relative to any mean rotational motion.
* A high velocity ($`750`$ km s <sup>-1</sup>) gas component is confirmed close to the nucleus of 3C265. This is unique amongst the sample, but other galaxies display gas with velocities $`400`$ km s <sup>-1</sup> offset a few arcseconds from the centre, either connected to the central emission line region (3C280, 3C352, 3C368) or as a discrete region (3C356, 3C441).
* 3C324 is shown to consist of two kinematically distinct components separated in velocity by 800 km s <sup>-1</sup>.
* For those galaxies in which the alignment effect is seen to be relatively weak in the HST images, and hence the spectra are not dominated by emission from these alignment processes, 4000Å breaks from evolved stellar populations are clearly visible. CaK absorption features are also readily apparent in a number of the spectra.
* At rest–frame wavelengths shortward of $`2500`$Å the continuum emission of the galaxies is, on average, relatively flat in $`f_\lambda `$, although considerable source to source variations are seen both in these continuum colours and in the emission line ratios.
* A composite spectrum gives the relative strengths of the emission lines at rest–frame wavelengths between HeII 1640 and \[OIII\] 4363. Emission lines at short rest–frame wavelengths are systematically weaker (relative to H$`\beta `$) than those in the composite spectrum of McCarthy \[McCarthy 1993\]. It is suspected that this is due to a bias introduced in McCarthy’s spectrum by the emission line strength versus redshift correlation, and the large redshift coverage of the radio galaxies which comprise his sample.
The broad variation in kinematical and ionisation properties within the sample as a whole are investigated and compared against other radio source properties in the accompanying Paper 2, and conclusions are drawn there concerning the origin of the ionisation and kinematics of the emission line gas.
## Acknowledgements
This work was supported in part by the Formation and Evolution of Galaxies network set up by the European Commission under contract ERB FMRX–CT96–086 of its TMR programme. The William Herschel Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. We thank the referee, Mike Dopita, for his careful consideration of the original manuscript and a number of useful suggestions.
# Photoproduction of mesons in nuclei at GeV energies

(Work supported by DFG and BMBF)
## I Introduction
Photoproduction of mesons in nuclei offers the possibility to study the interaction of photons with nucleons in the nuclear medium. In the total photonuclear absorption cross section one observes experimentally for photon energies above 1 GeV a reduction of the absorption strength in nuclei which is known as the shadowing effect. While Vector Meson Dominance (VMD) models are able to quantitatively describe this effect for photon energies above about 4 GeV, they usually underestimate the effect for lower photon energies (see e.g. Ref. ). In Ref. it has been speculated that this may be related to a decrease of the $`\rho `$-meson mass in the nuclear medium.
The in-medium properties of the $`\rho `$-meson have found widespread interest during the past decade as they may be related to chiral symmetry . The experimental data on dilepton production in nucleus-nucleus collisions at SPS energies also seem to indicate a change of the $`\rho `$-meson spectral function (lowering of the mass or broadening) in the nuclear medium .
A third experimental observation that may be related to in-medium properties of the $`\rho `$-meson is the disappearance of the $`D_{13}(1520)`$-resonance in the total photoabsorption cross section in nuclei at photon energies around 800 MeV. In Refs. it has been proposed that this disappearance is caused by the large coupling of the $`D_{13}`$ to the $`N\rho `$-channel and a medium modification of the spectral function of the $`\rho `$-meson.
While a consistent theoretical description of these observations is not yet available, our transport approach is a first step in this direction. During the past years we have developed a semi-classical BUU transport model that allows us to calculate inclusive particle production in heavy-ion collisions from 200 AMeV to 200 AGeV, as well as in photon- and pion-induced reactions, with the very same physical input. This model has already been applied very successfully to the description of heavy-ion collisions at SIS energies and photoproduction of pions and etas up to 800 MeV . Just recently we have given predictions for dilepton production in pion-nucleus reactions , which will be measured by the HADES collaboration at GSI , and in photon-nucleus reactions in the energy range from 800 MeV to 2.2 GeV that is accessible at TJNAF . In these studies we have investigated directly observable consequences of medium modifications of the vector mesons through their dilepton decay.
The calculation of photoproduction in the region of the nucleon resonances (up to photon energies of about 1 GeV) has recently been extended to electroproduction . In this study we have also discussed different scenarios that might lead to a disappearance of the $`D_{13}`$ in the total photoabsorption cross section with respect to their influence on more exclusive observables.
It is the purpose of the present paper to give predictions for pion, eta, kaon, and antikaon production in photon-nucleus reactions in the energy range from 1 to 7 GeV. A comparison of these calculations with experiments that are possible at ELSA and TJNAF will on the one hand help to improve our understanding of the onset of the shadowing effect. On the other hand, photoproduction is an excellent tool to study medium modifications of the produced mesons, since the final observables are strongly modified by the final state interaction of the primarily produced particles with the nuclear medium. In photon-nucleus reactions the produced particles have in general larger momenta with respect to the nuclear environment than in heavy-ion collisions. Therefore, photonuclear experiments yield partly complementary information about the in-medium self energies of mesons compared to heavy-ion experiments. They also have the great advantage that they allow one to study the properties of the produced hadrons in an environment that is much closer to equilibrium than in a heavy-ion collision.
While the early seventies already saw a remarkable number of experimental and theoretical studies of photoinduced particle production (for a rather complete review see Ref. ), the emphasis there was on the study of coherent production processes. Incoherent production had to rely for an interpretation on the Glauber approximation, which involves a number of approximations and restrictions. The study described here is free of most of these and thus provides a more reliable framework for an interpretation of such reactions. Unfortunately, the few existing experimental data in Refs. on incoherent photoproduction of mesons in nuclei were all obtained under very restrictive experimental conditions which, furthermore, cannot be reconstructed from the literature. We therefore cannot compare our results to any data.
Our paper is organized as follows: In Section II we describe briefly the BUU transport model. In Section III we discuss our treatment of elementary photon-nucleon collisions and, in particular, the implementation of the shadowing effect. Our results for photoproduction of pions, etas, kaons, and antikaons are presented in Section IV. We close with a summary in Section V.
## II The BUU model
We use a semi-classical transport model (BUU) for a description of the final state interactions (fsi) of the produced particles. This description allows a full coupled channel treatment of the fsi, including non-forward processes. Our model has been described in Ref. . Therefore we restrict ourselves here to a brief presentation of the basic ideas and only discuss the essential new features of our method in detail.
The BUU equation describes the classical time evolution of a many-particle system under the influence of a self-consistent mean field potential and a collision term. For the case of identical particles it is given by:
$$\left(\frac{\partial }{\partial t}+\frac{\partial H}{\partial \stackrel{}{p}}\frac{\partial }{\partial \stackrel{}{r}}-\frac{\partial H}{\partial \stackrel{}{r}}\frac{\partial }{\partial \stackrel{}{p}}\right)f=I_{coll}[f],$$
(1)
where $`f(\stackrel{}{r},\stackrel{}{p},t)`$ denotes the one-particle phase space density with $`\stackrel{}{r}`$ and $`\stackrel{}{p}`$ being the spatial and momentum coordinates of the particle. $`I_{coll}`$ is the collision term and $`H(\stackrel{}{r},\stackrel{}{p},f)`$ stands for the single particle mean field Hamilton function which, in our numerical realization , is given as:
$$H=\sqrt{(\mu +S)^2+\stackrel{}{p}^2},$$
(2)
where $`S(\stackrel{}{r},\stackrel{}{p},f)`$ is an effective scalar potential. For a system of non-identical particles one gets an equation for each particle species that is coupled to all others by the collision term and/or the mean field potential. Besides the nucleon we take into account all nucleonic resonances that are rated with at least 2 stars in Ref. : $`P_{33}`$(1232), $`P_{11}`$(1440), $`D_{13}`$(1520), $`S_{11}`$(1535), $`P_{33}`$(1600), $`S_{31}`$(1620), $`S_{11}`$(1650), $`D_{15}`$(1675), $`F_{15}`$(1680), $`P_{13}`$(1879), $`S_{31}`$(1900), $`F_{35}`$(1905), $`P_{31}`$(1910), $`D_{35}`$(1930), $`F_{37}`$(1950), $`F_{17}`$(1990), $`G_{17}`$(2190), $`D_{35}`$(2350). The resonances couple to the following channels: $`N\pi `$, $`N\eta `$, $`N\omega `$, $`\mathrm{\Lambda }K`$, $`\mathrm{\Delta }(1232)\pi `$, $`N\rho `$, $`N\sigma `$, $`N(1440)\pi `$, $`\mathrm{\Delta }(1232)\rho `$.
We also propagate explicitly the following baryonic resonances with total strangeness $`S=-1`$: $`\mathrm{\Lambda }`$, $`\mathrm{\Sigma }`$, $`\mathrm{\Sigma }(1385)`$, $`\mathrm{\Lambda }(1405)`$, $`\mathrm{\Lambda }(1520)`$, $`\mathrm{\Lambda }(1600)`$, $`\mathrm{\Sigma }(1660)`$, $`\mathrm{\Lambda }(1670)`$, $`\mathrm{\Sigma }(1670)`$, $`\mathrm{\Lambda }(1690)`$, $`\mathrm{\Sigma }(1750)`$, $`\mathrm{\Sigma }(1775)`$, $`\mathrm{\Lambda }(1800)`$, $`\mathrm{\Lambda }(1810)`$, $`\mathrm{\Lambda }(1820)`$, $`\mathrm{\Lambda }(1830)`$, $`\mathrm{\Lambda }(1890)`$, $`\mathrm{\Sigma }(1915)`$, $`\mathrm{\Sigma }(2030)`$, $`\mathrm{\Lambda }(2100)`$, $`\mathrm{\Lambda }(2110)`$. The parameters of these resonances are consistent with the values given by the PDG and are listed in Table I. The resonances couple to the channels: $`\mathrm{\Lambda }\pi `$, $`\mathrm{\Sigma }\pi `$, $`\mathrm{\Sigma }(1385)\pi `$, $`\mathrm{\Lambda }\eta `$, $`N\overline{K}^{}(892)`$, $`\mathrm{\Lambda }(1520)\pi `$. The mass dependence of the partial decay widths is treated analogously to the case of the nucleonic resonances (see Ref. ); in cases where the relative angular momentum of the decay products is not uniquely given by the quantum numbers we use the lowest possible one. In the mesonic sector we take into account the following particles: $`\pi `$, $`\eta `$, $`\rho `$, $`\omega `$, $`\sigma `$, $`\varphi `$, $`K`$, $`\overline{K}`$, $`K^{}(892)`$, $`\overline{K}^{}(892)`$.
For a detailed description of the cross sections used in the non-strange sector we refer to Refs. . Baryon-baryon collisions above invariant energies of 2.6 GeV and meson-baryon collisions above 2.2 GeV are described by using the string fragmentation model FRITIOF . For strangeness production in low energy pion-nucleon collisions we adopt the parameterizations for $`\pi N\to \mathrm{\Lambda }K`$ and $`\pi N\to \mathrm{\Sigma }K`$ from Ref. and for $`\pi N\to NK\overline{K}`$ from Ref. . Strangeness production in baryon-baryon collisions is only of minor importance for the calculations presented here and is described in Ref. . The cross sections for the interactions of kaons and antikaons with nucleons can be found in Appendix A.
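As a purely illustrative sketch of the semi-classical propagation implied by Eqs. (1) and (2), the following Python fragment advances one test particle by a single time step using Hamilton's equations; the Woods-Saxon-like scalar potential, its depth, and the step size are placeholder values chosen for this example and are not the ones used in our actual BUU code.

```python
import numpy as np

def scalar_pot(r, S0=-0.05, R=6.6, a=0.55):
    """Toy attractive scalar potential (GeV) with a Woods-Saxon shape (illustrative only)."""
    return S0 / (1.0 + np.exp((np.linalg.norm(r) - R) / a))

def hamiltonian(r, p, mu=0.938):
    """Single-particle energy H = sqrt((mu + S)^2 + p^2) of Eq. (2)."""
    return np.sqrt((mu + scalar_pot(r))**2 + np.dot(p, p))

def propagate(r, p, dt=0.2, eps=1e-4):
    """One Euler step of dr/dt = dH/dp, dp/dt = -dH/dr (gradients by central differences)."""
    dHdp = np.array([(hamiltonian(r, p + eps*e) - hamiltonian(r, p - eps*e)) / (2*eps)
                     for e in np.eye(3)])
    dHdr = np.array([(hamiltonian(r + eps*e, p) - hamiltonian(r - eps*e, p)) / (2*eps)
                     for e in np.eye(3)])
    return r + dt*dHdp, p - dt*dHdr

r, p = np.array([0.0, 0.0, -8.0]), np.array([0.0, 0.0, 0.3])   # fm and GeV
r, p = propagate(r, p)
```

In the full calculation this propagation is of course carried out for a large ensemble of test particles and is interleaved with the Monte-Carlo evaluation of the collision term $`I_{coll}`$.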
## III Photon-nucleon interaction
In Refs. we have described in detail how we calculate photonuclear reactions. The total cross section for any observable is given as an incoherent sum over the contributions from all nucleons in the nucleus, where the final state interactions of the particles produced in the primary $`\gamma N`$ collisions are calculated using the transport equation (1). For invariant energies below 2.1 GeV (corresponding to $`E_\gamma =1.88`$ GeV on a free nucleon at rest) we describe elementary $`\gamma N`$ collisions as in Ref. by an explicit calculation of the cross sections for production of nucleonic resonances as well as one-pion, two-pion, eta, vector meson, and strangeness production. For larger energies we use the string fragmentation model FRITIOF , where we initialize a zero mass $`\rho ^0`$-meson for the photon following a VMD picture. In Fig. 1 we show that this procedure gives an excellent description of charged particle multiplicities in photon-proton collisions. The agreement seen there is better than could be expected from a model that had been developed for applications at higher energies. However, we do not expect the Lund model to give correct predictions for all specific channels, especially with respect to isospin. Since flavor exchange mechanisms are not included in the Lund model, processes like, e.g., $`\rho N\to \mathrm{\Lambda }K`$ are not possible. Therefore we treat exclusive strangeness production independently of the Lund model also for energies above the string threshold. The cross sections used for exclusive strangeness production are discussed in Appendix B.
Nuclear shadowing of the incoming photon is taken into account for photon energies above 1 GeV by adopting the model of Ref. in the following way. For the total photon-nucleus cross section we have:
$$\sigma _{\gamma A}=A\sigma _{\gamma N}-\int d^3r\,\rho (\stackrel{}{r})\,S(\stackrel{}{r})\equiv A_{eff}\,\sigma _{\gamma N},$$
(3)
where $`\rho (\stackrel{}{r})`$ denotes the nuclear density and $`S(\stackrel{}{r})`$ is given as:
$$S(b,z)=8\pi \underset{V}{\sum }\left(T_{\gamma V}\right)^2\mathrm{Im}\left\{i\int _{-\infty }^{z}dz^{\prime }\,\rho (b,z^{\prime })\,\mathrm{exp}\left[iq_{\parallel }^V(z^{\prime }-z)+2i\sqrt{\pi }\,T_{VV}\int _{z^{\prime }}^{z}\rho (b,\xi )\,d\xi \right]\right\},$$
(4)
where we have expressed everything in cylindrical coordinates $`(b,z)`$ with the photon momentum along the $`z`$-axis. The momentum transfer $`q_{\parallel }^V`$ is:
$$q_{\parallel }^V=E_\gamma -\sqrt{E_\gamma ^2-M_V^2},$$
(5)
with $`E_\gamma `$ and $`M_V`$ denoting the photon energy and the mass of the vector meson V, respectively. In Eq. (4) $`T_{ab}`$ is the amplitude for the process $`aN\to bN`$. We used all parameters from Ref. (model I), taking into account $`\rho `$, $`\omega `$ and $`\varphi `$ mesons.
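For orientation, the shadowing term of Eq. (4) can be evaluated numerically in a few lines. The sketch below does this for a single vector meson with a sharp-sphere density; the amplitudes $`T_{\gamma V}`$ and $`T_{VV}`$ and all numerical values used are placeholders chosen for illustration and are not the fitted parameters of the model adopted above.

```python
import numpy as np

def S_factor(b, z, Egamma, rho0=0.16, R=7.1, MV=0.776, TgV=0.006, TVV=1.0 + 0.25j):
    """Shadowing term S(b,z) of Eq. (4) for one vector meson (illustrative parameters)."""
    rho = lambda zz: rho0 * (b**2 + zz**2 < R**2)           # sharp-sphere nuclear density
    qV = (Egamma - np.sqrt(Egamma**2 - MV**2)) / 0.197      # Eq. (5), converted GeV -> 1/fm
    zprimes = np.linspace(-R, z, 400)
    vals = []
    for zp in zprimes:
        xi = np.linspace(zp, z, 100)
        phase = 1j * qV * (zp - z) + 2j * np.sqrt(np.pi) * TVV * np.trapz(rho(xi), xi)
        vals.append(rho(zp) * np.exp(phase))
    inner = np.trapz(np.array(vals), zprimes)
    return 8.0 * np.pi * TgV**2 * np.imag(1j * inner)

print(S_factor(b=0.0, z=5.0, Egamma=7.0))
```

Summing such contributions over the $`\rho `$, $`\omega `$ and $`\varphi `$ mesons and inserting the result into Eq. (3) gives $`A_{eff}`$.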
In Figs. 2 and 3 the resulting shadowing effect is compared to experimental data as function of photon energy for <sup>12</sup>C and <sup>208</sup>Pb. Within the experimental error bars the agreement is quite satisfactory.
We can define a shadowing factor $`s_N(\stackrel{}{r})`$ for an in-medium photon-nucleon cross section:
$$s_N(\stackrel{}{r})=1-\frac{S(\stackrel{}{r})}{\sigma _{\gamma N}},$$
(6)
so that we can write the total photon-nucleus cross section as an integral over in-medium shadowed single nucleon cross sections:
$$\sigma _{\gamma A}=\int d^3r\,\rho (\stackrel{}{r})\,s_N(\stackrel{}{r})\,\sigma _{\gamma N}.$$
(7)
In Fig. 4 we show the shadowing factor $`s_N(\stackrel{}{r})`$ for <sup>208</sup>Pb at photon energies of 2 and 7 GeV. At 2 GeV there are some interference structures but since $`s_N`$ varies only between 0.8 and 1.2 the influence of shadowing on our calculations for particle production is very small. At 7 GeV shadowing is much more important. Due to interference effects one sees a rise of $`s_N`$ for larger $`z`$ that one naively would not expect.
In our calculations the same shadowing factor $`s_N(\stackrel{}{r})`$ is used for all partial cross sections, so that, for example, the primary in-medium cross section $`\sigma _{\gamma N\to Nm}^{med}`$ for photoproduction of a meson m in $`\gamma N\to Nm`$ is related to the vacuum cross section $`\sigma _{\gamma N\to Nm}^{vac}`$ via:
$$\sigma _{\gamma N\to Nm}^{med}=s_N(\stackrel{}{r})\,\sigma _{\gamma N\to Nm}^{vac}.$$
(8)
The final state interactions are then treated in the transport-theoretical framework described in Section II. The initial state interactions of the incoming photon are thus described in a model that contains quantum-mechanical coherence effects, whereas the final state interactions are treated in a completely incoherent way.
## IV Results
### A Photoproduction of pions and etas
In Fig. 5 we present the results of our calculations for the total $`\pi ^{}`$ production cross section in <sup>12</sup>C (upper part) and <sup>208</sup>Pb (lower part). The solid lines display our ’standard’ calculations. One sees that the cross section per nucleon is larger for <sup>12</sup>C because nuclear shadowing and final state interaction are less effective. For <sup>12</sup>C there is also a local maximum around 2 GeV in the excitation function that is not present in <sup>208</sup>Pb. This maximum is mainly caused by an interplay of the rising $`\pi `$-production cross section and the onset of shadowing. For <sup>208</sup>Pb the decrease between 2 and 3 GeV due to shadowing is neutralized by secondary collisions of the primary produced particles that contribute to the total yield.
For the dashed lines we used a two-body absorption for the $`\mathrm{\Delta }(1232)`$-resonance only instead of the absorption introduced in Ref. that gives an enhanced $`\mathrm{\Delta }`$-absorption. With increasing photon energy the treatment of the $`\mathrm{\Delta }`$-absorption becomes less important. For <sup>12</sup>C the improved, new treatment reduces the pion yield by about 5%, for <sup>208</sup>Pb by about 20%.
The use of a string fragmentation model in hadronic transport models always requires the introduction of a finite ’formation time’. In our calculations we use $`t_f=0.8`$ fm/c (in the rest frame of the produced particles) which leads to a good reproduction of experimental data on pion production in heavy-ion collisions at SPS energies. While this value lies in a reasonable range, our formation time prescription is nonetheless questionable since we neglect any interaction of the ’strings’ with the surrounding nuclear medium during their formation. From a practical point of view this prescription should thus be regarded as a parameterization of our ignorance with respect to the role of partonic degrees of freedom. In order to explore the resulting uncertainties we have also performed calculations with $`t_f=0`$ (dotted lines in Fig. 5). This enhances the final state interactions of the produced particles. Here one should note that the final state interactions do not only reduce the particle yield but can also enhance it when a primarily produced particle with high energy strikes another nucleon, as e.g. in $`\pi N\to \pi \pi N`$. From Fig. 5 one sees that the formation time becomes more important with increasing photon energy. For <sup>12</sup>C the calculation with zero formation time gives an enhancement of the pion yield at 7 GeV by about 15%. This enhancement is less pronounced in <sup>208</sup>Pb because the absorption of the secondarily produced particles is more effective.
In Fig. 5 we also show results of calculations without shadowing for the incoming photon (dash-dotted lines). Because of the coordinate dependence of the shadowing factor the effect of shadowing is in principle not simply proportional to the effect seen in the total absorption cross section. However, the deviations from this proportionality are quite small as one sees by comparing the results in Fig. 5 to the effective mass numbers in Figs. 2 and 3.
The effect of the final state interaction of the produced particles is, as expected, rather different for <sup>12</sup>C and <sup>208</sup>Pb. In Fig. 5 calculations without final state interaction are displayed by the dot-dot-dashed lines. Now there are only two competing mechanisms that influence the shape of these excitation functions. On the one hand the particle yield in elementary photon-nucleon collisions increases with photon energy. On the other hand the shadowing effect also becomes more important for higher energies. In the case of <sup>208</sup>Pb the shadowing effect dominates and the cross section decreases monotonically with increasing photon energy. For <sup>12</sup>C there is first a decrease of the pion yield up to a photon energy of 3 GeV and then an increase because shadowing is here less important than for <sup>208</sup>Pb.
In Fig. 6 we show the effects of the same scenarios on $`\eta `$ photoproduction. First one observes a strong increase of the total cross section from 1 to 3 GeV by about a factor of 4 which is simply due to the opening of phase space. The effect of the formation time is again determined by an interplay between secondary production and absorption which results in a very small net effect.
The shadowing effect is, as expected, very similar to the case of pion production. For <sup>12</sup>C the calculation without final state interaction gives practically the same result as the ’standard’ calculation. This is quite different for <sup>208</sup>Pb where the final state interaction reduces the yield, in particular for low photon energies, significantly.
Since the total meson production yield is, as discussed above, determined by different effects that partly cancel each other it is instructive to look at more exclusive observables. In Figs. 7 and 8 we therefore present momentum differential cross sections for the production of pions (upper part) and etas (lower part) at photon energies of 2 and 4 GeV in <sup>208</sup>Pb. Here we show our ’standard’ calculations (solid lines) as well as calculations without formation time (i.e. maximum final state interaction) (dashed lines) and without final state interaction (dotted lines). In the pion case the spectrum is getting ’softer’ with increasing final state interactions.
The $`\eta `$-spectra show a pronounced structure at high momenta that is caused by exclusive processes $`\gamma N\to N\eta `$ which are strongly forward peaked. In $`\pi ^{}`$ production such a structure is not present because the string fragmentation model used here does not include flavor exchange mechanisms. We leave the inclusion of such processes for future work. The influence of the final state interaction on the $`\eta `$-spectrum is similar to the pion case: the final state interactions mainly shift the spectrum to lower energies.
In Figs. 9 and 10 we present the results of our calculations for the $`\pi ^+\pi ^{}`$ invariant mass spectra at photon energies of 2, 4, and 6 GeV in <sup>12</sup>C and <sup>208</sup>Pb, respectively. We show in each case the total mass differential cross section as well as the contribution coming from $`\rho ^0`$ decays. The effect of the final state interaction is, as expected, much larger for <sup>208</sup>Pb than for <sup>12</sup>C. While the peak of the $`\rho `$ meson clearly dominates the spectrum in <sup>12</sup>C, it is, especially for low photon energies, harder to identify in <sup>208</sup>Pb.
In Ref. we have investigated the observable effects of medium modifications of the vector mesons $`\rho `$ and $`\omega `$ through their $`e^+e^{}`$ decay in photoproduction at energies between 0.8 and 2.2 GeV. In this study we have found an enhancement of the yield of intermediate mass ($`\approx 500`$ MeV) dileptons by about a factor 3 when the mass of the $`\rho `$ meson was reduced in the nuclear medium according to the predictions of Refs. :
$$\mu ^{}=\mu -0.18\,m_\rho ^0\,\frac{\rho (\stackrel{}{r})}{\rho _0},$$
(9)
where $`m_\rho ^0`$ denotes the pole mass of the $`\rho `$ meson. In Figs. 9 and 10 the dashed lines show the calculations with such a dropping mass scenario. One sees that the $`\pi ^+\pi ^{}`$ spectrum is hardly influenced by such a medium modification. This is simply due to the fact that the pions have a very short mean free path in the nuclear medium ($`1`$ fm). Therefore the probability that two pions which stem from a decay of a $`\rho `$ meson at a relevant density are both able to propagate to the vacuum without rescattering is very low.
### B Photoproduction of kaons and antikaons
In Fig. 11 we present our results for $`K^+`$-production in <sup>12</sup>C and <sup>208</sup>Pb. We again show, as for pion and eta production, the results of different model calculations: a ’standard’ calculation (solid lines) that includes all effects, a calculation with zero formation time (dashed lines), without shadowing (dotted lines), and without final state interaction (dash-dotted lines). The primarily produced $`\overline{s}`$-quarks cannot be annihilated in the nuclear medium and all of them are finally contained in $`K^+`$ and $`K^0`$ mesons. Therefore the final state interaction can only enhance the $`K^+`$ yield. From Fig. 11 one sees that for <sup>12</sup>C the number of secondarily produced $`K^+`$-mesons is almost negligible while for <sup>208</sup>Pb they amount to about 30% of the total yield. In a calculation with zero formation time the cross section for $`K^+`$ production is enhanced for large photon energies in <sup>12</sup>C by about 20% and in <sup>208</sup>Pb by about 40%. This enhancement is mainly caused by collisions of primary high energy pions with nucleons. Such pions have a large formation time $`t_f^{lab}=\gamma t_f`$ in the lab system, where $`\gamma =E_\pi /m_\pi `$ is a Lorentz factor, which suppresses such secondary interactions in the ’standard’ calculation.
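To illustrate the size of this effect with round numbers (chosen here purely for orientation), a primarily produced pion of energy $`E_\pi \approx 4`$ GeV has

$$t_f^{lab}=\gamma t_f=\frac{E_\pi }{m_\pi }\,t_f\approx \frac{4\,\mathrm{GeV}}{0.14\,\mathrm{GeV}}\times 0.8\,\mathrm{fm/c}\approx 23\,\mathrm{fm/c},$$

which exceeds even the diameter of <sup>208</sup>Pb ($`\approx 14`$ fm), so that in the ’standard’ calculation such a pion typically leaves the nucleus before it can rescatter, whereas with $`t_f=0`$ it interacts immediately.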
The shadowing effect is very similar to the one for pion or eta production. The fact that the calculations without shadowing and zero formation time are practically identical for both nuclei is accidental.
In Fig. 12 we show our results for $`K^{}`$-production. In contrast to $`K^+`$-production, primary $`K^{}`$-mesons can be absorbed via processes $`K^{}N\to Y\pi `$. For <sup>12</sup>C absorption and secondary production nearly cancel each other as can be seen by comparing the ’standard’ calculation (solid line) and the calculation without final state interaction (dash-dotted line). The calculation with zero formation time gives a slightly larger result. In <sup>208</sup>Pb the absorption mechanism is a little more important. Therefore, the calculation without final state interaction gives the largest result.
By comparing the total $`K^+`$\- and $`K^{}`$-production in Figs. 11 and 12 one sees that in our calculation almost as many antikaons as kaons are produced. This is due to the fact that the string fragmentation model produces many more antikaons than hyperons in a $`\rho ^0`$-nucleon collision, which might be an artifact of the model. However, since there are presently no experimental data for inclusive antikaon production in photon-nucleon collisions available, it is difficult to check this point. Therefore we stress here that our results for photoproduction in nuclei are only valid under the caveat that our description of the elementary photon-nucleon collision is correct.
In Figs. 13 and 14 we show momentum differential cross sections for production of $`K^+`$\- and $`K^{}`$-mesons in <sup>208</sup>Pb at photon energies of 2 and 4 GeV. The solid lines are our ’standard’ calculations, the dashed lines are the calculations with zero formation time, and the dotted lines display the result without final state interaction. From the $`K^+`$-spectra one sees that the final state interactions primarily enhance the low momentum yield. For the $`K^{}`$-mesons the high momentum tail is significantly reduced by the final state interactions. A comparison of these spectra to future experimental data will therefore be helpful with respect to our understanding of the interactions of kaons with nucleons in the nuclear medium.
## V Summary
We have presented a calculation of photoproduction of pions, etas, kaons, and antikaons in the energy range from 1 to 7 GeV in <sup>12</sup>C and <sup>208</sup>Pb within a model that contains shadowing of the incoming photon and treats the outgoing particles in a coupled channel semi-classical BUU transport model. It thus goes beyond the standard Glauber treatment because it can describe all possible chains that can lead to the final state under investigation. Predictions for total cross sections as well as momentum differential cross sections were given.
We have investigated in detail the influence of shadowing for the incoming photon and final state interaction of the produced particles on our results. A comparison of experimental data to our results will clarify if our treatment of the in-medium meson-nucleon interactions is correct.
In particular, we have shown that $`\pi ^+\pi ^{}`$ invariant mass spectra exhibit only a very small sensitivity on medium modifications of the $`\rho `$ meson.
Our calculations of photoproduction of mesons in nuclei are based on the input of photoproduction of mesons on nucleons, for which so far only few experimental data in the relevant energy range exist. New experimental data are urgently needed in order to remove uncertainties coming from the treatment of the elementary process. There are also uncertainties coming from our description of the hadronic interactions because not all channels have yet been measured experimentally or are in principle unmeasurable. However, we do not expect these uncertainties to have a large influence on the results reported here since we have calculated rather inclusive observables for which the total employed cross sections are most decisive.
In assessing the overall reliability of the predictions made here we point out that the method (and code) used here has been shown to provide a very good description of $`\pi `$, $`2\pi `$ and $`\eta `$ photoproduction data in the MAMI energy regime (up to 800 MeV) . In the present paper we show that the method also describes shadowing very well up to photon energies of a few GeV. The same method also gives an excellent description of meson production in heavy-ion collisions in the GeV regime, typically describing these data within a factor $`<2`$. In these latter reactions the same final state interactions take place as in the present calculation. Thus, both the initial state (shadowing) and the final state interactions with their complex coupled channel effects are well under control. We thus feel confident that the accuracy of the present calculations is as good as that observed in the other reaction channels.
Finally, we wish to mention that the calculations reported here can be extended to the case of electroproduction .
## A Kaon-nucleon and antikaon-nucleon collisions
The elastic cross section $`K^+p\to K^+p`$ is parameterized for invariant energies below the string threshold $`\sqrt{s}<2.2`$ GeV by the following expression:
$$\sigma _{K^+pK^+p}=\frac{a_0+a_1p+a_2p^2}{1+a_3p+a_4p^2},$$
(A1)
where $`p`$ denotes the kaon momentum in the rest frame of the nucleon and we use: $`a_0=10.508`$ mb, $`a_1=3.716`$ mb/GeV, $`a_2=1.845`$ mb/GeV<sup>2</sup>, $`a_3=0.764`$/GeV, $`a_4=0.508`$/GeV<sup>2</sup>. In Fig. 15 (upper part) we show that this gives a good description of the experimental data. For scattering on a neutron there exist in the relevant energy range only a few experimental data for the charge exchange process $`K^+n\to K^0p`$ and no data for the process $`K^+n\to K^+n`$. Therefore, we assume on the neutron the same total elastic cross section (including charge exchange):
$`\sigma _{K^+n\to K^+n}=\sigma _{K^+n\to K^0p}={\displaystyle \frac{1}{2}}\sigma _{K^+p\to K^+p}.`$ This is a rather crude approximation which is also not in line with the experimental data for $`K^+n\to K^0p`$ at low momenta . However, for the calculations presented here this is not essential since all these cross sections are small and, in particular, for low momenta they play only a small role for the phase space distributions of the particles involved.
The inelastic kaon-nucleon cross section is obtained by a spline interpolation through selected data points of the total cross section after subtraction of the elastic contribution. The resulting cross sections are displayed in Fig. 15 (middle part for $`K^+p`$, lower part for $`K^+n`$). We assume the inelastic cross section to consist only of $`K\pi N`$ states which is a good approximation since these cross sections are only used for invariant collision energies below 2.2 GeV. The cross sections for $`K^0`$ scattering on nucleons follow from isospin symmetry:
$`\sigma _{K^0p}=\sigma _{K^+n},\sigma _{K^0n}=\sigma _{K^+p}.`$
In case of antikaon scattering on nucleons we first get contributions to the cross sections from the $`S=-1`$ resonances listed above, which we treat analogously to pion-nucleon scattering as incoherent Breit-Wigner type contributions. From Fig. 16 one sees that these contributions alone – unlike the case of pion-nucleon scattering – do not suffice to describe the cross sections for the different channels. Therefore we have introduced a non-resonant background cross section of the following form for the different channels $`i`$ in $`K^{}p`$ scattering:
$$\sigma _{K^{}p\to i}^{bg}=a_0\frac{p_f}{p_is}\left(\frac{a_1^2}{a_1^2+p_f^2}\right)^{a_2},$$
(A2)
where $`p_i`$ and $`p_f`$ denote the cms momenta of the initial and final particles, respectively and $`\sqrt{s}`$ is the total cms energy. The parameters $`a_j`$ are given in Table II. For $`K^{}p\to K^{}p`$ this parameterization is only used for invariant energies below 1.7 GeV. For larger energies we use a spline interpolation through selected experimental data points because the broad bump in the experimental data around 1.8 GeV can hardly be described by a simple fit function. From Fig. 16 one sees that our parameterizations give a good description of the experimental data for $`K^{}p\to K^{}p,\overline{K}^0n,\mathrm{\Lambda }\pi ^0,\mathrm{\Sigma }^+\pi ^{},\mathrm{\Sigma }^{}\pi ^+,\mathrm{\Sigma }^0\pi ^0`$.
In order to describe the total $`K^{}p`$ cross section in the relevant energy range it is necessary to also include channels with more than 2 particles in the final state. This is done here by including the process $`\overline{K}N\to Y^{*}\pi `$ with a constant matrix element for all hyperon resonances. The cross section is then given as:
$$\sigma _{\overline{K}N\to Y^{*}\pi }=C\frac{\left|\mathcal{M}\right|^2}{p_is}\int ^{\sqrt{s}-m_\pi }d\mu \,p_f\,𝒜_{Y^{*}}(\mu ),$$
(A3)
where the isospin factor $`C`$ is given by the following expression of Clebsch-Gordan coefficients:
$$C=\underset{I}{\sum }\left(\langle {\textstyle \frac{1}{2}}{\textstyle \frac{1}{2}}I_{z,\overline{K}}I_{z,N}|{\textstyle \frac{1}{2}}{\textstyle \frac{1}{2}}II_{z,tot}\rangle \,\langle I_{Y^{*}}1I_{z,Y^{*}}I_{z,\pi }|I_{Y^{*}}1II_{z,tot}\rangle \right)^2.$$
(A4)
In Eq. (A3) $`𝒜_{Y^{*}}`$ stands for the spectral function of the hyperon resonance $`Y^{*}`$ that is treated analogously to the ones for the nucleonic resonances . For the squared matrix element we use $`|\mathcal{M}|^2=22`$ mb GeV<sup>2</sup> and we take into account for this process only the hyperon resonances with a mass above 1.6 GeV. In Fig. 17 (upper part) we show that the sum of all partial contributions describes the total $`K^{}p`$ cross section very well.
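A compact numerical version of Eq. (A3) for a single hyperon resonance might look as follows; the constant-width Breit-Wigner spectral function, the lower integration limit, and the default resonance parameters and isospin factor are simplifications introduced only for this illustration (the actual calculation uses the mass-dependent widths described above).

```python
import numpy as np

def p_cm(srt, m1, m2):
    """Two-body CM momentum (GeV) at total energy srt."""
    return np.sqrt((srt**2 - (m1 + m2)**2) * (srt**2 - (m1 - m2)**2)) / (2.0 * srt)

def spectral(mu, M0, Gamma):
    """Normalized relativistic Breit-Wigner with constant width (placeholder spectral function)."""
    return (2.0 / np.pi) * mu**2 * Gamma / ((mu**2 - M0**2)**2 + mu**2 * Gamma**2)

def sigma_KbarN_Ystar_pi(srt, M0=1.775, Gamma=0.12, C=1.0, M2=22.0,
                         mK=0.494, mN=0.938, mpi=0.138):
    """Eq. (A3) in mb for one resonance; M2 = |M|^2 in mb GeV^2, C = isospin factor."""
    p_i = p_cm(srt, mK, mN)
    mu_max = srt - mpi
    mus = np.linspace(1.4, mu_max, 300)           # lower limit chosen ad hoc for this sketch
    integrand = [p_cm(srt, mu, mpi) * spectral(mu, M0, Gamma) for mu in mus]
    return C * M2 / (p_i * srt**2) * np.trapz(integrand, mus)

print(sigma_KbarN_Ystar_pi(2.0))   # illustrative evaluation at sqrt(s) = 2 GeV
```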
For $`K^{}n`$ scattering the elastic scattering cross section can be described by the resonance contributions with a small constant background cross section:
$`\sigma _{K^{}n\to K^{}n}^{bg}=4\mathrm{mb}.`$ The cross section for $`K^{}n\to \mathrm{\Lambda }\pi ^{}`$ follows by isospin symmetry from $`K^{}p\to \mathrm{\Lambda }\pi ^0`$:
$`\sigma _{K^{}n\to \mathrm{\Lambda }\pi ^{}}=2\sigma _{K^{}p\to \mathrm{\Lambda }\pi ^0}.`$ In the $`\mathrm{\Sigma }\pi `$ channel we have from isospin symmetry:
$`\sigma _{K^{}n\to \mathrm{\Sigma }^{}\pi ^0}=\sigma _{K^{}n\to \mathrm{\Sigma }^0\pi ^{}}`$ and for the non-resonant background we assume:
$`\sigma _{K^{}n\to \mathrm{\Sigma }^{}\pi ^0}^{bg}={\displaystyle \frac{1}{2}}\left(\sigma _{K^{}p\to \mathrm{\Sigma }^{}\pi ^+}^{bg}+\sigma _{K^{}p\to \mathrm{\Sigma }^+\pi ^{}}^{bg}\right).`$ This is a rather crude approximation. However, for the calculations presented here it is only of primary importance that the total cross section, summed over all final states, is correctly described. The result for the total $`K^{}n`$ cross section is displayed in Fig. 17 (lower part) and one sees that the available experimental data are reasonably well reproduced. The cross sections for $`\overline{K}^0`$ scattering on nucleons immediately follow from isospin symmetry:
$`\sigma _{\overline{K}^0p}=\sigma _{K^{}n},\sigma _{\overline{K}^0n}=\sigma _{K^{}p}.`$
## B Strangeness production in photon-nucleon collisions
The exclusive strangeness production processes $`\gamma N\to \mathrm{\Lambda }K,\mathrm{\Sigma }K,NK\overline{K}`$ are fitted to available experimental data. The channel $`YK`$ is parameterized by the following expression:
$$\sigma _{\gamma N\to YK}=a_0\frac{p_f}{p_is}\frac{a_1^2}{a_1^2+(\sqrt{s}-\sqrt{s}_0)^2},$$
(B1)
where $`\sqrt{s}_0`$ denotes the invariant threshold energy. We assume the constants $`a_0`$ and $`a_1`$ to be independent of the isospin states of the incoming and outgoing particles and use: $`a_0^\mathrm{\Lambda }=13`$ $`\mu `$bGeV<sup>2</sup>, $`a_0^\mathrm{\Sigma }=15`$ $`\mu `$bGeV<sup>2</sup>, $`a_1^\mathrm{\Lambda }=0.5`$ GeV, $`a_1^\mathrm{\Sigma }=0.4`$ GeV. In Fig. 18 it is shown that these parameterizations describe the experimental data reasonably well.
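A direct transcription of Eq. (B1) with the parameters just quoted reads as follows; the hadron masses and the identification of $`p_i`$ with the photon cms momentum are the only inputs added here, and the printed value is merely an illustrative evaluation at $`\sqrt{s}=2`$ GeV.

```python
import numpy as np

def p_cm(srt, m1, m2):
    """Two-body CM momentum (GeV) at total energy srt."""
    return np.sqrt((srt**2 - (m1 + m2)**2) * (srt**2 - (m1 - m2)**2)) / (2.0 * srt)

def sigma_gammaN_YK(srt, a0, a1, mY, mK=0.494, mN=0.938):
    """Eq. (B1) in microbarn: a0 in mub GeV^2, a1 in GeV, srt = invariant energy in GeV."""
    srt0 = mY + mK                               # threshold energy sqrt(s)_0
    if srt <= srt0:
        return 0.0
    p_i = (srt**2 - mN**2) / (2.0 * srt)         # photon cms momentum
    p_f = p_cm(srt, mY, mK)
    return a0 * p_f / (p_i * srt**2) * a1**2 / (a1**2 + (srt - srt0)**2)

# Lambda K channel with a0 = 13 mub GeV^2 and a1 = 0.5 GeV as quoted in the text
print(sigma_gammaN_YK(2.0, a0=13.0, a1=0.5, mY=1.116))
```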
The cross section for antikaon production $`\gamma N\to NK\overline{K}`$ is parameterized by:
$$\sigma _{\gamma N\to NK\overline{K}}=a_0\frac{16(2\pi )^7}{p_i\sqrt{s}}\mathrm{\Phi }_3\frac{a_1^2}{a_1^2+(\sqrt{s}-\sqrt{s}_0)^2},$$
(B2)
where $`\mathrm{\Phi }_3`$ denotes the 3-body phase space as, for example, given by Eq. (35.11) in Ref. . From the experimental data for $`\gamma p\to pK^+K^{}`$ we obtain $`a_0=12`$ $`\mu `$b and $`a_1=0.7`$ GeV (resulting cross section shown in Fig. 18). These values are used for all isospin channels.
In Fig. 19 we compare our treatment of strangeness production in photon-proton reactions to the available experimental data on inclusive strangeness production. The dashed line shows the sum of the exclusive processes $`\gamma p\to \mathrm{\Lambda }K,\mathrm{\Sigma }K,NK\overline{K}`$; the dotted line is the strangeness production cross section that results from the string fragmentation model FRITIOF. We note that the string fragmentation model does not produce the exclusive channels, so that there is no double counting involved here. The total inclusive strangeness production cross section, as the sum of the contributions from the exclusive channels and the string model, is displayed as the solid line in Fig. 19. The experimental data are described rather well up to a photon energy of 4 GeV, but for larger energies the experimental cross section is overestimated by about 50%. The experimental data seem to indicate a saturation of the cross section already at 4 GeV which is hard to explain. Therefore we use the described cross sections keeping in mind that our treatment of the elementary photon-nucleon reaction might need to be refined when new and more reliable experimental data become available.
# Information Theory, Quark Clusters in Nuclei, and Parton Distributions
## 1 Introduction
Information Theory was originally developed by Hartley , Nyquist , and Shannon in order to establish a mathematical description of telecommunications and to understand how information may be lost upon transmission over noisy channels. Shannon, in particular, developed a complete formalism in which the concept of information is quantified and important theorems regarding its transmission are proven. The fundamental quantity that measures information is information entropy. Shannon has shown that the information entropy is the most suitable function of the probabilities for emission of signals by an ergodic source that measures the magnitude of the receiver’s uncertainty on those signals. In this sense information entropy is a true measure of one’s ignorance of their correct content. The larger the entropy the greater the uncertainty and, consequently, the information content. Therefore, maximizing the information entropy can lead to evaluation of the signal probability distribution under the constraints imposed by the telecommunications problem at hand.
Application of this theory in physics could be versatile. All quantum phenomena, for example, are stochastic in nature and are described in terms of probability amplitudes. Then by appropriately defining the information entropy of the physical system under consideration and maximizing it under constraints imposed by theoretical assumptions or experimental data one can obtain the probability amplitudes that are most consistent with one’s ignorance of the system. This method has been used by Plastino to evaluate wave functions for various physical systems. It is a very powerful technique since it does not rely on any specific modeling but only on what is actually known about the system to derive best estimates for what is unknown.
In this article we apply Information Theory to improve parton, specifically gluon, distributions in nuclei. These are suitable subjects for the method because they are probabilistic in character. The problem whose solution we wish to improve is that of quarkonium suppression in proton-nucleus collisions at very high energies. The production of charmonium states, most notably of the $`J/\mathrm{\Psi }`$ boson as well as of bottomonium ones, especially the $`\mathrm{{\rm Y}}`$ resonances, has been observed in various experiments involving heavy nuclear targets to be lower than in hydrogen if the latter is multiplied by the mass number of the nucleus. At energies achieved at Fermilab $`J/\mathrm{\Psi }`$ and $`\mathrm{{\rm Y}}`$ suppression is very pronounced and exhibits a characteristic dependence on the momentum fraction of the target nucleon carried by the struck parton . Many models have been developed to explain this behavior. For the purposes of demonstrating the Information Theory method we consider a model that is based on the assumption that quarks in nuclei have a finite probability to conglomerate forming multi-quark color singlet states, usually called (multi)quark clusters . The parton distributions in such clusters differ from those in single nucleons and generally are concentrated at lower momentum fractions of the partons as the cluster becomes larger. This model supplemented by final state dissociation of the produced quarkonium has successfully described $`J/\mathrm{\Psi }`$ suppression in hadron-nucleus collisions as well as most heavy ion collisions, the latter being the topic of heated debate due to their relevance to Quark-Gluon Plasma production. In this model very simple parton distributions have been used based on very general assumptions. However, the gluon distributions which play a crucial role in quarkonium production are poorly known in nuclei. It turns out that within this model the gluon distributions that solve the problem of $`J/\mathrm{\Psi }`$ suppression are inadequate to describe $`\mathrm{{\rm Y}}`$ suppression from the same experiment. We shall use Information Theory to improve them in a manner that maintains their applicability to the $`J/\mathrm{\Psi }`$ data and, at the same time, enhances agreement with the $`\mathrm{{\rm Y}}`$ data.
## 2 Information Theory
Suppose an ergodic source of information, which can be anything from a telegraphic device to a quantum system, produces signals $`x`$ from some available ensemble $`X`$ with probability distribution $`p(x)`$. We define the information entropy of the source as
$$S=-\underset{x}{\sum }p(x,\alpha _i)\mathrm{ln}p(x,\alpha _i),$$
(1)
where the summation includes all instances of $`x`$ in $`X`$ and can indicate an integral if $`x`$ is a continuous variable and $`\alpha _i`$ are a group of fixed parameters in the function $`p`$. It is required that
$$\underset{x}{\sum }p(x,\alpha _i)=1.$$
(2)
If the logarithm is binary then $`S`$ is expressed in bits. Considered as a functional of the probability distribution, $`S`$ is maximal when $`p`$ is uniform, i.e., when our uncertainty about the emitted signals is largest; conversely, we know the most about a system when the signals it produces have no variability. To obtain the optimal function $`p(x,\alpha _i)`$, i.e., to estimate the “best” set of parameters $`\alpha _i`$, we impose the extremization condition
$$\frac{\partial S}{\partial \alpha _i}=0,$$
(3)
for all $`i`$. To ensure a maximum, second order derivatives must be examined as well. In this work we assume a certain functional form for $`p`$ and simply want to determine its parameters. The procedure can be generalized to include cases in which we do not know the exact form of $`p`$ but there are constraints based on data. The method of Lagrange multipliers can be used to evaluate distribution functions when a number of expectation values are known .
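As a toy numerical illustration of this last remark (independent of the physics application that follows), the sketch below finds the discrete distribution of maximal entropy for a six-sided die when only the mean value is known; the outcome is the familiar exponentially tilted distribution.

```python
import numpy as np
from scipy.optimize import minimize

x = np.arange(1, 7)                      # possible signals: faces of a die
target_mean = 4.5                        # the single known expectation value

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)          # avoid log(0)
    return np.sum(p * np.log(p))         # minus the information entropy S

constraints = ({'type': 'eq', 'fun': lambda p: np.sum(p) - 1.0},
               {'type': 'eq', 'fun': lambda p: np.dot(p, x) - target_mean})
result = minimize(neg_entropy, np.ones(6) / 6.0, method='SLSQP',
                  bounds=[(0.0, 1.0)] * 6, constraints=constraints)
print(result.x)                          # probabilities rise roughly exponentially with x
```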
## 3 Quarkonium Suppression in p-A Collisions
### 3.1 Facts and models
The observed suppression of the $`J/\mathrm{\Psi }`$, $`\mathrm{\Psi }^{}`$ production cross section per unit mass number, $`A`$, in high energy hadron-nucleus and nucleus-nucleus collisions exhibits a strong nuclear dependence. The hadron-nucleus data also show that the depletion increases with the longitudinal momentum of the quarkonium. These results have generated many theoretical studies. A variety of effects are thought to contribute and numerous models have been suggested. These contributions may be grouped into six major categories: (1) Quarkonium scattering off parton and/or hadron co-movers ; (2) Glauber inelastic scattering on the nucleons ; (3) Shadowing and EMC distortions of the nuclear parton distributions ; (4) Parton scattering before and after the hard process; (5) Parton energy loss in the initial and/or the final state and, (6) Intrinsic charm in nucleons . Color transparency may alter the charmonium Glauber absorption . In the case of heavy ion collisions contribution of an unconfined state (Quark Gluon Plasma) has probably been very small in earlier experiments but is currently debated due to new data . In our view more than one contribution must be carefully balanced to explain the quarkonium suppression. Here the modifications of the initial state parton distributions (for all values of the Bjorken variable, $`x`$) and the final state inelastic scattering (absorption) are considered.
In this article the term “EMC effect”, named after the European Muon Collaboration, will signify any deviation from unity of the structure function ratio of a bound nucleon to that of a free one at any value of the variable $`x`$, the latter defined as the fraction of the nucleon momentum carried by the interacting parton.
### 3.2 Quark clusters in nuclei
The EMC effect has been studied in the framework of the expansion of a nuclear state $`|A\rangle `$ on a complete basis of color singlet states labeled by the number of (3, 6, 9 or more) valence quarks they contain. Such states, also referred to as multi-quark clusters, are formed when nucleons bound in a nucleus overlap sufficiently so that they share their constituent partons. The probabilities for multi-quark cluster formation can be estimated using nuclear wave functions or can be found by fitting DIS data . The agreement with EMC and NMC data is excellent down to $`Q^2\approx 2`$ GeV<sup>2</sup> and the fit strongly constrains the quark momentum distributions in clusters. (Momentum conservation in clusters fixes the total momentum fraction carried by the gluons.) Good description of the small enhancement above unity of the EMC ratio (antishadowing) around $`x=0.1`$ requires inclusion of up to 12q clusters but the essential features of the data are accommodated by a truncation to the 6q term and use of an effective 6q cluster probability, $`f`$. The observed shadowing of the structure function, $`F_2`$, in the nucleus for low $`x_B`$ and the depletion for intermediate $`x`$ combined with the QCD sum rules require the presence of antishadowing of $`F_2`$ for other values of $`x`$. This antishadowing, however, need not be restricted to the range $`0.05<x<0.2`$. In fact, this model predicts antishadowing for $`x>0.8`$ in agreement with the data of Ref. on the slope of the Ca over deuterium structure function ratio. In addition, the electron DIS data from SLAC confirm the excellent agreement of this model with the observed nuclear dependence for all $`x>0.1`$ . The model has been successfully applied to explain nuclear effects in Drell-Yan processes and gives interesting predictions for these effects at RHIC energies . Direct photon production in hadron-nucleus collisions has also been predicted to be altered by nuclear effects.
The probability $`f`$ depends on $`A`$ approximately logarithmically and is 0.040-0.052 for deuterium . Very dense nuclei (<sup>4</sup>He) have $`f`$ values larger than the logarithmic prediction. In the scaling limit the Nq proton-like cluster parton momentum distributions are assumed to have standard forms,
$`V_N^{u,d}(x)`$ $`=`$ $`B_N^{u,d}\sqrt{x}(1-x)^{b_N^{u,d}},b_N^d=b_N^u+1,`$ (4)
$`S_N(x)`$ $`=`$ $`A_N(1-x)^{a_N},A_N=x_N^S(1+a_N),`$ (5)
$`G_N(x)`$ $`=`$ $`C_N(1-x)^{c_N},C_N=x_N^G(1+c_N).`$ (6)
The exponents that best describe the NMC data are ($`b_3^u,a_3,b_6^u,a_6`$) = (3, 9, 10, 11). For the gluon exponents the direct $`\gamma `$ data suggest ($`c_3,c_6`$) = (6, 10) . The valence exponents approximately follow the dimensional counting rules. The ocean consists of three species of quarks (and their antiparticles) with the strange distribution being half as large as the up (or down) quark sea distribution. The gluon momentum fraction is taken 5 times larger than the ocean one. Isospin invariance relations connect the distributions that belong to the same isospin states, thus reducing the number of independent parameters. Kinematics forces the fraction of the cluster momentum carried by a parton in 6q clusters, $`x^{(6)}`$, to be half as large as that in nucleons, $`x^{(3)}`$. In this model the QCD sum rules are explicitly obeyed. Further description can be found in Refs. .
### 3.3 Theoretical cross sections
Including 3q and 6q clusters in the nucleus the doubly differential hadron level cross section for quarkonium production is calculated from the $`\alpha _s^2`$ order parton level ones (with both quark annihilation ($`q\overline{q}`$) and gluon fusion ($`gg`$) included). The latter are convoluted with the appropriate sums of products of parton distributions $`H_{q\overline{q}}^{(i)}(x_1,x_2^{(i)})`$ and $`H_{gg}^{(i)}(x_1,x_2^{(i)})`$ ($`i=3,6`$), respectively . The indices 1 and 2 refer to the probe (moving in the $`+z`$ direction in the laboratory) and the target, respectively. The duality (in effect color evaporation) hypothesis is applied to integrate the differential cross section over $`m^2`$ with $`m`$ ranging from twice the $`c(b)`$ quark mass, $`2m_{c(b)}`$, to the open charm (bottom) threshold, $`2m_{D(B)}`$. Here $`m_D=1.864`$ GeV and $`m_B=5.278`$ GeV. The duality constant, $`F_d`$, is simply the portion of the total charmonium cross section (up to a given order) that corresponds to the quarkonium; it cancels in cross section ratios. The resulting cross section can be expressed as a function of the longitudinal momentum fraction carried by the produced quark-antiquark pair, $`x_F=2p_L/\sqrt{s}`$. The higher order corrections are assumed to result in a multiplicative factor, $`K`$. The transverse momentum dependence due to higher order diagrams is, thus, integrated over and that due to the intrinsic transverse momentum of the partons is neglected. These considerations lead to the order $`\alpha _s^2`$ equation
$$\frac{d\sigma ^{(A)}}{dx_F}=KF_d\int _{4m_{c(b)}^2}^{4m_{D(B)}^2}dm^2\underset{i=3,6}{\sum }J^{(i)}\left[H_{q\overline{q}}^{(i)}\widehat{\sigma }_{q\overline{q}}(m^2)+H_{gg}^{(i)}\widehat{\sigma }_{gg}(m^2)\right],$$
(7)
where $`\widehat{\sigma }_{q\overline{q}}`$ and $`\widehat{\sigma }_{gg}`$ are the partonic-level cross sections for the two hard processes and $`J^{(i)}`$ are the Jacobians that transform $`x_1`$ and $`x_2^{(i)}`$ to $`x_F`$ and $`m^2`$. In this way the entire quarkonium yield is found. The various final states being unitary rearrangements of one another have up to this point the same dependence on EMC-type nuclear effects at the same $`\sqrt{s}`$. The perturbative $`Q^2`$ evolution of the parton distributions is omitted as it largely cancels in the calculation of cross section ratios. In addition, the $`Q^2`$ values that are relevant to the production of quarkonium states are inside the scaling region in which the parton distributions used in this work are valid.
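Schematically, the integration in Eq. (7) can be organized as in the following sketch, written here for the 3q term only and for the usual leading-order kinematics $`x_F=x_1-x_2`$, $`m^2=x_1x_2s`$; the flux factors $`H`$ and the partonic cross sections $`\widehat{\sigma }`$ are left as user-supplied functions, and the charm mass value in the default integration limits is an assumption made only for this example.

```python
import numpy as np

def dsigma_dxF(xF, srt, H_qq, H_gg, sig_qq, sig_gg,
               m_lo=2*1.35, m_hi=2*1.864, K=1.0, Fd=1.0, n_m=200):
    """Schematic numerical version of Eq. (7), 3q term only.
    H_qq(x1,x2), H_gg(x1,x2): parton-distribution flux factors (user supplied).
    sig_qq(m2), sig_gg(m2):   partonic cross sections (user supplied)."""
    s = srt**2
    m2_grid = np.linspace(m_lo**2, m_hi**2, n_m)
    dm2 = m2_grid[1] - m2_grid[0]
    total = 0.0
    for m2 in m2_grid:
        tau = m2 / s
        x1 = 0.5 * (xF + np.sqrt(xF**2 + 4.0 * tau))   # from xF = x1 - x2, m^2 = x1 x2 s
        x2 = tau / x1
        if x1 >= 1.0 or x2 >= 1.0:
            continue
        jac = x1 * x2 / (m2 * (x1 + x2))               # Jacobian J as given in the text
        total += jac * (H_qq(x1, x2) * sig_qq(m2) + H_gg(x1, x2) * sig_gg(m2)) * dm2
    return K * Fd * total
```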
### 3.4 Final state absorption
After its production the $`c\overline{c}`$ ($`b\overline{b}`$) system propagates in the nuclear medium developing into a quarkonium state which then decays into the observed lepton pairs. During this stage the system may be inelastically scattered by 3q and 6q clusters. It is reasonable to assume that the scattering occurs with clusters bound in the nucleus since the break up time of the latter exceeds the time the pair needs to traverse the nuclear radius. The $`J/\mathrm{\Psi }`$ absorption cross section has been measured to be $`\sigma _{abs}^{(3)}\approx `$ 3.5 mbarn/nucleon . This number has been extracted in a kinematic range in which parton distribution modifications are negligibly small ($`x_2\approx 0.22`$, the point at which the EMC ratio crosses the unit line) and since it is the average absorption cross section over the path traversed in the nucleus, color transparency effects are already in it. The $`\mathrm{\Psi }^{}`$ is attenuated a little more than the $`J/\mathrm{\Psi }`$ due to its larger radius. It is also assumed here that the cross section for bottomonium ($`\mathrm{{\rm Y}}`$ states) absorption is of the same order as that of charmonium but smaller ($`\approx 3`$ mbarn/nucleon) due to its more compact size.
The cross section on a 6q cluster is $`2^{3/2}`$ times larger than that on a nucleon (bag model estimate) but the density of scattering centers in the medium is reduced when $`f\ne 0`$. Then
$$\rho \sigma _{abs}=\sigma _{abs}^{(3)}\rho _A[(1-f)+2^{3/2}f]/(1+f),$$
(8)
where $`\rho _A`$ is the number density of the nucleus $`A`$ taken as constant within the nuclear volume. The average path length the $`c\overline{c}`$ ($`b\overline{b}`$) pair travels in an approximately spherical nucleus of radius $`r_A`$, estimated by means of a simple geometrical (not eikonal) calculation, is $`L_A=2r_A/\pi `$. The experimentally measured nuclear RMS radii are employed to compute $`\rho _A`$ and $`L_A`$.
$$P_A=\mathrm{exp}[-\rho \sigma _{abs}L_A].$$
(9)
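A few lines suffice to evaluate this attenuation factor. In the sketch below the conversion of the measured RMS radius to an equivalent sharp-sphere radius and constant density is an assumption made for the illustration (the text does not specify this step), and the numerical inputs in the final line are likewise only indicative.

```python
import numpy as np

def survival_factor(A, r_rms, f, sigma3_mb=3.5):
    """Final-state attenuation P_A of Eqs. (8) and (9).
    A: mass number, r_rms: measured RMS radius (fm), f: effective 6q cluster probability."""
    R = np.sqrt(5.0 / 3.0) * r_rms                        # equivalent sharp-sphere radius (fm)
    rho_A = A / (4.0 / 3.0 * np.pi * R**3)                # constant nucleon density (fm^-3)
    L_A = 2.0 * R / np.pi                                 # average path length (fm)
    sigma3 = sigma3_mb * 0.1                              # mb -> fm^2
    rho_sigma = sigma3 * rho_A * ((1.0 - f) + 2.0**1.5 * f) / (1.0 + f)   # Eq. (8)
    return np.exp(-rho_sigma * L_A)                        # Eq. (9)

print(survival_factor(A=184, r_rms=5.4, f=0.15))           # heavy target, indicative values
```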
## 4 Charmonium Suppression
The nuclear dependence is extracted by taking the ratio, $`R_A`$, of the $`J/\mathrm{\Psi }`$ production cross section per unit $`A`$ in collisions of a hadron, proton in this case, with a heavy nucleus to that in collisions with a light one and examining its $`x_F`$ dependence. Since the $`K`$-factor may also depend on $`x_F`$ at first we examine the ratio of the experimental to the order $`\alpha _s^2`$ theoretical cross section. In Fig. 1 we present this ratio for Cu and Be using a representative set of parameters for our model and the data of Ref. . $`F_d`$ is the same in both cases and cancels in the ratio. Clearly the $`K`$-factor exhibits a strong $`x_F`$ dependence but, most importantly for our purposes, is the same for the two nuclei. This implies that it will cancel in ratios of two theoretical or experimental cross sections. The origin of this factor must, thus, lie in processes that do not depend on the size and intrinsic properties of the nucleus (the absorption part is included in the results of Fig. 1). The $`x_F`$ dependence of $`K`$ in Fig. 1 agrees with that of the ratio of the “diffractive” to hard cross sections in Ref. . Issues related to the relative suppression of various charmonium states are outside the scope of this article.
The results for $`R_A`$ are confronted with the data of Ref. in Fig. 2. The dotted lines in Fig. 2 represent the prediction of the full model including 6q clusters and final state absorption with the lowest (highest) value of $`f`$ for the heavy (light) nucleus and the largest values of the 6q ocean and gluon exponents, ($`a_6,c_6`$) = (12, 11); the solid ones correspond to the opposite $`f`$ combination and ($`a_6,c_6`$) = (10, 9). In order to make the influence of the initial and final state contributions to $`R_A`$ clear in Fig. 2(d) the results without final state absorption are shown (short and long dash curves with the same connotation as the dotted and solid ones, respectively) as well as the result with only final state absorption, $`f=0`$ (dot-dash line). It is evident that the predictions of the full model are in agreement with the data. We note in passing that for large negative $`x_F`$ this model predicts antishadowing of $`J/\mathrm{\Psi }`$ production in p-A collisions because in this region large $`x_2`$ values for the gluon distribution ratio are accessed.
## 5 Bottomonium Suppression
Using the model we described earlier we can also calculate the suppression ratio for the $`\mathrm{{\rm Y}}`$ states and compare the results with the data of Ref. . Specifically we calculate the exponent $`\alpha `$ defined by means of the equation
$$\frac{d\sigma ^{(A)}}{dx_2}=A^\alpha \frac{d\sigma ^{(d)}}{dx_2},$$
(10)
where the superscripts refer to large nuclei ($`A`$) or deuterium ($`d`$) targets and $`x_2=x_2^{(3)}`$. At this point it is instructive to observe that gluon fusion dominates the charmonium production process and is very important for bottomonium production as well. Therefore, the ratio of cross sections to a large extent reflects the ratio of gluon distributions, $`R_G^{(A)}`$. It is not hard to see that with the given definition of gluon distributions $`R_G^{(A)}`$ monotonically increases with $`x_2`$. As shown in Fig. 3, however, the data of Ref. contradict this theoretical prediction. At $`x_2\approx 0.15`$ there is a wide “bump” and the ratio starts decreasing at higher $`x_2`$. The reason that this behavior becomes more apparent in the case of $`\mathrm{{\rm Y}}`$ production is the fact that the $`\mathrm{{\rm Y}}`$ is much more massive than the $`J/\mathrm{\Psi }`$. For a given center of mass energy, a particular quarkonium momentum, $`x_F`$, probes a larger $`x_2`$ value in the $`\mathrm{{\rm Y}}`$ case. The gluon distributions, being the most relevant and the least known among all the partons, should be the first candidates for improvement.
### 5.1 Improved gluon distributions
At this point we turn to Information Theory. The total momentum fractions carried by the partons in an Nq cluster,
$$z_N^{(a)}=\int _0^1dx\,F_N^{(a)}(x),$$
(11)
where $`a`$ designates the type of parton and $`F_N^{(a)}`$ is the momentum distribution of $`a`$, must always add up to unity. Consequently the functions $`F_N^{(a)}`$ satisfy the condition required in order to define an information entropy for them,
$$S_N=-\underset{a}{\sum }\int _0^1dx\,F_N^{(a)}(x)\mathrm{ln}F_N^{(a)}(x).$$
(12)
We shall maintain the quark distributions as defined in the previous section and modify the gluon distributions under the constraint that the total momentum fraction carried by gluons in each type of cluster is fixed and equal to that of the unmodified distributions, i.e., $`z_3^{(g)}=0.57`$ for nucleons and $`z_6^{(g)}=0.60`$ for 6q clusters.
The trend of the $`\mathrm{{\rm Y}}`$ data suggests that the simplest possible modification to the gluon distributions is the addition of a term linear in $`x`$. We will, then, define corrected momentum distributions for the gluons in nucleons and 6q clusters as
$$G_N(x)=C_N(1-x)^{c_N}+C_N^{}x.$$
(13)
For each $`N`$ we now have two unknown parameters $`C_N`$ and $`C_N^{}`$. Momentum conservation, i.e., that the fractions $`z_N^{(g)}`$ are constant, imposes the condition
$$C_N^{}=2z_N^{(g)}-\frac{2C_N}{c_N+1},$$
(14)
where the exponents $`c_N`$ are kept fixed, $`c_3=6`$ and $`c_6=10`$. Then we evaluate $`C_N`$ from the requirement
$$\frac{\partial S_N}{\partial C_N}=0.$$
(15)
The maximization procedure yields the following numbers: $`(C_3,C_3^{})=(1.163,0.812)`$ and $`(C_6,C_6^{})=(1.411,0.987)`$. We can compare these numbers with the uncorrected ones: $`(C_3,C_3^{})_{uncorr}=(4.130,0.0)`$ and $`(C_6,C_6^{})_{uncorr}=(6.624,0.0)`$. We have thus changed the shape of the function without affecting its integral, the total gluon momentum. The behavior of the new functions differs from that of the old ones mostly in the large $`x`$ region, for which, on the other hand, we have little experimental data. We note that the ocean distributions could not accommodate this type of alteration because deeply inelastic scattering imposes a constraint on the ratio of neutron to proton structure functions, $`F_2^{(n)}(x)/F_2^{(p)}(x)\to 1/4`$ as $`x\to 1`$.
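The momentum-conservation constraint can be checked directly; the short sketch below computes $`C_N^{}`$ from Eq. (14) for the corrected nucleon parameters and verifies that the gluon momentum fraction of Eq. (11) stays at 0.57 (the value obtained for $`C_3^{}`$ agrees with the one quoted above up to rounding of the quoted digits).

```python
import numpy as np
from scipy.integrate import quad

def gluon_momentum(C, Cp, c):
    """Gluon momentum fraction carried by G_N(x) = C (1-x)^c + C' x, Eqs. (11) and (13)."""
    return quad(lambda x: C * (1.0 - x)**c + Cp * x, 0.0, 1.0)[0]

def Cp_from_C(C, c, z_g):
    """Momentum-conservation constraint of Eq. (14)."""
    return 2.0 * z_g - 2.0 * C / (c + 1.0)

C3, c3, z3_g = 1.163, 6, 0.57          # corrected nucleon gluon parameters from the text
Cp3 = Cp_from_C(C3, c3, z3_g)
print(Cp3, gluon_momentum(C3, Cp3, c3))   # approximately 0.81 and 0.57
```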
Using the corrected distributions we can recalculate the exponent $`\alpha `$ and compare the results with the data. With a final state absorption cross section of 3 mbarn/nucleon and the new distributions the agreement with the data is considerably improved, as shown in Fig. 3, in which the upper curve corresponds to the uncorrected model, the lower curve to the corrected model using only gluon fusion contributions, and the middle curve to the corrected model including gluon fusion and quark annihilation. It must be pointed out that this is not the only model that gives a reasonable description of the relative $`J/\mathrm{\Psi }`$ to $`\mathrm{{\rm Y}}`$ suppression data in p-A collisions. The authors of Ref. attribute the smaller suppression of $`\mathrm{{\rm Y}}`$ to the $`Q^2\sim m^2`$ evolution of the distribution functions, where $`m`$ is the mass of the resonance. Indeed, as discussed in Ref. the evolution of the ocean (and consequently the gluon) distributions leads to smaller shadowing as $`Q^2`$ decreases. Our model neglects the $`Q^2`$ evolution relying on the observation that the masses of the quarkonium resonances are already in the scaling region and attributes the reduced $`\mathrm{{\rm Y}}`$ suppression to its smaller absorption cross section. It is conceivable that both effects may, in fact, contribute to this phenomenon. We would mostly like to emphasize that it is the shape of the suppression curve that needed to be improved to account for the $`\mathrm{{\rm Y}}`$ data.
It is worth noting that, due to the fact that $`\mathrm{{\rm Y}}`$ production probes a different kinematic regime from $`J/\psi `$ production, the correction on the gluon distributions has only a small effect on the $`J/\mathrm{\Psi }`$ suppression curves. The agreement with the charmonium data is still good, as can be observed in Fig. 4, although it slightly deteriorates at small $`x_F`$ (large $`x_2`$). In addition the corrected model does not exhibit antishadowing of $`J/\mathrm{\Psi }`$ production for negative $`x_F`$, a feature that would be in contradiction with the data . Data on bottomonium and charmonium give complementary information on the gluon distributions in nuclei. In addition the $`\mathrm{{\rm Y}}`$ is a really good probe of the initial state in which it is produced due to its low absorption cross section.
### 5.2 Predictions for RHIC
We can use this model to make predictions for the $`J/\mathrm{\Psi }`$ and $`\mathrm{{\rm Y}}`$ suppression at the Relativistic Heavy Ion Collider (RHIC) with $`\sqrt{s}=200`$ GeV/nucleon. The calculation proceeds along the same lines as for p-A collisions but now there is the additional possibility of 6q-6q cluster collisions. The nuclear effect is, thus, more pronounced. The cross section for collisions of a nucleus A with a nucleus B is
$$\frac{d\sigma ^{(AB)}}{dx_F}=KF_d\int _{4m_{c(b)}^2}^{4m_{D(B)}^2}dm^2\underset{i=3,6}{\sum }\underset{j=3,6}{\sum }J^{(i,j)}\left[H_{q\overline{q}}^{(i,j)}\widehat{\sigma }_{q\overline{q}}+H_{gg}^{(i,j)}\widehat{\sigma }_{gg}\right],$$
(16)
where $`J^{(i,j)}=x_1^{(i)}x_2^{(j)}/[m^2(x_1^{(i)}+x_2^{(j)})]`$, $`i`$ and $`j`$ represent the type of colliding cluster, $`\sqrt{s}`$ is the nucleon-nucleon CM energy and $`H_{q\overline{q}(gg)}^{(i,j)}`$ are functions of parton distributions appropriate for A-B collisions.
In Fig. 5 we show theoretical results obtained with the uncorrected (curves marked by (d)) and the corrected (curves marked by (c)) models. The curves exhibiting less suppression are for the $`\mathrm{{\rm Y}}`$. The corrected model leads to larger suppression. The large $`x_F`$ behavior is now much more distinct. It can be understood if we realize that in symmetric heavy ion collisions as $`x_F\to 1`$ the large $`x`$ region for the positively moving nucleus is probed. Therefore, in the uncorrected model the ratio increases with $`x_F`$, reflecting the increase in the gluon distribution ratio, and exhibits antishadowing, while in the corrected one it flattens out and remains below unity.
## 6 Conclusions
We have applied Information Theory to improve the gluon momentum distribution functions in nuclei, including the “EMC effect”. The main idea is that by defining an information entropy, $`S`$, for those functions whose total integral is unity we can evaluate their parameters by maximizing $`S`$ with respect to them. In other words we assume that the best choice of parameters is the one that is consistent with maximal ignorance under the constraint of momentum conservation. A quark-cluster model for the “EMC effect” has been used to establish good agreement with the data on $`J/\mathrm{\Psi }`$ suppression in p-A collisions but proved inadequate to describe $`\mathrm{{\rm Y}}`$ suppression. Then Information Theory provided us with a tool to improve the model with significant success. The gluon distributions have been corrected for their behavior at large $`x`$ and an overall agreement with the differential cross sections for quarkonium suppression was achieved.
An interesting aspect of this method is that it does not rely on any specific microscopic theory which in turn should be investigated in detail but uses only very general notions. The solution that is consistent with the assumption of maximal ignorance, quantified by the information entropy, seems to be an optimal one. This method can be used to improve the parameters of more detailed and realistic parton distributions in nucleons and nuclei under constraints imposed by experimental data.
The author would like to thank R. Vogt, S. Gavin and A. Plastino for useful discussions.
# Multi-TeV Scalars are Natural in Minimal Supergravity
IASSNS-HEP-99-78
FERMILAB-PUB-99/226-T
For a top quark mass fixed to its measured value, we find natural regions of minimal supergravity parameter space where all squarks, sleptons, and heavy Higgs scalars have masses far above 1 TeV and are possibly beyond the reach of the Large Hadron Collider at CERN. This result is simply understood in terms of “focus point” renormalization group behavior and holds in any supergravity theory with a universal scalar mass that is large relative to other supersymmetry breaking parameters. We highlight the importance of the choice of fundamental parameters for this conclusion and for naturalness discussions in general.
The standard model with a fundamental Higgs boson suffers from a large and unexplained hierarchy between the weak and Planck scales . Because supersymmetric theories are free of quadratic divergences, however, this hierarchy is stabilized in supersymmetric extensions of the standard model when the scale of superpartner masses is roughly of order the weak scale $`M_{\mathrm{weak}}`$ . The promise of providing a natural solution to the gauge hierarchy problem is the primary phenomenological motivation for supersymmetry.
Because the requirement of naturalness places upper bounds on superpartner masses, this criterion has important experimental implications. In a model-independent analysis, naturalness constraints are weak for some superpartners, e.g., the squarks and sleptons of the first two generations . However, in widely studied scenarios where the scalar masses are unified at some high scale, such as minimal supergravity, it is commonly assumed that squark and slepton masses must all be $`\stackrel{<}{}1\mathrm{TeV}`$. This bound places all scalar superpartners within the reach of present and near future colliders, and is a source of optimism in the search for supersymmetry at the high energy and high precision frontiers. We show here, however, that this assumption is invalid, and in fact, it is precisely in supergravity theories with a universal scalar mass that all squark and slepton masses may naturally be far above 1 TeV.
Supersymmetric theories are considered natural if the weak scale is not unusually sensitive to small variations in the fundamental parameters. Although the criterion of naturalness is inherently subjective, its importance for supersymmetry has motivated several groups to provide quantitative definitions of naturalness . In this analysis, we adopt the following prescription:
(1) We consider the minimal supergravity framework with its 4+1 input parameters
$$\left\{P_{\mathrm{input}}\right\}=\{m_0,M_{1/2},A_0,\mathrm{tan}\beta ,\mathrm{sign}(\mu )\},$$
(1)
where $`m_0`$, $`M_{1/2}`$, and $`A_0`$ are the universal scalar mass, gaugino mass, and trilinear coupling, respectively, $`\mathrm{tan}\beta =H_u^0/H_d^0`$ is the ratio of Higgs expectation values, and $`\mu `$ is the Higgsino mass parameter. The first three parameters are at the grand unified theory (GUT) scale $`M_{\mathrm{GUT}}2\times 10^{16}\mathrm{GeV}`$, i.e., the scale where the U(1)<sub>Y</sub> and SU(2) coupling constants meet.
(2) The naturalness of each point $`𝒫\in \{P_{\mathrm{input}}\}`$ is then calculated by first determining all the parameters of the theory (Yukawa couplings, soft supersymmetry breaking masses, etc.), consistent with low energy constraints. Renormalization group (RG) equations are used to relate high and low energy boundary conditions. In particular, at the weak scale, proper electroweak symmetry breaking requires (the tree-level conditions are displayed here for clarity of presentation; in all numerical results presented below, we use the full one-loop Higgs potential , minimized at the scale $`m_0/2`$, approximately where one-loop corrections are smallest, as well as two-loop RG equations , including all low-energy thresholds )
$`{\displaystyle \frac{1}{2}}m_Z^2`$ $`=`$ $`{\displaystyle \frac{m_{H_d}^2-m_{H_u}^2\mathrm{tan}^2\beta }{\mathrm{tan}^2\beta -1}}-\mu ^2`$ (2)
$`\equiv `$ $`f(m_{H_d}^2,m_{H_u}^2,\mathrm{tan}\beta )-\mu ^2,`$
$`2B\mu `$ $`=`$ $`\mathrm{sin}2\beta (m_{H_d}^2+m_{H_u}^2+2\mu ^2),`$ (3)
where $`m_{H_u}^2`$ and $`m_{H_d}^2`$ are the soft scalar Higgs masses, and $`B\mu `$ is the bilinear scalar Higgs coupling.
(3) We choose to consider the following set of (GUT scale) parameters to be free, independent, and fundamental:
$$\{a_i\}=\{m_0,M_{1/2},A_0,B_0,\mu _0\}.$$
(4)
(4) All observables, including the $`Z`$ boson mass, are then reinterpreted as functions of the fundamental parameters $`a_i`$, and the sensitivity of the weak scale to small fractional variations in these parameters is measured by the sensitivity coefficients
$$c_i\equiv \left|\frac{\partial \mathrm{ln}m_Z^2}{\partial \mathrm{ln}a_i}\right|.$$
(5)
(5) Finally, we form the fine-tuning parameter
$$c=\mathrm{max}\{c_i\},$$
(6)
which is taken as a measure of the naturalness of point $`𝒫`$, with large $`c`$ corresponding to large fine-tuning.
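A minimal numerical sketch of this prescription is given below. It estimates the coefficients $`c_i`$ of Eq. (5) by central finite differences in the logarithms of the inputs; the function `m_z_squared` is a hypothetical placeholder standing in for the full machinery (RG running plus electroweak symmetry breaking) described in step (2).

```python
# Sketch only: finite-difference estimate of c_i = |d ln mZ^2 / d ln a_i|
# and of c = max_i c_i.  `m_z_squared(params)` is a hypothetical stand-in
# for the complete spectrum calculation; it is not implemented here.
import math

def sensitivity_coefficients(m_z_squared, params, rel_step=1e-3):
    """params: dict of GUT-scale inputs, e.g. {m0, M12, A0, B0, mu0} in GeV."""
    c = {}
    for name, value in params.items():
        up, down = dict(params), dict(params)
        up[name] = value * (1.0 + rel_step)
        down[name] = value * (1.0 - rel_step)
        # d ln mZ^2 / d ln a_i  ~  [ln mZ^2(+) - ln mZ^2(-)] / (2 * rel_step)
        dlog = math.log(m_z_squared(up)) - math.log(m_z_squared(down))
        c[name] = abs(dlog / (2.0 * rel_step))
    return c, max(c.values())
```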
As is clear from the description above, several subjective choices have been made, as they must be in any definition of naturalness. The choice of minimal supergravity in step (1), and particularly the assumption of a universal scalar mass, plays a crucial role. Deviations from this assumption will be considered below.
The choice of fundamental parameters in step (3) is also important and varies throughout the literature. An appealingly simple choice (see, e.g., Ref. ) is $`\{a_i\}=\{\mu \}`$, where $`\mu `$ is to be evaluated at the weak scale. This is equivalent to using $`\mu ^2`$ as a fine-tuning measure, since Eqs. (2) and (5) imply $`c_\mu =4\mu ^2/m_Z^2`$. While generally adequate, this definition is insensitive to large fine-tunings in the function $`f`$ of Eq. (2), as we will see below; such fine-tunings are accounted for in the more sophisticated choice of Eq. (4).
The top quark Yukawa $`Y_t`$ (sometimes along with other standard model parameters, such as the strong coupling) is included among the fundamental parameters in some studies and not in others . This choice typically attracts little comment, and attitudes toward it are at best ambivalent . This ambiguity reflects, perhaps, a diversity of prejudices concerning the fundamental theory of flavor. It is important to note, however, that unlike the parameters of Eq. (4), $`Y_t`$ is not expected to be related to supersymmetry breaking and is, in some sense, now measured, as it is strongly correlated with the top quark mass $`m_t`$. For these reasons, we find it reasonable to assume that in some more fundamental theory, $`Y_t`$ is fixed to its measured value in a flavor sector separate from the supersymmetry breaking sector, and we therefore do not include it among the $`a_i`$. This choice is critical for our conclusions, as will be discussed below.
In step (5), various other choices are also possible. For example, the $`c_i`$ may be combined linearly or in quadrature; we follow the most popular convention. In other prescriptions, the $`c_i`$ are combined after first dividing them by some suitably defined average $`\overline{c}_i`$ to remove artificial appearances of fine-tuning . We have not done this, but note that such a normalization procedure typically reduces the fine-tuning measure and would only strengthen our conclusions.
Given the prescription for measuring naturalness described above, we may now present our results. In Fig. 1, contours of constant $`c`$, along with squark mass contours, are presented for $`\mathrm{tan}\beta =10`$. Moving from low to high $`m_0`$, the contours are determined successively by $`c_{\mu _0}`$, $`c_{M_{1/2}}`$ and $`c_{m_0}`$. The naturalness requirement $`c<25`$ ($`c<50`$) allows regions of parameter space with $`m_0\approx 2\mathrm{TeV}`$ ($`2.4\mathrm{TeV}`$). More importantly, regions with $`m_0\stackrel{>}{}2\mathrm{TeV}`$, where all squarks and sleptons have masses well above 1 TeV, are as natural as the region with $`(m_0,M_{1/2})\stackrel{<}{}(1000\mathrm{GeV},400\mathrm{GeV})`$, where squark masses are below 1 TeV.
The naturalness of multi-TeV $`m_0`$, though perhaps surprising, may be simply understood as a consequence of a “focus point” in the RG behavior of $`m_{H_u}^2`$ , which renders its value at $`M_{\mathrm{weak}}`$ highly insensitive to its value in the ultraviolet. Note that for moderate and large $`\mathrm{tan}\beta `$, Eq. (2) implies that $`m_Z^2`$ is insensitive to $`m_{H_d}^2`$ and is determined primarily by $`m_{H_u}^2`$.
Consider any set of minimal supergravity input parameters. These generate a particular set of RG trajectories, $`m_i^2|_\mathrm{p}(t),M_i|_\mathrm{p}(t),A_i|_\mathrm{p}(t),\mathrm{}`$, where $`t\mathrm{ln}(Q/M_{\mathrm{GUT}})`$ and $`Q`$ is the renormalization scale. Now consider another set of boundary conditions that differs from the first by shifts in the scalar masses. The new scalar masses $`m_i^2=m_i^2|_\mathrm{p}+\delta m_i^2`$ satisfy the RG equations
$$\frac{d}{dt}m_i^2\sim \frac{1}{16\pi ^2}\left[-g^2M_{1/2}^2+Y^2A^2+\sum _jY^2m_j^2\right]$$
(7)
at one-loop, where positive numerical coefficients have been omitted, and the sum is over all chiral fields $`\varphi _j`$ interacting with $`\varphi _i`$ through the Yukawa coupling $`Y`$. However, because the $`m_i^2|_\mathrm{p}`$ are already a particular solution to these RG equations, the deviations $`\delta m_i^2`$ obey the homogeneous equations
$$\frac{d}{dt}\delta m_i^2\sim \frac{1}{16\pi ^2}\sum _jY^2\delta m_j^2.$$
(8)
Such equations are easily solved. Assume for the moment that the only large Yukawa coupling is $`Y_t`$, i.e., $`\mathrm{tan}\beta `$ is not extremely large. Then $`\delta m_{H_u}^2`$ is determined from
$$\frac{d}{dt}\left[\begin{array}{c}\delta m_{H_u}^2\\ \delta m_{U_3}^2\\ \delta m_{Q_3}^2\end{array}\right]=\frac{Y_t^2}{8\pi ^2}\left[\begin{array}{ccc}3& 3& 3\\ 2& 2& 2\\ 1& 1& 1\end{array}\right]\left[\begin{array}{c}\delta m_{H_u}^2\\ \delta m_{U_3}^2\\ \delta m_{Q_3}^2\end{array}\right],$$
(9)
where $`Q_3`$ and $`U_3`$ denote the third generation squark SU(2) doublet and up-type singlet representations, respectively. The solution corresponding to the universal initial condition $`\delta m_0^2(1,1,1)^T`$ is
$$\left[\begin{array}{c}\delta m_{H_u}^2\\ \delta m_{U_3}^2\\ \delta m_{Q_3}^2\end{array}\right]=\frac{\delta m_0^2}{2}\left\{\left[\begin{array}{c}3\\ 2\\ 1\end{array}\right]\mathrm{exp}\left[\int _0^t\frac{6Y_t^2}{8\pi ^2}𝑑t^{}\right]-\left[\begin{array}{c}1\\ 0\\ -1\end{array}\right]\right\}.$$
(10)
For $`t`$ and $`Y_t`$ such that $`\mathrm{exp}\left[\frac{6}{8\pi ^2}_0^tY_t^2𝑑t^{}\right]=1/3`$, $`\delta m_{H_u}^2=0`$, i.e., $`m_{H_u}^2`$ is independent of $`\delta m_0^2`$.
The RG evolution of $`m_{H_u}^2`$ in minimal supergravity is shown for several values of $`m_0`$ in Fig. 2. As expected, the RG curves exhibit a focus (not a fixed) point, where $`m_{H_u}^2`$ is independent of its ultraviolet value. Remarkably, however, for the physical top mass of $`m_t175\mathrm{GeV}`$, the focus point is very near the weak scale. Thus, the weak scale value of $`m_{H_u}^2`$ and, with it, the fine-tuning parameter $`c`$ are highly insensitive to $`m_0`$. If the particular solution is natural (say, with all input parameters near the weak scale), the new solution, even with very large $`m_0`$, is also natural.
We have also checked numerically that the focusing effect persists even for very large values of $`\mathrm{tan}\beta `$. Indeed, in the limit $`Y_t=Y_bY_\tau `$, Eq. (8) can be similarly solved analytically, and one finds that focusing occurs for $`\mathrm{exp}\left[\frac{7}{8\pi ^2}_0^tY_t^2𝑑t^{}\right]=2/9`$. For the experimentally preferred range of top masses, the focus point is again tantalizingly close to $`M_{\mathrm{weak}}`$ .
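The focusing behavior of Eqs. (9)–(10) can be checked directly with a few lines of code. The sketch below integrates the homogeneous system with a simple fourth-order Runge–Kutta stepper, assuming for illustration a constant $`Y_t`$ (in the full theory $`Y_t`$ runs), and verifies that $`\delta m_{H_u}^2`$ vanishes where the exponential factor equals $`1/3`$, independently of $`\delta m_0^2`$.

```python
# Illustrative check of the focus point of Eqs. (9)-(10): constant Y_t,
# RK4 integration from t = 0 (GUT scale) towards negative t (infrared).
import numpy as np

M = np.array([[3., 3., 3.],
              [2., 2., 2.],
              [1., 1., 1.]])

def rhs(delta_m2, Yt):
    return (Yt**2 / (8.0 * np.pi**2)) * M @ delta_m2

def integrate(dm0_sq=1.0, Yt=1.0, t_end=-40.0, n=4000):
    dt = t_end / n
    y = dm0_sq * np.ones(3)            # universal boundary condition at t = 0
    t, history = 0.0, [(0.0, y.copy())]
    for _ in range(n):
        k1 = rhs(y, Yt); k2 = rhs(y + 0.5*dt*k1, Yt)
        k3 = rhs(y + 0.5*dt*k2, Yt); k4 = rhs(y + dt*k3, Yt)
        y = y + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
        t += dt
        history.append((t, y.copy()))
    return history

for dm0_sq in (1.0, 4.0, 16.0):
    hist = integrate(dm0_sq)
    # focus point for Y_t = 1: exp(6 t / 8 pi^2) = 1/3
    t_focus = 8.0 * np.pi**2 * np.log(1.0 / 3.0) / 6.0
    t_arr = np.array([t for t, _ in hist])
    y_arr = np.array([y for _, y in hist])
    i = np.argmin(np.abs(t_arr - t_focus))
    print(f"dm0^2 = {dm0_sq:5.1f}:  delta m_Hu^2 at focus = {y_arr[i, 0]: .3e}")
```

The printed $`\delta m_{H_u}^2`$ values are (numerically) zero for all three choices of $`\delta m_0^2`$, which is just the statement made after Eq. (10).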
The naturalness of multi-TeV $`m_0`$ has important implications for collider searches. Although $`m_{H_u}^2`$ is focused to the weak scale, all other soft masses remain of order $`m_0`$. From Eqs. (8) and (10), we find that for $`m_0M_{1/2},A_0`$, the physical masses of squarks, sleptons, and heavy Higgs scalars are well-approximated by
$`\stackrel{~}{t}_R:\sqrt{1/3}\,m_0`$ $`\text{All other }\stackrel{~}{q},\stackrel{~}{\ell }:m_0`$
$`\stackrel{~}{t}_L,\stackrel{~}{b}_L:\sqrt{2/3}\,m_0`$ $`H^\pm ,A,H^0:m_0.`$ (11)
Exact values of $`m_{\stackrel{~}{u}_L}`$ are presented in Fig. 1. All squarks, sleptons, and heavy Higgs scalars may therefore have masses $`\stackrel{>}{}1\text{–}2\mathrm{TeV}`$, and may be beyond the reach of the Large Hadron Collider (LHC) and proposed linear colliders. The discovery of such heavy scalars then requires some even more energetic facility, such as the envisioned muon or very large hadron colliders.
As may be seen from Fig. 1, however, fine-tuning constraints do not allow multi-TeV $`M_{1/2}`$. A similar conclusion applies to $`\mu `$, as may be seen in Fig. 3. We therefore expect all gauginos and Higgsinos to be within the kinematic reach of the LHC. Note that some regions of low $`\mu `$ are unnatural. In these regions, large cancellations in the function $`f`$ of Eq. (2) occur, and the simple definition $`c\mu ^2`$ is inadequate.
In addition to the gauginos and Higgsinos, the lightest Higgs boson is, of course, still required to be light. Contours of lightest Higgs mass $`m_h`$ are also presented in Fig. 3. Very heavy top and bottom squarks increase $`m_h`$ through radiative corrections: for low $`M_{1/2}`$, $`m_h`$ increases by roughly $`6`$ GeV as $`m_0`$ increases from 500 GeV to 2 TeV. However, in the multi-TeV $`m_0`$ scenario, naturalness requires $`A_0\sim M_{\mathrm{weak}}`$ (see below), and so left-right squark mixing is suppressed. The upper bound on $`m_h`$ in Fig. 3 is thus approximately $`120\mathrm{GeV}`$, well below limits achieved for TeV squarks with maximal left-right mixing, and within the 3-5$`\sigma `$ discovery range of Higgs searches at the Tevatron with luminosity $`10\text{–}30\mathrm{fb}^{-1}`$ .
The focus point analysis presented above (for small $`Y_b`$) relied heavily on the universality of the $`H_u`$, $`U_3`$ and $`Q_3`$ soft masses. It is not hard to show, however, that GUT scale boundary conditions of the form $`(m_{H_u}^2,m_{U_3}^2,m_{Q_3}^2)=(1,1x,1+x)`$, for any $`x`$, also exhibit the focus point behavior. With respect to the other supersymmetry breaking parameters, the focus point is fairly robust. The mechanism is independent of all other scalar masses. Also, in the analysis above, any natural particular solution would do. Arbitrary and non-universal gaugino masses and trilinear couplings of order $`M_{\mathrm{weak}}`$ are therefore allowed. (Similarly, deviations in $`m_{H_u}^2`$, $`m_{U_3}^2`$, and $`m_{Q_3}^2`$ of order $`M_{\mathrm{weak}}^2`$ do not destabilize the focus point.) Note, however, that multi-TeV gaugino masses and $`A`$ parameters are not allowed. The required hierarchy between the scalar masses and the gaugino mass, $`A`$, and $`\mu `$ parameters may result from an approximate U(1)<sub>R+PQ</sub> symmetry or from the absence of singlet $`F`$ terms . $`B\mu `$ may also be suppressed by such a symmetry, and so leads to an experimentally viable scenario with naturally large $`\mathrm{tan}\beta m_{H_d}^2/(B\mu )`$, which is typically difficult to realize .
Although the focus point mechanism depends on a relation between $`m_t`$ and $`\mathrm{ln}(\frac{M_{\mathrm{GUT}}}{M_{\mathrm{weak}}})`$, it is not extraordinarily sensitive to these values. The focus point is still near the weak scale if $`m_t`$ is varied within its experimental uncertainty of 5 GeV, and, in fact, natural regions with multi-TeV $`m_0`$ are also possible if the high scale is raised to $`10^{18}\mathrm{GeV}`$ .
We stress, however, that if $`Y_t`$ is included among the free and fundamental parameters, multi-TeV $`m_0`$ would be considered unnatural. For example, for $`\mathrm{tan}\beta =10`$ and $`A_0=0`$, $`c_{Y_t}<25`$ (50) corresponds to $`m_0\stackrel{<}{}500\mathrm{GeV}`$ (800 GeV) . We have presented above our rationale for not including $`Y_t`$ among the $`a_i`$, although a definitive resolution of this issue most likely requires an understanding of the fundamental theory of flavor.
In conclusion, for moderate and large $`\mathrm{tan}\beta `$, multi-TeV scalars are natural in minimal supergravity. In view of this result, the discovery of squarks, sleptons, and heavy Higgs scalars may be extremely challenging even at the LHC. In addition, it is not surprising that these scalars have so far escaped detection, as present bounds are far from excluding most of the natural parameter space. Finally, it is tempting to speculate that what appears to be an accidental conspiracy between $`m_t`$ and the ratio of high to weak scales may find some fundamental explanation. If gauginos and Higgsinos are discovered, but all supersymmetric scalars escape detection at the LHC, the preservation of the naturalness motivation for supersymmetry, as currently understood, will require either an explanation of large cancellations between supersymmetry breaking soft masses at the weak scale, or the above scenario with a top mass fixed to be near 175 GeV. The latter possibility is, in our view, far more compelling and is supported by experimental data.
Acknowledgments — We are grateful to K. Agashe, M. Drees, and L. Hall for correspondence and conversations, and to the Aspen Center for Physics for hospitality. This work was supported in part by DOE under contracts DE–FG02–90ER40542 and DE–AC02–76CH03000, by the NSF under grant PHY–9513835, through the generosity of Frank and Peggy Taplin (JLF), and by a Marvin L. Goldberger Membership (TM).
# Reply to “Comment on ‘Macroscopic Equation for the Roughness of Growing Interfaces in Quenched Disorder’ ”
Braunstein et al. reply: In a Comment on the recent paper , López et al. obtained analytically the short-time behavior of the directed percolation depinning (DPD) model . Their result explains the behavior of the temporal derivative of the interface width (DSIW) for all $`q`$ up to a time $`t\simeq \text{e}^{-2}`$. We argue that the failure to reproduce the early-time regime up to the point where the correlations are generated ($`t\simeq 1`$ at the depinning transition) arises because the density of active sites of the interface is not a constant $`p`$; this density depends on time, as we show below. At time $`t`$ a site $`i`$ of a one-dimensional lattice of size $`L`$ is chosen at random with probability $`1/L`$. Let us denote by $`h_i(t)`$ the height of the $`i`$-th generic site at time $`t`$. The set of $`\{h_i,i=1,\dots ,L\}`$ defines the interface between “wet” and “dry” cells. We shall denote by $`F_i=F_i(h_i+1)`$ the activity of the cell just above the $`i`$-th site of the interface. If the cell $`(i,h_i+1)`$ is active, $`F_i=1`$ (unblocked); otherwise $`F_i=0`$ (blocked). The time evolution for the probability of active sites $`f(F_i=1,t)\equiv f(t)`$ just above the interface in a time step $`\delta t=1/L`$ is
$$f(t+\delta t)=\frac{p}{L}f(t)+\left(1-\frac{1}{L}\right)f(t).$$
(1)
This equation takes into account the probability that a cell above the interface remains active after a time step $`\delta t`$. The first term takes into account the probability of growth $`f/L`$ in the $`i`$-th column times the probability $`p`$ that the new cell of the interface is active. The second term is the probability that no growth occurs in the $`i`$-th column when the cell is active. Taking the limit $`\delta t\to 0`$ in Eq. (1) and solving the equation, with initial condition $`f(0)=p`$, we obtain (writing $`q=1-p`$)
$$f(t)=p\,\text{e}^{-qt}.$$
(2)
Notice that $`f(t)`$ is the interface activity density (IAD), i.e. $`f(t)=\{\langle F_i\rangle \}`$, where the brackets (braces) denote averages over the lattice (realizations). Notice that $`f(t)`$ is close to $`p`$ until $`t\simeq \text{e}^{-2}`$, where the result of López et al. holds. To obtain a more realistic description of the DSIW up to the time where the correlations are generated ($`t\simeq 1`$) it is necessary to take into account the temporal dependence of the IAD. Let us consider a growth model in a system of size $`L`$ with density $`f(t)`$ of active cells in the early time regime. In this regime the lateral correlations are negligible. Assuming independence between $`F_i`$ and $`h_i`$, the time evolution for the probability of having a column with height $`h`$ at time $`t`$ is given by
$$P(h,t+\delta t)=P(h-1,t)\frac{f(t)}{L}+P(h,t)\left(1-\frac{f(t)}{L}\right).$$
(3)
Taking the limit $`\delta t\to 0`$ we obtain the master equation for the probability. Using the generating function of moments, one can calculate the DSIW:
$$\frac{dw^2}{dt}=f(t).$$
(4)
For $`t\simeq 1`$ horizontal correlations are generated and Eq. (2) breaks down. From numerical simulations we could check that the hypothesis assumed to derive Eq. (3) holds in the neighborhood of the criticality and in the pinned phase, but it breaks down for $`q\ll q_c`$. However, these values of $`q`$ are not interesting from an experimental point of view . A comparison of Eq. (4) with numerical simulations of the DPD model is presented in Fig. 1 for the critical value. We can see that until $`t\simeq \text{e}^{-2}`$ the analytic result of Eq. (4) and the one obtained by López et al. coincide with the numerical results for the DSIW. This is because in this regime $`f(t)\simeq p`$. As time goes on, $`f(t)`$ decays and the hypothesis of López et al. no longer holds, as can be seen in this figure. However, our analytical result predicts the DSIW up to $`t\simeq 1`$.
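The early-time prediction can also be illustrated with a simple Monte Carlo sketch of the independent-column picture used above: columns grow with the time-dependent activity of Eq. (2), and the measured squared width follows $`w^2(t)=(p/q)(1-\text{e}^{-qt})`$, i.e. Eq. (4). The value of $`p`$ below is an arbitrary illustrative choice, not the depinning threshold of the DPD model.

```python
# Mean-field sketch (not the full DPD model): L independent columns; at each
# microstep dt = 1/L a random column advances with probability f(t) = p*exp(-q*t).
import numpy as np

rng = np.random.default_rng(0)

def simulate(p=0.6, L=10000, t_max=3.0, n_realizations=10):
    q = 1.0 - p
    times = np.linspace(0.0, t_max, 60)
    w2 = np.zeros_like(times)
    for _ in range(n_realizations):
        h = np.zeros(L)
        t, idx = 0.0, 0
        while t < t_max:
            i = rng.integers(L)
            if rng.random() < p * np.exp(-q * t):   # active-site density f(t)
                h[i] += 1.0
            t += 1.0 / L
            while idx < len(times) and times[idx] <= t:
                w2[idx] += h.var()
                idx += 1
    return times, w2 / n_realizations

p, q = 0.6, 0.4
times, w2 = simulate(p=p)
w2_theory = (p / q) * (1.0 - np.exp(-q * times))   # integral of Eq. (4) with Eq. (2)
for t, a, b in zip(times[::10], w2[::10], w2_theory[::10]):
    print(f"t = {t:4.2f}   w^2 (MC) = {a:6.3f}   w^2 (Eq. 4) = {b:6.3f}")
```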
Finally, we argue that, contrary to what is claimed in the Comment of , Braunstein and Buceta’s formula does describe the macroscopic behavior of the interface, even though the solution is not exactly a matching between the early-time \[Eq. (4)\] and the asymptotic regimes.
## 1 Introduction
The Lyapunov spectrum is recognized as an important diagnostic of chaotic dynamical systems. As such, it has been studied intensely in the context of extended systems . It has been observed that in the thermodynamic limit the spectrum seems to approach a continuous density, and some theoretical studies have focused on this phenomenon . However, despite the large amount of available data, there is an unsatisfactory degree of understanding of the relation between the Lyapunov spectrum of extended systems, and their global or collective properties.
In connection with these problems, the recent study of presents an interesting development (see also for further results and references). In the context of molecular dynamics simulations, they find hydrodynamic, i.e., slow, long-wavelength behavior in the tangent space dynamics. Namely, they observe that the Lyapunov vectors associated with the Lyapunov exponents of small absolute value have ordered, wave-like structure, and that the exponents themselves follow an ordered pattern. Hydrodynamic behavior in phase space is of course present in every extended system with a continuous symmetry. In the models considered in the symmetries in question are translation and Galilei invariance, precisely those which enable the hydrodynamic description of a fluid in terms of the Navier-Stokes equations . However, it is for the first time that a similar phenomenon is observed in tangent space.
In this paper, we study theoretically the slow Lyapunov modes (vectors and exponents) of extended systems with translation invariance. We focus attention on a simplified model which shares the essential features with the more elaborate model of . This simplified model is constructed only in tangent space without an accompanying real space dynamics, and is based on a random matrix approximation. As has been often found before, in systems with strong chaos, qualitative features of the Lyapunov spectrum are well reproduced by approximating the tangent matrices by independent random matrices with appropriately chosen distributions . We prove several statements on the slow Lyapunov modes of this model in the thermodynamic limit, which show that in this limit the Lyapunov vectors and exponents are indeed well described as being hydrodynamic.
The basic reason for the existence of these hydrodynamic modes is evidently the translation invariance. Its presence dictates that the dynamics are indifferent to a uniform shift of all the particles (or their momenta), so that the associated Lyapunov vectors are decoupled from the rest of the dynamics, and the associated Lyapunov exponents vanish. We show that slowly growing large wavelength disturbances are nearly decoupled for the same reason, and use this property to show how the clean wave structure is obtained as a result of the orthogonalization procedure which involves all the faster growing Lyapunov vectors. It should be emphasized that the wave-like structure characterizes the Lyapunov vector at any given instant and is not an average property. Our arguments depend essentially on the local and hyperbolic character of the interactions, in addition to translation invariance. The absence of translation invariance has been recognized to ruin the hydrodynamic modes . In translation invariant anharmonic chains, the absence of short time hyperbolicity seems to ruin the hydrodynamic modes . We present theoretical arguments for the existence of hydrodynamic modes in the simplified model, which are complemented by numerical verifications. The outline of the paper is as follows. In section 2 the hydrodynamic phenomenology is described in some more detail, the definition of the random matrix model is presented and motivated, and the results are stated. They are derived in sections 35. Section 6 is devoted to numerical studies of the hydrodynamic properties of Lyapunov vectors.
## 2 Hydrodynamic behavior in tangent space
The systems studied in consist (among others) of a large set of disks moving in a two dimensional box $`\mathrm{\Omega }`$ with periodic boundary conditions (torus geometry), with elastic scattering. In this case, the phase space is $`4N`$-dimensional where $`N`$ is the number of disks. The Lyapunov vectors have $`4N`$ components which we label as
$$(\delta x_n,\delta y_n,\delta p_{x,n},\delta p_{y,n})\qquad 1\le n\le N,$$
with evident notation. To give them a geometrical meaning the components of the Lyapunov vectors are drawn in at the instantaneous position of the particles which carry a given specific index. That is, one constructs a vector field $`\stackrel{}{v}(t,\stackrel{}{x})`$ with values in $`𝐑^4`$, which are defined only at the instantaneous positions $`\stackrel{}{x}_n(t)`$ of the particles, for example
$$v_x(t,\stackrel{}{x}_n(t))=\delta x_n(t),$$
and similarly for the other components.
The vector fields of the slow Lyapunov vectors, defined as the Lyapunov vectors with small corresponding Lyapunov exponents, are very well approximated by the long wavelength eigenmodes of a ‘reverse wave equation’ in the domain $`\mathrm{\Omega }`$:
$$\partial _t^2\stackrel{}{v}(t,\stackrel{}{x})=-\frac{1}{N^2}\nabla ^2\stackrel{}{v}(t,\stackrel{}{x})$$
(1)
(note the unusual sign in front of $`^2`$). That is, the vectors look like long wavelength waves with, say, $`n`$ nodes in the $`x`$ direction and $`m`$ nodes in the $`y`$ direction, and the corresponding Lyapunov exponent is proportional to
$$\pm \frac{1}{N}\sqrt{\left(\frac{m}{L_x}\right)^2+\left(\frac{n}{L_y}\right)^2}.$$
Note that the translation modes—constant Lyapunov vectors with zero exponents, which are trivially present in any system with translation invariance, correspond to the special case $`m=n=0`$. This phenomenology was observed in simulations with widely varying parameters, such as aspect ratio, density, and the shape of the particles .
The tangent flow of the molecular dynamics system can be written as
$$_t\left(\begin{array}{c}\delta \stackrel{}{x}\\ \delta \stackrel{}{p}\end{array}\right)=G(t)\left(\begin{array}{c}\delta \stackrel{}{x}\\ \delta \stackrel{}{p}\end{array}\right),$$
(2)
where the components $`\delta \stackrel{}{x},\delta \stackrel{}{p}`$ are column vectors with $`N`$ entries each of which is a vector in $`𝐑^2`$. The quantity $`G(t)`$ is the action on the tangent space induced by the flow $`\mathrm{\Phi }(t)`$ of the dynamical system: If $`\psi _0`$ is the instantaneous state of the system then $`G(t)`$ is given by $`G(t)f=D\mathrm{\Phi }(t)_{\psi _0}f`$, where $`\mathrm{\Phi }(t)(\psi _0+\epsilon f)=\mathrm{\Phi }(t)\psi _0+\epsilon D\mathrm{\Phi }(t)_{\psi _0}f+O(\epsilon ^2)`$. The evolution operator of eq. (2) may be written formally as
$$U=𝖳\mathrm{exp}G(t)𝑑t.$$
(3)
We proceed to construct a simplified model of the tangent dynamics, by making a series of modifications and assumptions about the nature of $`G`$ and $`U`$.
We first replace the hard-core interaction with a short range ‘soft’ potential. In that case $`G`$ will have a block structure of the form
$$G(t)=\left(\begin{array}{cc}0& 1\\ A(t)& 0\end{array}\right),$$
(4)
where the symmetric $`N\times N`$ matrix $`A`$ depends on the instantaneous positions of the particles and couples only nearest neighbors. Since the interactions are purely repulsive, the flow is hyperbolic, which implies that $`A`$ should be taken positive.
At this stage one may note that if the matrix $`A(t)`$ in eq. (4) were replaced by the negative of the discrete Laplacian, the Lyapunov spectrum of $`G`$ would be precisely that described in eq. (1). However, the matrices $`A(t)`$ are in fact generated by chaotic dynamics, and therefore fluctuate rapidly. Furthermore, the particles in the gas rearrange in time, so that the positions of the non-zero elements in the matrix also evolve. In our study we concentrate on the first feature. That is, we show how tangent dynamics of the type (4) result in slow hydrodynamic modes in spite of the fluctuations in $`A(t)`$; the effects of particle rearrangement may in principle dealt with similarly, but need to be studied further.
The above discussion allows us to conclude that the matrices $`A(t)`$ should have non-zero elements only at those positions which are nonzero in the discrete Laplacian. Furthermore, momentum conservation implies that the sum of elements in any row and column of $`A`$ must vanish. This specifies completely the matrix structure of $`A`$, and it remains to model the time dependence of the off-diagonal non-zero elements of $`A`$. For this we invoke the hypothesis of strong chaos : The elements of $`A`$ may be treated as independent random processes, with a correlation time $`\tau `$ which is short with respect to other time scales of the system. It is commonly found that this approximation yields results which are in good qualitative agreement with those of the actual tangent flow.
With this in mind we model the evolution operator $`U`$ by a product of independent random matrices $`S_n`$
$$U=\prod _nS_n,$$
(5)
where
$$S_n\equiv 𝖳\mathrm{exp}\int _{(n-1)\tau }^{n\tau }G(t)\,dt.$$
(6)
During the time interval of length $`\tau `$, $`A`$, and therefore $`G`$, may be considered constant, so the simplest model for $`S`$ would be $`S=\left(\begin{array}{cc}1& \tau \\ \tau A& 1\end{array}\right)`$. However it is more convenient to correct this form by a second order term in $`\tau `$ in order to preserve the symplectic property which holds for $`U`$. We thus arrive at our model :
$$S=\left(\begin{array}{cc}1& \tau \\ \tau A& 1+\tau ^2A\end{array}\right).$$
(7)
The matrices $`A`$ are independent, and their off-diagonal elements are independent and identically distributed. The actual probability distribution of the off-diagonal elements can be chosen arbitrarily, subject to the constraint of uniform hyperbolicity, namely that the support of the distribution is strictly negative, and bounded away from zero.
The model as defined above makes sense in any space dimension, but for the sake of simplicity we study it in one dimension. There it bears similarity to the tangent dynamics of an anharmonic chain. However, in the latter case the matrices $`S`$ would be elliptic rather than hyperbolic. As we show below this is an essential ingredient in the mechanism for hydrodynamic modes, which are not present in the Lyapunov spectra of anharmonic chains . Unfortunately, we have not been able to find a model dynamics in real space whose tangent space dynamics would resemble that generated by the matrices of type (7). On the other hand our results do not use explicitly the dimensionality of the system and seem to be generalizable to higher dimensions.
Since the individual matrices are symplectic, the Lyapunov exponents of (5) come in pairs of equal absolute value and opposite signs. Translation and Galilei invariance imply the existence of two vanishing exponents. We concentrate our attention on the Lyapunov exponents $`\lambda _{N-1}`$ and $`\lambda _{N-2}`$ of smallest positive value, and the corresponding Lyapunov vectors $`v_{N-1}`$ and $`v_{N-2}`$.
Before we go on, it is necessary to make precise what we mean by Lyapunov vectors. As is well known, essentially all numerical methods for calculating the tangent space dynamics rely on repeated orthogonalization, and the Lyapunov vectors $`v_n`$ are defined as follows: one starts with an orthogonal matrix $`Q`$, multiplies it from the left with the tangent matrix (in the present case $`S`$) and decomposes the result as $`SQ=Q^{}R`$, where $`Q^{}`$ is orthogonal and $`R`$ is upper triangular. This procedure is iterated to yield a sequence of $`Q_t`$. The columns of the orthogonal matrices $`Q_t`$ are what we will call the Lyapunov vectors. The reader should note that these vectors are not the ones whose existence is proved in the multiplicative ergodic theorem.
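For concreteness, the following Python sketch implements this procedure for the model (7): random matrices $`A`$ of the form (8) are drawn (with an illustrative uniform distribution for the $`a_n`$, not a choice made in this paper), the symplectic step matrices $`S`$ are multiplied onto an orthogonal frame, and repeated QR decompositions yield the Lyapunov exponents and the columns of $`Q_t`$ discussed above.

```python
# Minimal sketch of the QR procedure for the random symplectic product (5)-(7).
# Parameters (N, tau, the distribution of the a_n) are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)

def random_A(N, a_min=0.5, a_max=1.5):
    """Random matrix of Eq. (8): weighted periodic 'Laplacian' with positive
    couplings a_n, so A is positive semi-definite and has zero row sums."""
    a = rng.uniform(a_min, a_max, size=N)      # a_n couples neighbouring sites
    A = np.zeros((N, N))
    for n in range(N):
        A[n, n] = a[n] + a[(n + 1) % N]
        A[n, (n + 1) % N] = -a[(n + 1) % N]
        A[n, (n - 1) % N] = -a[n]
    return A

def step_matrix(A, tau):
    """Symplectic tangent map S of Eq. (7)."""
    N = A.shape[0]
    I = np.eye(N)
    return np.block([[I, tau * I], [tau * A, I + tau**2 * A]])

def lyapunov(N=16, tau=0.1, n_steps=5000):
    Q = np.linalg.qr(rng.standard_normal((2 * N, 2 * N)))[0]
    log_r = np.zeros(2 * N)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(step_matrix(random_A(N), tau) @ Q)
        signs = np.sign(np.diag(R))            # keep a positive diagonal in R
        Q, R = Q * signs, R * signs[:, None]
        log_r += np.log(np.abs(np.diag(R)))
    return log_r / (n_steps * tau), Q          # exponents, Lyapunov vectors (columns)

exponents, Q = lyapunov()
# two exponents are (numerically) zero -- translation and Galilei invariance --
# and the next pair is small and nearly degenerate; the corresponding columns
# of Q should be dominated by long-wavelength Fourier components.
print(np.sort(np.abs(exponents))[:6])
```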
We can now state the main result of this paper. Existence of hydrodynamic Lyapunov modes: As the size $`N`$ of the matrices tends to infinity the exponents $`\lambda _{N-1}`$ and $`\lambda _{N-2}`$ as well as the vectors $`v_{N-1}`$ and $`v_{N-2}`$ are asymptotic to the exponents and vectors that would be obtained if the matrices $`A`$ were replaced everywhere by the negative of the discrete Laplacian (properly rescaled). The statement holds for the Lyapunov vectors, which are random objects, in probability.
The statement is spelled out only for two Lyapunov modes, which have nearly equal exponents, and where the deviation from hydrodynamic behavior is the smallest. However, as will become apparent from the arguments below, the result can be extended to a number of Lyapunov modes near the middle of the spectrum which is proportional to $`\sqrt{N}`$. Our numerical studies also indicate that this is in fact true.
As already explained, the basic reason for the existence of the hydrodynamic modes is translation invariance. However, this general observation is not sufficient, and the actual proof is not trivial. It depends on the random nature of successive matrices, i.e., on strong chaos. Our strategy will be to show the existence of hydrodynamic modes first in the spectrum of a single matrix of type $`A`$, i.e., a negative random Laplacian (sec. 3); then this will be used to show that such modes exist in the Lyapunov spectrum of non-symplectic products of type $`(1+A_n)`$ in sec. 4, which in turn will be used to show the same property for symplectic products in sec. 5.
## 3 Spectral properties of a single matrix
The matrix $`A`$ defined in section 2 takes in one dimension the explicit form
$$A=\left(\begin{array}{ccccccc}a_1+a_2& -a_2& 0& 0& \dots & 0& -a_1\\ -a_2& a_2+a_3& -a_3& 0& \dots & 0& 0\\ \dots & & & & & & \dots \\ -a_1& 0& 0& 0& \dots & -a_N& a_N+a_1\end{array}\right),$$
(8)
where the $`a_n`$ are positive identically distributed independent random numbers. The distribution of the $`a_n`$ is arbitrary, subject to the condition $`0<a_{\mathrm{min}}<a<a_{\mathrm{max}}`$ with $`a_{\mathrm{min}}<a_{\mathrm{max}}`$, and normalized such that $`\langle a^{-1}\rangle =1`$, for later convenience. The $`\langle \cdot \rangle `$ always denote expectation with respect to the probability distribution of the $`a`$. We are not going to assume that the width of the distribution is small.
The matrix $`A`$ may be written as a product
$$A=-\underline{\partial }\,𝒜\,\overline{\partial }$$
(9)
where $`\underset{¯}{}`$ and $`\overline{}`$ are the discrete derivatives whose action on a vector $`v𝐑_N`$ is
$$(\underline{\partial }v)_n=v_n-v_{n-1},\qquad (\overline{\partial }v)_n=v_{n+1}-v_n,$$
(10)
and $`𝒜`$ is a diagonal matrix with diagonal elements $`a_n`$. (The indices are extended periodically so that $`a_{N+1}\equiv a_1`$). If all the $`a_n`$ were equal to one, $`A`$ would reduce to minus the discrete Laplacian matrix, $`-\nabla ^2\equiv -\underline{\partial }\overline{\partial }`$.
We define the Fourier transform matrix $`F`$ with elements
$$F_{kn}=\frac{1}{\sqrt{N}}e^{i\frac{2\pi }{N}kn},$$
(11)
which is a unitary transformation taking $`𝐑^N`$ to $`\stackrel{~}{𝐂}^N`$, the subset of $`𝐂_N`$ (with standard basis vectors $`e_n`$) consisting of vectors $`\stackrel{~}{v}`$ for which $`v_{-k}=v_k^{*}`$, which is an $`N`$ dimensional vector space over $`𝐑`$. The components of $`A`$ in the new basis are
$$\stackrel{~}{A}_{kl}\equiv (FAF^{\dagger })_{kl}=\mu _k^{*}(\langle a\rangle \delta _{k,l}+\stackrel{~}{a}_{k-l})\mu _l,$$
(12)
where $`\mu _k=1-\mathrm{exp}\left(\frac{2\pi i}{N}k\right)`$, and $`\stackrel{~}{a}`$ is related to the Fourier transform of $`a`$ considered as a vector in $`𝐑^N`$ by
$$\stackrel{~}{a}=N^{-1/2}F(\stackrel{}{a}-\langle \stackrel{}{a}\rangle ).$$
(13)
The random variables $`\stackrel{~}{a}_k`$ are centered, and as sums of independent random numbers their ‘single-point’ distribution is nearly Gaussian with variance
$$\langle |\stackrel{~}{a}|^2\rangle =\frac{\langle a^2\rangle -\langle a\rangle ^2}{N},$$
(14)
so that they are typically small, of order $`𝒪(N^{-1/2})`$. The joint distribution is not Gaussian.
Note that $`\mu _0=0`$, so that row $`0`$ and column $`0`$ of $`\stackrel{~}{A}`$ are zero, with the translation vector $`e_0`$ being trivially a zero eigenvector. We define the slow subspace
$$V_\mathrm{s}=\mathrm{Span}(\{e_0,e_1,e_{-1}\})\subset \stackrel{~}{𝐂}^N,$$
(15)
and its orthogonal complement, the fast subspace $`V_\mathrm{f}`$. We will consider often below the block decomposition of $`\stackrel{~}{A}`$ and other matrices into the fast and slow subspaces, e.g.,
$$\stackrel{~}{A}=\left(\begin{array}{cc}A_{\mathrm{ff}}& A_{\mathrm{fs}}\\ A_{\mathrm{sf}}& A_{\mathrm{ss}}\end{array}\right).$$
(16)
Note that $`V_\mathrm{f}`$ contains slow as well as fast modes.
The block $`A_{\mathrm{ss}}`$ has small norm of order $`𝒪(N^{-2})`$, and the off-diagonal blocks have norm of order $`𝒪(N^{-1})`$. However, there are more specific properties of $`A`$ which are needed to establish the existence of hydrodynamic eigenmodes. Consider the eigenvalue problem $`Av=\lambda v`$. Letting $`v=\underline{\partial }u`$, and using the representation (9) gives an equation for $`u`$
$$-\nabla ^2u=\lambda 𝒜^{-1}u.$$
(17)
It is convenient to proceed by writing eq. (17) in Fourier component form
$$(|\mu _k|^2-\lambda )\stackrel{~}{u}_k=\lambda \sum _q\stackrel{~}{b}_{k-q}\stackrel{~}{u}_q,$$
(18)
with the $`\stackrel{~}{b}_k`$ bearing a relation to $`a_n^1`$ analogous to that between $`\stackrel{~}{a}_k`$ and $`a_n`$, namely
$$\stackrel{~}{b}_k=N^{-1/2}\sum _nF_{kn}\left(\frac{1}{a_n}-1\right).$$
(19)
Since $`\langle \frac{1}{a^2}\rangle <1/a_{\mathrm{min}}^2`$ is $`𝒪(1)`$ we find that the $`\stackrel{~}{b}_k`$ are $`𝒪(N^{-1/2})`$ for the same reason that the $`\stackrel{~}{a}_k`$ are.
We claim that given a fixed $`m`$, and for $`N\mathrm{}`$ the system (18) has two linearly independent solutions $`u^{(\pm m)},\lambda _{\pm m}`$ such that
$$\frac{1}{|u_m^{(\pm m)}|}\sum _{|k|\ne m}|u_k^{(\pm m)}|=𝒪(N^{-1/2}),$$
(20)
and
$$\left|\frac{\lambda _{\pm m}}{|\mu _m|^2}-1\right|=𝒪(N^{-1/2}).$$
(21)
We justify the claim by showing that eqs. (20) and (21) are consistent with the eigenvalue equation (18). For this we rewrite (18) as
$$\stackrel{~}{u}_k^{(m)}=\frac{\lambda _m}{|\mu _k|^2-\lambda _m}\sum _q\stackrel{~}{b}_{k-q}\stackrel{~}{u}_q^{(m)}.$$
(22)
We assume that (2021) hold; this implies that the sum over $`q`$ in (22) is dominated by the two terms with $`q=\pm m`$, that is,
$$\stackrel{~}{u}_k^{(m)}=\frac{|\mu _m|^2}{|\mu _k|^2-|\mu _m|^2}(\stackrel{~}{b}_{k-m}\stackrel{~}{u}_m^{(m)}+\stackrel{~}{b}_{k+m}\stackrel{~}{u}_{-m}^{(m)}),\qquad \text{for }|k|\ne m\text{.}$$
(23)
On substituting this expression in the left-hand-side of (20) the sum over $`k`$ is observed to be local, in the sense that it is dominated by terms with $`|k|m`$, where $`|\mu _k|^2\left(\frac{2\pi k}{N}\right)^2`$. Since $`\stackrel{~}{b}_k`$ is $`𝒪(N^{1/2})`$, assumption (20) is verified. On the other hand, using (20) in (22) for $`k=m`$ gives
$$\stackrel{~}{u}_m^{(m)}=\frac{\lambda _m}{|\mu _m|^2-\lambda _m}(\stackrel{~}{b}_0\stackrel{~}{u}_m^{(m)}+\stackrel{~}{b}_{2m}\stackrel{~}{u}_{-m}^{(m)}).$$
(24)
Since $`\stackrel{~}{b}_0`$ and $`\stackrel{~}{b}_{2m}`$ are $`O(N^{-1/2})`$ it follows that $`\lambda _m/(|\mu _m|^2-\lambda _m)=O(N^{1/2})`$ verifying (21), which shows that (20–21) are indeed consistent with (18).
In terms of the original variables $`v`$, eq. (23) reads
$$\stackrel{~}{v}_k^{(m)}=\frac{\mu _k^{*}\mu _m}{|\mu _k|^2-|\mu _m|^2}(\stackrel{~}{b}_{k-m}\stackrel{~}{v}_m^{(m)}+\stackrel{~}{b}_{k+m}\stackrel{~}{v}_{-m}^{(m)}),\qquad \text{for }|k|\ne m\text{,}$$
(25)
so that the norm of $`v_{\perp }^{(m)}`$, the component of $`v^{(m)}`$ orthogonal to $`e_{\pm m}`$, is small,
$$\|v_{\perp }^{(m)}\|^2\sim \frac{1}{N}\sum _{|k|\ne m}\frac{k^2m^2}{(k^2-m^2)^2}=O\left(\frac{1}{N}\right).$$
In words, these eigenvectors are almost pure Fourier modes, i.e., eigenvectors of the discrete Laplacian.
For further developments we also need to show these modes are the only ones with eigenvalues of order $`𝒪(N^2)`$.
This is established easily by noting that the sharp cutoff on the probability distribution of the $`a`$ implies that every realization $`A`$ satisfies the bounds
$$a_{\mathrm{min}}(-\nabla ^2)<A<a_{\mathrm{max}}(-\nabla ^2),$$
(26)
and then by the minimax principle it follows that the $`p`$th eigenvalue of $`A`$ is larger than $`a_{\mathrm{min}}`$ times the $`p`$th eigenvalue of $`-\nabla ^2`$ (sorting the eigenvalues of both matrices in increasing order).
The results of this section can be summarized using the decomposition of $`\stackrel{~}{𝐂}^N`$ into slow and fast subspaces defined above. We have shown there exist small numbers $`\epsilon `$ and $`\lambda `$, and a number $`0<\alpha <1`$, such that the matrix $`\stackrel{~}{A}`$ can be block diagonalized,
$$\stackrel{~}{A}=RDR^T,RR^T=1,D=\left(\begin{array}{cc}D_\mathrm{f}& 0\\ 0& D_\mathrm{s}\end{array}\right),$$
(27)
with the off-diagonal blocks bounded by $`R_{\mathrm{sf}},R_{\mathrm{fs}}<\epsilon `$, and the diagonal blocks obeying
$$D_\mathrm{f}>\lambda >\alpha \lambda >D_\mathrm{s}\ge 0,$$
(28)
and furthermore
$$A_{\mathrm{ss}}<\alpha \lambda .$$
(29)
The orders of magnitude for $`\lambda `$ and $`\epsilon `$ are $`\epsilon =𝒪(N^{-1/2})`$ and $`\lambda =𝒪(N^{-2})`$, whereas $`\alpha \approx 1/4`$. However, to keep the discussion reasonably general we are not going to use these specific values in our arguments below. Rather, we will make statements regarding arbitrary matrices which satisfy the conditions (27–29).
Although this will not be used below, it is relevant to note that if we let $`m`$ vary, the small parameter in (20) and (21) becomes $`m/N^{1/2}`$. This means that we can expect a number of hydrodynamic eigenmodes which is proportional to $`\sqrt{N}`$. Another way to see this is related to the study of the vibrations of one-dimensional disordered lattices which are modeled precisely by the eigenmodes of matrices of type (8). There it is known that the localization length $`\xi `$ is proportional to $`\lambda _m^{-1}`$. Since $`\lambda _m\approx (2\pi m/N)^2`$ the localization length will reach $`N`$ when $`m=𝒪(N^{1/2})`$. Thus, again, we only expect wave-like modes when $`m<𝒪(N^{1/2})`$.
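The statements (20)–(21) are easy to check numerically. The short sketch below builds a single realization of $`A`$, diagonalizes it, and compares the lowest non-trivial eigenvalues and eigenvectors with the Laplacian values $`|\mu _m|^2`$ and the corresponding Fourier modes; the uniform distribution chosen for the $`a_n`$ is an illustrative assumption.

```python
# Single-matrix check of (20)-(21): lowest eigenvalues of a random A of
# Eq. (8) vs |mu_m|^2 = 4 sin^2(pi m / N), and Fourier-mode overlaps.
import numpy as np

rng = np.random.default_rng(2)
N = 256
a = rng.uniform(0.5, 1.5, size=N)
a *= np.mean(1.0 / a)                       # enforce <1/a> = 1 as in the text
A = np.zeros((N, N))
for n in range(N):
    A[n, n] = a[n] + a[(n + 1) % N]
    A[n, (n + 1) % N] = -a[(n + 1) % N]
    A[n, (n - 1) % N] = -a[n]

vals, vecs = np.linalg.eigh(A)              # ascending; vals[0] ~ 0 (translation)
k = np.arange(N)
for m in (1, 2, 3):
    lam = vals[2 * m - 1: 2 * m + 1]        # nearly degenerate +/- m pair
    mu2 = 4.0 * np.sin(np.pi * m / N) ** 2
    basis = np.array([np.cos(2 * np.pi * m * k / N),
                      np.sin(2 * np.pi * m * k / N)])
    basis /= np.linalg.norm(basis, axis=1, keepdims=True)
    # fraction of the eigen-pair lying in the span of the k = +/- m Fourier modes
    overlap = np.linalg.norm(basis @ vecs[:, 2 * m - 1: 2 * m + 1]) ** 2 / 2.0
    print(f"m={m}: lambda/|mu_m|^2 = {lam / mu2},  Fourier overlap = {overlap:.4f}")
```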
## 4 Products of matrices of the form $`1+\tau A`$
In this section we use the properties derived in section 3 to derive the existence of hydrodynamic modes in the Lyapunov spectrum of the product $`_n(1+\tau A_n)`$, where the matrices $`A_n`$ are independent realizations of the random matrix defined in eq. (8). Beside providing a step towards proving the existence of hydrodynamic modes in symplectic products, such a product may be regarded as the discrete approximation to a continuous tangent flow given by
$$U=𝖳\mathrm{exp}A(t)𝑑t.$$
(30)
\[Compare eqs. (3) and (4).\] Although this does not correspond to the tangent flow of a mechanical system, it is nonetheless the simplest example where hydrodynamic Lyapunov modes can be expected. For convenience of further analysis we absorb $`\tau `$ into the definition of $`A`$ and change to Fourier basis once and for all, so the problem becomes that of a product
$$\underset{n}{}(1+\stackrel{~}{A}_n).$$
(31)
Since the Lyapunov exponents of the slow part are expected to be smaller than the rest we aim at showing that the first $`N-3`$ Lyapunov vectors span a subspace $`L_\mathrm{f}`$ (of $`\stackrel{~}{𝐂}_N`$) which is almost orthogonal to $`V_\mathrm{s}`$ in the sense that for any two unit vectors $`u_\mathrm{f}\in L_\mathrm{f}`$ and $`v_\mathrm{s}\in V_\mathrm{s}`$ one has $`|u_\mathrm{f}\cdot v_\mathrm{s}|\ll 1`$. Although the subspace $`L_\mathrm{f}`$ changes after each step, we show below that the ‘almost orthogonality’ is propagated from step to step.
To show this, we propose the following scheme. Take an arbitrary vector $`uL_\mathrm{f}`$, whose components in $`V_\mathrm{f}`$ and $`V_\mathrm{s}`$ are $`u_\mathrm{f}`$ and $`u_\mathrm{s}`$ respectively, normalized so that $`u_\mathrm{f}=1`$, and assume that $`u_\mathrm{s}`$ is small. The action of $`1+A`$ generates a new normalized vector $`u^{}`$ by
$$u^{}=\frac{(1+A)u}{\|[(1+A)u]_\mathrm{f}\|},$$
(32)
where $`[]_\mathrm{f}`$ is the projection onto the $`\mathrm{f}`$-component. We would like to show that $`\|u_\mathrm{s}\|`$ remains small after repeated iteration of this process.
The block diagonalization (27) shows that the subspaces $`V_\mathrm{f}`$ and $`V_\mathrm{s}`$ are indeed almost invariant under the transformation $`\stackrel{~}{A}`$. However, in trying to apply this fact to the Lyapunov vectors of the product (31) we immediately encounter the danger that the small perturbations may accumulate. The basic problem is that a vector in $`V_\mathrm{s}`$ is contracted with respect to the ‘slowest’ direction in $`V_\mathrm{f}`$ by a factor of only $`1𝒪(\lambda )`$ (as can be seen from the bounds on $`D_\mathrm{f}`$ and $`D_\mathrm{s}`$), whereas the perturbations which tilt a vector in $`L_\mathrm{f}`$ with respect to $`V_\mathrm{f}`$ are of order $`\epsilon `$, which is the typical size of the off-diagonal blocks \[cf. eqs. (27)–(29)\] and since we are interested in the case $`\lambda \epsilon `$ this contraction is not strong enough to overcome the perturbation.
This order of magnitude argument can be made explicit by constructing a series of matrices with the properties given by eqs. (27)–(29), which take a vector in $`V_\mathrm{f}`$ and rotate it such that the outcome is a vector which has an angle with $`V_\mathrm{f}`$ of order 1. This counter-example is given in appendix A.
An essential ingredient in the construction of this counter-example is that the matrix $`\stackrel{~}{A}`$ has to be chosen specifically given $`u`$ which in turn depends on former realizations, in violation of the independence assumption. In other words, although such a ‘bad’ sequence is possible one naturally expects that this is an event with very low probability. Typically the perturbations to $`u_\mathrm{s}`$ generated by the off-diagonal part of the matrices $`R`$ do not have the same direction, and should serve to cancel one another. Therefore the statement one can hope to show is that in the sequence generated by iteration of eq. (32), the probability that $`u_\mathrm{s}>C\epsilon `$ for some fixed $`C`$ is very small, as was shown for a similar example in . Here we will only prove the weaker statement that the variance $`u_\mathrm{s}^2`$ is $`𝒪(\epsilon ^2)`$, and take that as an indication that the probabilistic statement is correct, since the behavior of higher moments can be treated in an analogous manner.
To prove this statement we look at the $`s`$-component of eq. (32),
$$u_\mathrm{s}^{}=\frac{A_{\mathrm{sf}}u_\mathrm{f}+(1+A_{\mathrm{ss}})u_\mathrm{s}}{\|(1+A_{\mathrm{ff}})u_\mathrm{f}+A_{\mathrm{fs}}u_\mathrm{s}\|}.$$
(33)
The quantity $`\|u_\mathrm{s}^{}\|^2`$ is a sum of three terms $`E_1+E_2+E_3`$:
$$E_1=\frac{\|A_{\mathrm{sf}}u_\mathrm{f}\|^2}{\ell ^2},\qquad E_2=2\frac{A_{\mathrm{sf}}u_\mathrm{f}\cdot (1+A_{\mathrm{ss}})u_\mathrm{s}}{\ell ^2},\qquad E_3=\frac{\|(1+A_{\mathrm{ss}})u_\mathrm{s}\|^2}{\ell ^2},$$
(34)
where $`\ell =\|(1+A_{\mathrm{ff}})u_\mathrm{f}+A_{\mathrm{fs}}u_\mathrm{s}\|`$.
To bound these terms we first need a lower bound on the denominator $`\ell `$. Let $`v_\mathrm{f}=R_{\mathrm{ff}}^Tu_\mathrm{f}+R_{\mathrm{fs}}^Tu_\mathrm{s}`$, and define $`d`$ by
$$\|D_\mathrm{f}v_\mathrm{f}\|\equiv d\,\|v_\mathrm{f}\|.$$
(35)
Note that $`d`$ can vary widely between $`𝒪(1)`$ values and $`𝒪(\lambda )`$. But, using the lower bound on $`D_\mathrm{f}`$ of (28), we see that
$$\|(1+D_\mathrm{f})v_\mathrm{f}\|^2=\|v_\mathrm{f}\|^2+2v_\mathrm{f}\cdot D_\mathrm{f}v_\mathrm{f}+\|D_\mathrm{f}v_\mathrm{f}\|^2\ge (1+2\lambda +d^2)\|v_\mathrm{f}\|^2.$$
(36)
Expanding $`\ell `$ as
$$\ell =\|R_{\mathrm{ff}}(1+D_\mathrm{f})v_\mathrm{f}+R_{\mathrm{fs}}D_\mathrm{s}v_\mathrm{s}\|,$$
(37)
and using the estimates $`\|R_{\mathrm{fs}}D_\mathrm{s}\|=𝒪(\epsilon \lambda )`$ and $`\|1-R_{\mathrm{ff}}\|=𝒪(\epsilon ^2)`$ (cf. eq. (28)), we get from (36) the desired lower bound on $`\ell `$:
$$\ell ^2>(1+2\lambda +d^2)\|v_\mathrm{f}\|^2(1-𝒪(\epsilon )).$$
(38)
We can now bound $`E_1`$, $`E_2`$ and $`E_3`$. First, we have
$$\|A_{\mathrm{sf}}u_\mathrm{f}\|=\|R_{\mathrm{sf}}D_\mathrm{f}v_\mathrm{f}\|+𝒪(\lambda \epsilon )<\epsilon d\,\|v_\mathrm{f}\|+𝒪(\lambda \epsilon ).$$
(39)
Thus, neglecting higher order corrections in $`\epsilon `$, we get
$$E_1<\frac{\epsilon ^2d^2}{1+2\lambda +d^2}.$$
(40)
The bound on the term $`E_2`$ makes essential use of the translation invariance. For this, we note that
$$\frac{A_{\mathrm{fs}}u_\mathrm{s}}{(1+A_{\mathrm{ff}})u_\mathrm{f}^2}$$
(41)
transforms as a vector, that is, its $`k`$th component is multiplied by $`\mathrm{exp}(i\frac{2\pi }{N}kx)`$ under a relabeling of the coordinates $`nn+x`$. Therefore, because of translation invariance, the expectation value of (41) must remain invariant under such transformation, which means it must vanish. Since the denominator in $`E_2`$ is $`\mathrm{}^2`$ (which also depends on $`u_\mathrm{s}`$) and not $`(1+A_{\mathrm{ff}})u_\mathrm{f}^2`$, we need some gymnastics to exhibit the vanishing term. In order to see this we write
$$\begin{array}{c}E_2=2\left\langle \frac{A_{\mathrm{sf}}u_\mathrm{f}\cdot u_\mathrm{s}}{\|(1+A_{\mathrm{ff}})u_\mathrm{f}\|^2}+\frac{A_{\mathrm{sf}}u_\mathrm{f}\cdot A_{\mathrm{ss}}u_\mathrm{s}}{\|(1+A_{\mathrm{ff}})u_\mathrm{f}\|^2}\right.\hfill \\ \left.-\frac{A_{\mathrm{sf}}u_\mathrm{f}\cdot (1+A_{\mathrm{ss}})u_\mathrm{s}\left[2(1+A_{\mathrm{ff}})u_\mathrm{f}\cdot A_{\mathrm{fs}}u_\mathrm{s}+\|A_{\mathrm{fs}}u_\mathrm{s}\|^2\right]}{\|(1+A_{\mathrm{ff}})u_\mathrm{f}\|^2\,\ell ^2}\right\rangle .\hfill \end{array}$$
(42)
The first term in (42) vanishes because of translation invariance, as explained before. The second term is bounded by
$$2\alpha \lambda \epsilon \,\|u_\mathrm{s}\|,$$
(43)
and the dominant part of the third is
$$4\frac{\left(A_{\mathrm{sf}}u_\mathrm{f}u_\mathrm{s}\right)\left((1+A_{\mathrm{ff}})u_\mathrm{f}A_{\mathrm{fs}}u_\mathrm{s}\right)}{(1+A_{\mathrm{ff}})u_\mathrm{f}^2\mathrm{}^2}<\frac{4d\epsilon ^2u_\mathrm{s}^2}{1+2\lambda +d^2}.$$
(44)
The last term is bounded by
$$E_3<\frac{1+2\alpha \lambda }{1+2\lambda +d^2}\|u_\mathrm{s}\|^2.$$
Collecting the bounds yields
$$\|u_\mathrm{s}^{}\|^2<\frac{d^2\epsilon ^2+(1+2\alpha \lambda +4d\epsilon ^2)\|u_\mathrm{s}\|^2}{1+2\lambda +d^2}+2\alpha \lambda \epsilon .$$
(45)
It appears from (45) that although large perturbations are possible when $`d`$ is $`𝒪(1)`$, the contraction rate increases precisely enough to compensate this contribution. Thus if $`u_\mathrm{s}^2`$ is $`𝒪(\epsilon ^2)`$ to start with, it will stay so indefinitely.
In summary, assuming that the variance is indeed a measure of typical fluctuations, we have shown that, for $`N\gg 1`$, the subspace $`L_\mathrm{f}`$ spanned by the first $`N-3`$ Lyapunov vectors of the product (31) is, with very high probability, almost orthogonal to $`V_\mathrm{s}`$. This implies that the last three Lyapunov vectors (including the translation) remain approximately in $`V_\mathrm{s}`$. This means by definition that they are hydrodynamic, in the sense that they are well approximated by eigenvectors of the discrete Laplacian. Since the action of $`\stackrel{~}{A}`$ on $`V_\mathrm{s}`$ has two eigenvalues close to $`(2\pi /N)^2`$ as shown in the previous section, it follows as a corollary that the two smallest non-trivial Lyapunov exponents have approximately this value, so that they are also hydrodynamic. This completes the demonstration.
## 5 Products of symplectic matrices
We now turn to products of matrices of the form
$$S=\left(\begin{array}{cc}1& \tau \\ \tau A& 1+\tau ^2A\end{array}\right).$$
(46)
We disregard the two translation modes in $`S`$ for convenience and view the matrices $`S`$ as $`(2N-2)\times (2N-2)`$ matrices. Let us recall that since the matrices $`S`$ are symplectic and hyperbolic, the Lyapunov exponents are non-zero and come in pairs of opposite signs. We concentrate on modes number $`N-2`$ and $`N-1`$ which are the smallest positive ones.
We reduce the problem to an equivalent one to which the results of section 4 can be applied directly. We denote by $`L_+`$ the subspace spanned by the first $`N-1`$ Lyapunov vectors. It is spanned by a set of $`N-1`$ independent vectors, which we display in the form of a $`(N-1)\times (2N-2)`$ matrix $`𝒱`$. The $`N-1`$ vectors can always be chosen in such a way that $`𝒱`$ is of the normal form
$$𝒱=\left(\begin{array}{c}\mathrm{𝟏}\\ V\end{array}\right),$$
(47)
where both blocks are $`(N-1)\times (N-1)`$. Acting on $`𝒱`$ with $`S`$ gives a spanning set of the image subspace $`L_+^{}`$,
$$S𝒱=\left(\begin{array}{c}1+\tau V\\ \tau A+(1+\tau ^2\stackrel{~}{A})V\end{array}\right),$$
(48)
and changing basis to normal form gives $`𝒱^{}=\left(\begin{array}{c}\mathrm{𝟏}\\ V^{}\end{array}\right)`$ where
$$V^{}=\tau A+\frac{V}{1+\tau V}.$$
(49)
A convenient property of this matrix dynamical system is that if $`V`$ is symmetric to begin with, it stays so as a consequence of the symplectic property of $`S`$ .
By definition, any vector $`v\in L_+`$ has a block representation $`v=\left(\begin{array}{c}u\\ Vu\end{array}\right)`$. From eq. (48) it follows that its image is $`\left(\begin{array}{c}u^{}\\ V^{}u^{}\end{array}\right)`$, where
$$u^{}=(1+\tau V)u.$$
(50)
Hence, the first $`N-1`$ Lyapunov modes of the products of the $`S`$ are the same as those of the product $`\prod _n(1+\tau V_n)`$ where the matrices $`V_n`$ are evolving according to eq. (49): $`V_{n+1}=\tau A_n+V_n/(1+\tau V_n)`$.
In view of this equivalence, it suffices to show that the matrix $`V`$ has the properties formulated in eqs. (27–29) and to apply the results of section 4. First note if $`A_n=-\nabla ^2`$ (minus the discrete Laplacian) for all $`n`$, then the $`V_n`$ converge to $`f(-\nabla ^2)`$, where
$$f(x)=\frac{\tau x}{2}+\sqrt{x+\left(\frac{\tau x}{2}\right)^2}$$
is the larger root of the quadratic equation $`f(x)=\tau x+\frac{f(x)}{1+\tau f(x)}`$. For small $`x>0`$ this is close to $`x^{1/2}`$, and therefore we assume that $`V`$ has a representation of the type given by eqs. (27–29), with $`\epsilon =𝒪(N^{-1/2})`$ as before and $`\lambda `$ now $`f(4\pi /N^2)=𝒪(N^{-1})`$. The aim is to show that this property is carried over to $`V^{}`$.
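Since $`-\nabla ^2`$ and $`f(-\nabla ^2)`$ are diagonal in the Fourier basis, the convergence to this fixed point can be checked one eigenvalue at a time. The following sketch (with illustrative values of $`\tau `$ and $`N`$) iterates the scalar version of eq. (49) and compares the result with $`f(x)`$.

```python
# Illustrative check: per-eigenvalue iteration of Eq. (49) with A_n = -nabla^2
# converges to the fixed point f(x) = tau*x/2 + sqrt(x + (tau*x/2)**2).
import numpy as np

tau, N = 0.1, 64
for m in (1, 2, 5):
    x = 4.0 * np.sin(np.pi * m / N) ** 2      # eigenvalue of -nabla^2
    v = 1.0                                   # arbitrary positive starting value
    for _ in range(2000):
        v = tau * x + v / (1.0 + tau * v)
    f = tau * x / 2.0 + np.sqrt(x + (tau * x / 2.0) ** 2)
    print(f"m={m}: iterated V = {v:.6f}, fixed point f(x) = {f:.6f}")
```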
In order to avoid the necessity of presenting even more technical details, we present the argument for the case where the slow subspace contains a single mode, rather than a pair of nearly degenerate modes. Since $`V^{}`$ is symmetric its smallest eigenvalue is given by
$$\lambda _V^{}=\underset{\|u\|=1}{\mathrm{min}}\;u\cdot \left(\tau A+\frac{V}{1+\tau V}\right)u.$$
(51)
It follows from our assumptions that there exist (normalized) eigenvectors of $`V`$ and $`A`$:
$$Ae_A=\lambda _1(1+c_A\epsilon )e_A,Ve_V=f(\lambda _1)(1+c_V\epsilon )e_V,$$
(52)
where $`\lambda _1`$ is the smallest positive eigenvalue of $`-\nabla ^2`$ and $`e_A`$ and $`e_V`$ are close to $`e_1`$, the corresponding eigenvector. The variational principle then gives immediately a lower bound on $`\lambda _V^{}`$,
$$\lambda _V^{}>f(\lambda _1)(1+\epsilon c_V^{}),$$
(53)
for some $`c_V^{}`$ between $`c_A`$ and $`c_V`$. To get an upper bound on $`\lambda _V^{}`$ recall that it was shown in sec. 3, eq. (25), that the $`k`$ component of $`e_A`$ is of order $`\epsilon k^{-1}`$, and note that the bound (26) implies
$$f(a_{\mathrm{min}}(-\nabla ^2))<V<f(a_{\mathrm{max}}(-\nabla ^2)).$$
(54)
We now use $`u=e_A`$ in eq. (51) and get
$$\lambda _V^{}<\lambda _1(1+c_A\epsilon )+e_A\cdot f(a_{\mathrm{max}}(-\nabla ^2))e_A<f(\lambda _1)(1+\overline{c}\epsilon ),$$
(55)
for some constant $`\overline{c}`$. Eqs. (53) and (55) establish the desired property of the eigenvalues.
The corresponding eigenvector $`e_V^{}`$ is the one which minimizes eq. (51). Because of the minimax principle applied to $`A`$ and $`V`$, letting $`u=e_1+w`$ with $`we_1=0`$ and $`w`$ small, the quadratic form $`uAu`$ may be approximated by
$$u\cdot Au\approx (w-w_A)\cdot A(w-w_A)+\lambda _A,$$
and similarly
$$u\cdot Vu\approx (w-w_V)\cdot V(w-w_V)+\lambda _V.$$
Therefore, in order to find $`w_V^{}`$ we need to minimize the quadratic form
$$\tau (w-w_A)\cdot A(w-w_A)+(w-w_V)\cdot \frac{V}{1+\tau V}(w-w_V).$$
The minimum occurs at
$$w_V^{}=(1+B)^{-1}(Bw_A+w_V),$$
(56)
where
$$B=\tau \frac{1+\tau V}{V}A.$$
We can use again the bounds (26) and (54) to show that
$$b_{\mathrm{min}}g(-\nabla ^2)<B<b_{\mathrm{max}}g(-\nabla ^2),$$
for some positive numbers $`b_{\mathrm{min}},b_{\mathrm{max}}`$ and a positive function $`g`$, and thus bound the components of $`w_V^{}`$,
$$|(w_V^{})_k|<\frac{(w_V)_k+b_{\mathrm{max}}g(|\mu _k|^2)(w_A)_k}{1+b_{\mathrm{min}}g(|\mu _k|^2)},$$
where $`|\mu _k|^2`$ is the $`k`$th eigenvalue of $`^2`$ (see sec 3). This shows that $`(w_V^{})_k\epsilon k^1`$. This completes the demonstration of the desired properties of $`V^{}`$, and, on applying the results of section 4, the existence of hydrodynamic Lyapunov modes in the symplectic case.
## 6 Numerical tests
The purpose of this section is to verify numerically some of the statements given above, and to study further the dependence of the hydrodynamic behavior of the Lyapunov modes on the noise level as well as on the system size.
The simplest system we discuss is a product $`\prod _nA_n`$ of independent matrices of the form (8). Since the relative gap between the first two non-zero eigenvalues is $`𝒪(1)`$ \[see eqs. (27–29)\], the contraction in this case is strong, and the potential problems of the accumulation of errors as discussed in sec. 4 and appendix A are absent. Nevertheless, even in this case, there are some qualitative differences between the behavior of the Lyapunov modes, and the corresponding eigenmodes of a single matrix.
We quantify the degree of hydrodynamic behavior in the Lyapunov modes as follows. For the Lyapunov vectors $`v_i`$ we computed the residuals $`r_i`$, that is, the norm of the orthogonal complement
$$r_i=\left\|v_i-(v_ie_k)e_k-(v_ie_{-k})e_{-k}\right\|$$
where $`k=k(i)`$ is the wave vector associated with vector $`i`$. (For example, $`k=1`$ for the vectors $`v_{N-1}`$ and $`v_{N-2}`$ discussed above.) In fact, to get more precise results we subtracted from $`v_i`$ all the components with lower-lying $`k`$:
$$r_i=\left\|v_i-\underset{k\le k(i)}{\sum }\left((v_ie_k)e_k+(v_ie_{-k})e_{-k}\right)\right\|$$
(The results are not very different for the two definitions of $`r_i`$.)
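The two residual definitions can be summarized in a few lines of code; the sketch below is our own illustration (with assumed conventions for the lattice Fourier modes, and with the eigenvectors of a single noisy Laplacian standing in for the Lyapunov vectors), meant only to make the projections explicit.

```python
# Sketch of the residuals r_i (our conventions): remove the projection onto the
# pair of lattice Fourier modes with wave number k (or onto all modes with
# wave number <= k) and take the norm of what is left over.
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 64, 0.1

A0 = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
A0[0, -1] = A0[-1, 0] = -1.0
noise = rng.normal(0.0, sigma, (N, N))
A = A0 + 0.5 * (noise + noise.T)          # a single noisy "Laplacian" matrix

x = np.arange(N)
def mode_pair(k):
    c = np.cos(2 * np.pi * k * x / N)
    s = np.sin(2 * np.pi * k * x / N)
    return c / np.linalg.norm(c), s / np.linalg.norm(s)

def residual(v, k, cumulative=False):
    for kk in (range(1, k + 1) if cumulative else [k]):
        for e in mode_pair(kk):
            v = v - np.dot(v, e) * e
    return np.linalg.norm(v)

vals, vecs = np.linalg.eigh(A)
for i, k in [(1, 1), (2, 1), (3, 2)]:     # eigenvector index -> wave number
    print(k, residual(vecs[:, i], k), residual(vecs[:, i], k, cumulative=True))
```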
Fig. 1 presents these residuals for systems with different sizes of the matrices and different values of the noise variance $`\sigma `$. The vertical axis measures $`(r_k/\sigma )^2`$, and the horizontal axis gives $`k/N`$. The approximate collapse of the graphs for small $`k`$ implies that the dependence of the residuals on system size $`N`$ and noise strength (variance) $`\sigma `$ is given by the scaling form
$$r_k=\sigma f_1\left(\frac{k}{N}\right).$$
The behavior of $`f_1`$ for small $`x`$ is approximately $`f_1(x)=𝒪(\sqrt{x})`$, which implies that for a fixed $`k`$
$$r_k\sim \frac{\sigma }{\sqrt{N}},$$
the same dependence as in the residuals of the eigenvectors of a single matrix. However, the dependence of the residuals as a function of $`k`$ is $`r_k\sim \sqrt{k}`$, slower than the linear dependence on $`k`$ found in the case of a single matrix.
For the Lyapunov exponents $`\lambda _k`$ we measure the relative deviation $`\delta _k`$ from the respective eigenvalues $`|\mu _k|^2`$ of the discrete Laplacians (see sec. 3),
$$\delta _k=\frac{\mathrm{exp}(\lambda _k)}{|\mu _k|^2}-1.$$
The results for the deviations $`\delta _i`$ of the Lyapunov exponents are displayed in fig. 2, where $`\delta _k/\sigma ^2`$ is plotted against $`k/N`$. The data collapse implies that
$$\delta _k=\sigma ^2f_2\left(\frac{k}{N}\right).$$
The function $`f_2(x)`$ is approximately linear for small $`x`$ which implies for $`k\ll N`$:
$$\delta _k\sim r_k^2.$$
This is not unreasonable, since the Lyapunov exponents, unlike the eigenvalues of a single matrix, are given as a result of an averaging process.
We next present a similar analysis for the product $`\prod _n(1+A_n)`$ which was considered in section 4. The results for the residuals of the Lyapunov vectors and the deviations of the exponents are presented in figures 3 and 4 respectively. The scaling form for the residuals is in this case
$$r_k=\sigma f_3\left(\frac{k}{N}\right),$$
where $`f_3(x)\sim x`$ for small $`x`$. Thus, in this case the residual for a fixed $`k`$ decreases as
$$r_k\sim \frac{\sigma }{N},$$
that is, faster than the $`N^{-1/2}`$ decrease in the residuals of the eigenvectors of a single matrix. The analysis presented in section 4 is too general to capture this behavior.
The relative deviations of the exponents scale in this case as
$$\delta _k=\sigma ^2f_4\left(\frac{k}{N}\right),$$
and $`f_4(x)\sim x^2`$ for small $`x`$, so that as in the product of random Laplacians, the relative size of $`r_k`$ and $`\delta _k`$ is $`\delta _k\sim r_k^2.`$
## Appendix A Counter-example
We want to show here that an ‘unfortunate’ choice of rotations can move the system out of the region where the Lyapunov vectors remain essentially aligned with the eigendirections of the Laplacian. The issue here is that, on one hand, the cones in which these vectors lie are slightly contracted and on the other hand slightly turned. The ‘counter-example’ shows that the turning wins over the contraction.
Let $`V_\mathrm{f}=\mathrm{Span}\{e_1,e_2\}`$ and $`V_\mathrm{s}=\mathrm{Span}\{e_3\}`$. Suppose that $`L_\mathrm{f}`$ contains a vector with block representation $`u=(u_\mathrm{f},u_\mathrm{s})`$, with $`u_\mathrm{s}>0`$, normalized such that $`\|u_\mathrm{f}\|=1`$, and let $`v_\mathrm{f}`$ span the orthogonal complement to $`u_\mathrm{f}`$ in $`V_\mathrm{f}`$. We construct the matrix $`\stackrel{~}{A}`$ by giving the components in the representation (27),
$$R=\left(\begin{array}{cc}\mathrm{𝟏}& \epsilon v_\mathrm{f}\\ -\epsilon v_\mathrm{f}^T& 1\end{array}\right),$$
(57)
$`D_\mathrm{s}=\alpha \lambda `$, and $`D_\mathrm{f}`$ is such that
$$\begin{array}{c}D_\mathrm{f}u_\mathrm{f}=(\lambda +\epsilon ^2)u_\mathrm{f}+\epsilon v_\mathrm{f}\\ D_\mathrm{f}v_\mathrm{f}=\epsilon u_\mathrm{f}+v_\mathrm{f}\end{array}.$$
(58)
The image of $`u`$ is
$$(1+\stackrel{~}{A})\left(\begin{array}{c}u_\mathrm{f}\\ u_\mathrm{s}\end{array}\right)=\left(\begin{array}{c}(1+\lambda +\epsilon ^2(1-u_\mathrm{s}))u_\mathrm{f}+\epsilon v_\mathrm{f}\\ (1+\alpha \lambda -\epsilon ^2)u_\mathrm{s}+\epsilon ^2\end{array}\right).$$
(59)
After normalizing the $`f`$ component to 1, the $`s`$ component becomes
$$u_\mathrm{s}^{}\simeq \left[1+(\alpha -1)\lambda +\epsilon ^2(3/2-u_\mathrm{s})\right]u_\mathrm{s}+\epsilon ^2.$$
(60)
Evidently, even if $`u_\mathrm{s}=0`$ initially, by choosing $`\stackrel{~}{A}`$ as above, $`u_\mathrm{s}`$ can be increased to an $`𝒪(1)`$ value (as $`\epsilon ,\lambda \to 0`$) if $`\lambda =𝒪(\epsilon ^2)`$.
Acknowledgments: We have profited from very useful discussions with S. Lepri, C. Liverani, Z. Olami, A. Politi, and L.-S. Young. But we are particularly indebted to H. Posch for having provided and discussed with us his beautiful numerical experiments. This work was partially supported by the Fonds National Suisse, and part of it was done in the pleasant atmosphere of the ESI in Vienna and at the Istituto Nazionale di Ottica in Florence.
# Comment on “Density Functional Simulation of a Breaking Nanowire”
In a recent Letter , Nakamura et al. described first principles calculations for a breaking Na nanocontact. Their system consists of a periodic one-dimensional array of supercells, each of which contains 39 Na atoms, originally forming a straight, crystalline wire with a length of 6 atoms. The system is elongated by increasing the length of the unit cell. At each step, the atomic configuration is relaxed to a new local equilibrium, and the tensile force is evaluated from the change of the total energy with elongation. Aside from a discontinuity of the force occurring at the transition from a crystalline to an amorphous configuration during the early stages of elongation, they were unable to identify any simple correlations between the force and the number of electronic modes transmitted through the contact. An important question is whether their model is realistic, i.e., whether it can be compared to experimental results obtained for a single nanocontact between two macroscopic pieces of metal. In this Comment, we demonstrate that with such a small unit cell, the interference effects between neighboring contacts are of the same size as the force oscillations in a single nanocontact.
In order to understand how the close proximity of the nanocontacts in the model of Ref. may alter the energetics of the system, we consider a system of two identical nanocontacts in series, connecting two macroscopic wires. We model the metallic nanocontacts as constrictions in a free electron gas, with hard-wall boundary conditions, and obtain the energetics of the system from the electronic scattering matrix . The scattering matrix of the compound system may be obtained as a geometric series in the scattering matrices of the individual contacts (which are taken to be symmetric under inversion, for simplicity), while the scattering matrix of a single contact may be evaluated using the adiabatic and WKB approximations , which are quite accurate for contacts of smooth shape . The total grand canonical potential of the system is found to be the sum of the contributions of the individual contacts, plus an interference term
$`\mathrm{\Delta }\mathrm{\Omega }=\frac{2}{\pi }\int 𝑑Ef(E)\underset{\nu }{\sum }\mathrm{tan}^{-1}\frac{R_\nu (E)\mathrm{sin}[2\theta _\nu (E)]}{1+R_\nu (E)\mathrm{cos}[2\theta _\nu (E)]},`$
where $`f(E)`$ is the Fermi-Dirac distribution function, and $`R_\nu (E)`$ and $`\theta _\nu (E)`$ are the reflection probability and scattering phase shift, respectively, of the $`\nu `$th electronic mode for a single nanocontact.
The magnitude of the correction to the cohesive force in the supercell arrangement of Ref. arising from interference effects between neighboring supercells is $`\mathrm{\Delta }F=-\partial [\mathrm{\Delta }\mathrm{\Omega }]/\partial L_{\mathrm{cell}}`$, where $`L_{\mathrm{cell}}`$ is the unit cell length. Interference between more widely separated supercells would lead to an additional correction. Fig. 1(b) shows that for the unit cell size considered in Ref. ($`L_{\mathrm{cell}}=17`$–$`31\AA =2.5`$–$`4.5\lambda _F`$), the interference correction to the cohesive force is comparable to the force oscillations of an individual nanocontact. For comparison, the conductance of a single nanocontact and the interference correction thereof are shown in Fig. 1(a). For a single contact, there is a clear correlation between the conductance steps and the force oscillations. However, the large interference correction would strongly suppress any correlations between the force calculated in the supercell arrangement of Ref. and the conductance of a single contact.
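To give a feeling for the size and the slow decay of this correction, the toy sketch below (our own; the single-mode forms assumed for $`R_\nu (E)`$ and $`\theta _\nu (E)`$ are caricatures, not the adiabatic WKB contact model of the Comment) evaluates the interference term at zero temperature and differentiates it numerically with respect to the supercell length.

```python
# Toy estimate of the interference force (assumptions: one conducting mode,
# energy-independent reflection probability R, free-propagation phase
# theta = k(E)*L_cell, T = 0).  Energies in units of E_F, lengths in units of
# the Fermi wavelength, so k(E_F)*L = 2*pi*L.
import numpy as np

def delta_omega(L_cell, R=0.3, n_e=4000):
    E = np.linspace(1e-4, 1.0, n_e)                    # filled states up to E_F
    theta = 2.0 * np.pi * np.sqrt(E) * L_cell
    integrand = np.arctan(R * np.sin(2 * theta) / (1.0 + R * np.cos(2 * theta)))
    return (2.0 / np.pi) * np.trapz(integrand, E)

def delta_force(L_cell, h=1e-3):
    # Delta F = -d(Delta Omega)/dL_cell by a central finite difference
    return -(delta_omega(L_cell + h) - delta_omega(L_cell - h)) / (2.0 * h)

for L in (2.5, 3.5, 4.5):                              # L_cell in units of lambda_F
    print(L, delta_force(L))
```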
In order to explain the correlations between cohesion and conductance observed experimentally in metallic nanocontacts , it is essential to treat the energetics and transport of the system on an equal footing. This has been achieved in our free-electron model . The interference term scales as $`\mathrm{\Delta }F\sim 𝒪(L_{\mathrm{cell}}^{-1})`$ \[since $`\theta _\nu (E)\propto L_{\mathrm{cell}}`$\], so it would be worthwhile to perform larger-scale “first principles” simulations to address this question.
J. B. acknowledges support from Swiss National Foundation PNR 36 “Nanosciences” grant # 4036-044033.
C. A. Stafford,<sup>1,2</sup> J. Bürki,<sup>2,3</sup> and D. Baeriswyl<sup>2</sup>
<sup>1</sup>University of Arizona, Tucson, Arizona 85721
<sup>2</sup>Université de Fribourg, 1700 Fribourg, Switzerland
<sup>3</sup>IRRMA, EPFL, 1015 Lausanne, Switzerland
Received 19 August 1999
PACS numbers: 73.40.Jn, 62.20.Fe, 73.20.Dx, 73.23.Ad
# Ultra-Relativistic Hamiltonian with Various Singular Potentials
## Abstract
It is shown from a simple scaling invariance that the ultra-relativistic Hamiltonian ($`\mu `$=0) does not have bound states when the potential is Coulombic. This supplements the application of the relativistic virial theorem derived by Lucha and Schöberl which shows that bound states do not exist for potentials more singular than the Coulomb potential.
The relativistic generalization of the Schrödinger equation (RSE)
$$\sqrt{𝐩^2+\mu ^2}\psi (𝐱)+V(r)\psi (𝐱)=(E+\mu )\psi (𝐱)$$
(1)
has been used in describing quark-antiquark bound states when one of the constituents is light and the other heavy . The mass $`\mu `$ can be considered as the constituent mass of the light quark and $`E`$ is the binding energy.
This Hamiltonian has two interesting critical behaviors concerning the existence of bound states when the potential is Coulombic,
$$V_c=-\alpha /r,\qquad \alpha >0.$$
(2)
First, it has been shown that, irrespective of the value of $`\mu `$, when the coupling constant of the Coulomb potential $`\alpha `$ approaches the value $`2/\pi `$ from below, the bound states disappear due to the large singularity of the potential at the origin . This means, as a corollary, that there are no bound states when the potential is more singular than Coulomb. In this letter, I show the second critical behavior concerning the existence of bound states for a Coulombic potential: irrespective of the value of the coupling constant, bound states do not exist for a massless particle ($`\mu =0`$). This second critical behavior in terms of $`\mu `$ implies (similar to what is implied by the first critical behavior in terms of $`\alpha `$) that there are no bound states with massless particles for a potential more singular than Coulomb.
First I derive a scaling feature of RSE for a general potential. Then I concentrate on the Coulomb potential and show that bound states do not exist for massless particles when the potential is Coulombic. I start with two wavefunctions $`\psi (𝐱)`$ and $`\stackrel{~}{\psi }(𝐱)`$ and their Fourier transforms $`\varphi (𝐩)`$ and $`\stackrel{~}{\varphi }(𝐩)`$ related in the following way,
$$\psi (𝐱)\equiv \stackrel{~}{\psi }(t𝐱)$$
(3)
$`\varphi (𝐩)`$ $`=`$ $`{\displaystyle \frac{1}{(\sqrt{2\pi })^3}}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}\psi (𝐱)e^{-i𝐩𝐱}𝑑x^3`$ (4)
$`=`$ $`{\displaystyle \frac{1}{t^3}}{\displaystyle \frac{1}{(\sqrt{2\pi })^3}}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}\stackrel{~}{\psi }(t𝐱)e^{-i\frac{𝐩}{t}(t𝐱)}d(tx)^3`$ (5)
$`=`$ $`{\displaystyle \frac{1}{t^3}}\stackrel{~}{\varphi }({\displaystyle \frac{𝐩}{t}}).`$ (6)
Here $`t`$ can be any positive real number. Now suppose $`\psi (𝐱)`$ satisfies RSE, then
$`(E+\mu )\psi (𝐱)`$ $`=`$ $`\sqrt{𝐩^2+\mu ^2}\psi (𝐱)+V(𝐱)\psi (𝐱)`$ (7)
$`=`$ $`{\displaystyle \frac{1}{(\sqrt{2\pi })^3}}{\displaystyle \int 𝑑p^3\sqrt{𝐩^2+\mu ^2}\varphi (𝐩)e^{i𝐩𝐱}}+V(𝐱)\psi (𝐱)`$ (8)
$`=`$ $`{\displaystyle \frac{1}{t^3}}t^4{\displaystyle \frac{1}{(\sqrt{2\pi })^3}}{\displaystyle \int d\left(\frac{p}{t}\right)^3\sqrt{\left(\frac{𝐩}{t}\right)^2+\left(\frac{\mu }{t}\right)^2}\stackrel{~}{\varphi }(\frac{𝐩}{t})e^{i\frac{𝐩}{t}(t𝐱)}}+V(𝐱)\stackrel{~}{\psi }(t𝐱)`$ (9)
$`=`$ $`t\sqrt{𝐩^2+\left({\displaystyle \frac{\mu }{t}}\right)^2}\stackrel{~}{\psi }(t𝐱)+V(𝐱)\stackrel{~}{\psi }(t𝐱)`$ (10)
Thus, using the equivalence (3), it gives
$$\sqrt{𝐩^2+\left(\frac{\mu }{t}\right)^2}\stackrel{~}{\psi }(𝐱)+\frac{V(\frac{𝐱}{t})}{t}\stackrel{~}{\psi }(𝐱)=\left(\frac{E}{t}+\frac{\mu }{t}\right)\stackrel{~}{\psi }(𝐱).$$
(11)
If the potential is Coulomb, $`V(𝐱)=-\alpha /r`$, then we arrive at the scaling result that if
$$\sqrt{𝐩^2+\mu ^2}\psi (𝐱)-\frac{\alpha }{r}\psi (𝐱)=(E+\mu )\psi (𝐱)$$
(12)
then
$$\sqrt{𝐩^2+\stackrel{~}{\mu }^2}\stackrel{~}{\psi }(𝐱)-\frac{\alpha }{r}\stackrel{~}{\psi }(𝐱)=(\stackrel{~}{E}+\stackrel{~}{\mu })\stackrel{~}{\psi }(𝐱),$$
(13)
with the relations
$`\stackrel{~}{\psi }(𝐱)`$ $`=`$ $`\psi ({\displaystyle \frac{𝐱}{t}})`$ (14)
$`\stackrel{~}{\mu }`$ $`=`$ $`{\displaystyle \frac{\mu }{t}}`$ (15)
$`\stackrel{~}{E}`$ $`=`$ $`{\displaystyle \frac{E}{t}}.`$ (16)
In fact this theorem applies to slightly more general potentials. As long as the parameters of the potential $`V(𝐱)`$ are all dimensionless, then
$$\frac{1}{t}V(\frac{𝐱}{t})=V(𝐱).$$
(17)
This is the only criterion that the potential must satisfy. In particular, the potential need not be spherically symmetric. For example, potentials like
$$\begin{array}{c}V(𝐱)=-\frac{\alpha _1}{\sqrt{x^2+y^2}}-\frac{\alpha _2}{|z|}\text{ (cylindrical) }\end{array}$$
(18)
and
$$\begin{array}{c}V(𝐱)=-\frac{\alpha }{\sqrt{ax^2+by^2+cz^2}}\text{ (ellipsoidal) }\end{array}$$
(19)
do satisfy eq.(17). The underlying reason for this generality is that if the potential has only dimensionless parameters, the wavefunction is restricted to the form
$$\psi (𝐱)=\mu ^{\frac{3}{2}}f(\mu 𝐱;\{\alpha _i\}).$$
(20)
Then after normalization, eq.(14) reduces to the statement
$$\begin{array}{c}\text{ if }\psi (𝐱)=\mu ^{\frac{3}{2}}f(\mu 𝐱;\{\alpha _i\})\text{ then }\stackrel{~}{\psi }(𝐱)=\stackrel{~}{\mu }^{\frac{3}{2}}f(\stackrel{~}{\mu }𝐱;\{\alpha _i\}).\end{array}$$
(21)
Here the normalization is $`\int 𝑑x^3|f(𝐱;\{\alpha _i\})|^2=1`$.
In fact, from a dimensional point of view alone, one can show that eq.(20) must hold for non-relativistic Schrödinger equation (NRSE), and from this, all the relations (14),(15) and (16) as well.
This scaling behavior indicates, in words, that as the mass increases by a factor of $`t`$, the wavefunction shrinks by a factor of $`t`$ and the binding energy is also increased by a factor of $`t`$ (in magnitude). From this, it is clear that for $`\stackrel{~}{\mu }=0`$, $`\mu =0`$ automatically as well. This means that the Hamiltonians (12) and (13) now become identical. Since this is true for arbitrary $`t`$, it follows that $`\stackrel{~}{E}=E/t\to -\mathrm{\infty }`$ as $`t\to 0`$, i.e., the spectrum is not bounded from below. Hence there are no massless bound states.
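The absence of a scale can be made concrete with a simple variational estimate (our illustration, not part of the paper): for $`H=|𝐩|-\alpha /r`$ and a Gaussian trial state of width $`a`$, both $`|𝐩|`$ and $`1/r`$ average to $`2/(\sqrt{\pi }a)`$, so the trial energy is $`E(a)=(2/\sqrt{\pi })(1-\alpha )/a`$ and has no minimum at any finite width. (The Gaussian family only signals collapse for $`\alpha >1`$ rather than at the true critical coupling $`2/\pi `$, but it displays the $`1/a`$ scaling that underlies the argument.)

```python
# Variational sketch for the massless Coulomb problem (assumption: Gaussian
# trial wavefunction of width a); both <|p|> and <1/r> scale as 1/a, so no
# width is singled out and no bound state can form in this family.
import numpy as np

def trial_energy(a, alpha):
    mean_p = 2.0 / (np.sqrt(np.pi) * a)      # <|p|> for a Gaussian of width a
    mean_inv_r = 2.0 / (np.sqrt(np.pi) * a)  # <1/r> for the same Gaussian
    return mean_p - alpha * mean_inv_r

for alpha in (0.5, 1.5):
    print(alpha, [round(trial_energy(a, alpha), 4) for a in np.logspace(-2, 2, 5)])
# alpha < 1: E(a) > 0 for every width; alpha > 1: E(a) -> -infinity as a -> 0.
```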
One can also extract simple scaling laws for other types of potentials from eq.(11). In particular, for the massless case with a potential $`V(r)=-r^{-k}`$ with $`k>1`$, one can again show that bound states do not exist by taking the limit $`t\to 0`$. This time the potential is not invariant but
$$\stackrel{~}{V}(\frac{r}{t})=t^kV(r).$$
(22)
Therefore, as $`t\to 0`$ the potential becomes shallower; nevertheless, the wavefunction still shrinks to zero width and the energy goes to negative infinity. Thus again there are no bound states. This is reminiscent of the well-known theorem in the NRSE which states that for a potential more singular than $`r^{-2}`$ bound states do not exist . Here, in our situation, since massless bound states do not exist for a Coulomb potential it is intuitively clear that the same is true for a potential that is more singular.
This fact that the bound states do not exist for a potential more singular than the Coulomb potential can be inferred also from the relativistic virial theorem (RVT) derived by Lucha and Schöberl . It states, for an eigenstate of two-body relativistic Hamiltonian (in the center-of-mass frame)
$$H=\sqrt{𝐩^2+m_1^2}+\sqrt{𝐩^2+m_2^2}+V(x),$$
(23)
the gradient of the potential is related to the kinetic energy as
$$𝐱\mathbf{\nabla }V(𝐱)=\frac{𝐩^2}{\sqrt{𝐩^2+m_1^2}}+\frac{𝐩^2}{\sqrt{𝐩^2+m_2^2}}.$$
(24)
This leads to
$$\epsilon =𝐱\mathbf{\nabla }V(𝐱)+V(𝐱)+\frac{m_1^2}{\sqrt{𝐩^2+m_1^2}}+\frac{m_2^2}{\sqrt{𝐩^2+m_2^2}}$$
(25)
where $`\epsilon `$ denotes the total energy of the two-body system of particle mass $`m_1`$ and $`m_2`$. The Hamiltonian (23) simplifies to (1) when $`m_2`$ is taken to infinity and $`m_1`$ is set to be $`\mu `$. Accordingly (25) also reduces to
$$E=𝐱\mathbf{\nabla }V(𝐱)+V(𝐱)+\frac{\mu ^2}{\sqrt{𝐩^2+\mu ^2}}.$$
(26)
Here the lhs has been reduced to the binding energy $`E`$. For a radially symmetric power law potential
$$V(r)=\alpha r^k$$
(27)
where $`\alpha `$ is positive for $`k>0`$ and negative for $`k<0`$, this means simply
$$E=(k+1)V+\frac{\mu ^2}{\sqrt{𝐩^2+\mu ^2}}.$$
(28)
Clearly for $`k<-1`$, if bound states were to exist, this would give a nonsensical result because the lhs is negative while the rhs is positive. This indicates that the eigenstates themselves do not exist for those potentials with $`k<-1`$, for both finite and zero $`\mu `$. Literally taken, the RVT predicts that all the massless-particle bound states have one and the same binding energy $`E=0`$ for a Coulomb potential ($`k=-1`$). This can also be seen as a manifestation that the bound states do not exist and the expectation values cannot be defined.
Incidentally, in the non-relativistic case, not only these scaling relations eqs.(14), (15), (16) and (22) hold, but more general scaling relations can be derived. Indeed, NRSE with a potential of the form $`V(r)=\alpha r^k`$, where $`\alpha `$ is positive when $`k>0`$, and negative when $`k<0`$, can be converted in radially reduced form to the following dimensionless equation ,
$$-\frac{d^2}{d\rho ^2}w(\rho )+[\mathrm{sgn}(\alpha )\rho ^k+\frac{l(l+1)}{\rho ^2}]w(\rho )=ϵw(\rho ).$$
(29)
From this, it can be seen that all the scaling relations derived here are only a part of this more general scaling transformation. Unfortunately, for the RSE case, this general transformation to the dimensionless form, which would allow one to extract much more information on the bound states, does not seem to be possible.
I thank D.Singleton for reading the manuscript.
# Some Implications of the Anisotropic Distribution of Satellite Galaxies
## 1 Introduction
Disk galaxies presumably form from protogalactic clouds that consist of at least several sub-galaxy aggregates, some fraction of which eventually merge to form the dominant galaxy. How efficient and complete is this hierarchical process? Quantitative answers to this type of question have been solely in the domain of simulations (cf. Navarro and Steinmetz 1997 ; Klypin et al. 1999 (hereafter K99); Moore et al. 1999 (hereafter M99)). Those simulations are in turn constrained by observations of the properties of present day galaxies (such as the local Tully-Fisher relation), which are the result of a variety of complicated physical processes, and of the properties of high redshift galaxies, which are difficult to quantify and influenced by selection biases. A reasonable goal is to find a more direct link to the process of hierarchical formation.
In hierarchical formation scenarios, small mass objects generally collapse prior to large ones and become the building blocks of larger objects. Two types of objects that currently surround giant galaxies may qualify as possible hierarchical building blocks: globular clusters and satellite galaxies. Most studies of the dynamical evolution of either globular clusters or satellite galaxies begin with an assumed initial distribution of such companions and focus on the subsequent dynamical evolution of the system. For example, Aguilar, Hut, & Ostriker (1988) calculate the rate of destruction of globular clusters to assess whether the galactic spheroid was built from globular clusters, and Ostriker & Tremaine (1975) investigate the luminosity evolution of the primary galaxy due to infalling satellites. A difficulty with this approach is that the results depend sensitively on the unknown characteristics of the initial population and that we have no empirical means of determining whether the observed population represents a small or large fraction of the initial population. High-resolution numerical simulations are beginning to produce populations of low-mass companions around giant galaxies in a self-consistent cosmological framework (K99 and M99), but they produce far more satellite galaxies than observed.
The observed distribution of satellite galaxies of spiral primaries out to 500 kpc is asymmetric and elongated along the disk minor axis (Odewahn 1989; Zaritsky et al. 1997, hereafter ZSFW, and $`H_0=75`$ km/s/Mpc assumed throughout). This elongation is an extension of the Holmberg effect (the preferred polar orientation of satellites interior to $`r50`$ kpc; Holmberg 1969) to larger radii — although the physical causes of the two observational results may differ and the polar elongation is not evident at intermediate radii (50 to 200 kpc). Both the inner and outer satellite results are statistical because orbits of individual satellites are unknown (although the ZSFW result is for kinematically confirmed satellite galaxies). In the one galaxy for which individual satellite orbits are known, the Milky Way, there is evidence that the orbits are preferentially polar from the alignment of satellites on the sky (Kunkel & Demers 1976; Lynden-Bell 1982), the orientation of the Magellanic Stream (Mathewson, Cleary, and Murray 1974), the three-dimensional distribution of satellites (Majewski 1994; Hartwick 1996), and their space velocities (Scholz & Irwin 1994). Finally, Grebel, Kolatt, & Brandner (1999) find tentative evidence for a statistical excess of M 31 satellites along a polar orbit. Hence, preferentially polar satellite orbits may be common.
The connection between disks and satellite orbits is either imprinted in the initial conditions or is the result of dynamical phenomena during the formation and subsequent evolution of the galaxy. ZSFW argue that the orbital decay time due to dynamical friction at radii larger than 200 kpc excludes dynamical friction as the dominant mechanism. For example, models of the effect of dynamical friction on the orbit of the Large Magellanic Cloud (Tremaine 1976) suggest that the perigalacticon distance has decreased by a factor of $``$ 3 in 10 Gyr. Therefore, satellites similar to the Large Magellanic Cloud that began at perigalacticon radii $`>`$ 200 kpc will not have merged with their parent galaxies. Quinn & Goodman (1986), in their investigation of the Holmberg effect, found that dynamical friction could not even account for the asymmetry inside 50 kpc.
If we wish to invoke a dynamical process for the origin of the anisotropy, we must hypothesize that the missing satellites experienced a catastrophic event that either destroyed the satellite or inhibited the formation of stars (such as the removal of gas from the proto-satellite). Such an event is most likely to occur as satellites make a pericentric pass near the giant, and so could only affect satellites that have made at least one pericenter passage. Using the standard orbital equations of the timing argument (Kahn & Woltjer 1959), a halo mass of 1.5$`\times 10^{12}M_{}`$ (the 90% confidence lower mass limit for the mass enclosed at 200 kpc for this sample of primary galaxies: Zaritsky & White 1994), and $`t_0=15`$ Gyr for the age of the Universe, we find that any satellite on a radial orbit at a current distance $`<`$ 530 kpc has made at least one pericenter passage. If (1) satellite orbits are highly radial (as found in recent simulations: K99 and M99) and (2) satellites on planar orbits either preferentially lose a greater amount of orbital energy, have their gas removed, or are disrupted near pericenter, then anisotropy in the satellite population might extend to radii as large as 500 kpc. The currently available simulations (K99 and M99) do not show such satellite destruction, but these models do not include gas and the survivability of satellites is sensitive to the particulars of the simulation, such as details of the power spectrum (M99). Whether satellite destruction is more common for satellites on planar orbit and whether the mechanism is sufficiently severe to induce the asymmetry has not been demonstrated and must be investigated further. To continue our exploration of the evolution hypothesis, we postulate that such a mechanism does exist and follow the argument to its conclusion.
Regardless of the exact dynamical model that may lead to the anisotropy in the evolution conjecture, we can use the anisotropy to estimate the toll that the process has exacted on the satellite population. We do this by (1) constraining the range of satellite orbital inclinations allowed by the ZSFW sample and (2) estimating the size of the initial satellite population by assuming that it was initially spherically symmetric. Is the inferred missing population a significant component of the galaxies or are the results so implausible that they enable us to exclude the evolution conjecture? The methods used to constrain the orbital inclinations are discussed in §2. After determining the number of “missing” satellites, we assess whether the destroyed satellites constitute a significant fraction of the mass of the primary galaxy. The results and implications are discussed in §3.
## 2 Determining the Orbital Inclination Limit
Because projection effects and the wide range of viewing angles partially mask any underlying asymmetric satellite distribution, an observed asymmetric distribution implies a more strongly asymmetric underlying distribution. To determine the degree of polar alignment necessary to reproduce the observed distribution, we determine the orbital inclination limit in three different ways. In all three ways we presume that there is a single lower inclination limit for satellite orbits.
Our first approach is adopted from Quinn and Goodman’s (1986) treatment. For assumed circular orbits and a power-law radial density profile (parameterized by $`\rho \propto R^{-\beta }`$), they derive an analytic expression for the surface density as a function of angle from the disk plane. In Figure 1, we plot the number of satellites as a function of angle from the plane, $`\theta `$, and compare the results from Quinn and Goodman’s calculation for an orbital inclination limit of 45° and $`\beta =1.8`$ (as measured for satellite galaxies : Lake & Tremaine 1980, Zaritsky et al. 1993, Lorrimer et al. 1994). For comparison, we plot the number of satellites vs. $`\theta `$ for satellites at $`r>200`$ kpc (for which the anisotropy appears stronger). This comparison suggests that an orbital inclination limit of 45° (we define the inclination limit to be measured from the pole) is appropriate for the full sample and that this limit is tighter for the outer satellites.
Quinn & Goodman’s calculation is independent of the assumption of circular orbits, as long as the orbits are not closed and one time averages as the apsides of eccentric orbits precess. However, the satellites at large radii in our sample have not completed many orbits, the apsides have not precessed, and so this assumption may be inadequate. A second possible shortcoming of the calculation is that interlopers, apparent satellites that are not physically associated with the system, are not included. The estimated fraction of interlopers for this sample is between 10 and 15% (Zaritsky 1992).
We proceed by examining models with Keplerian orbits that include interlopers. The satellite orbits are taken from a family of fixed eccentricity orbits for any single model (although we explore a range of eccentricity values across all models). The orbital energy is drawn from the power-law distribution given by Bahcall and Tremaine (1981)
$$P(E)=\{\begin{array}{cc}(3-s)E^{s-4}/E_0^{s-3},\hfill & \text{if }E>E_0\text{;}\hfill \\ 0,\hfill & \text{if }E\le E_0,\hfill \end{array}$$
where $`s<3`$ and $`E=(GM/r)-(v^2/2)`$. This choice of $`P(E)`$ generates a number density profile of test particles that is proportional to $`r^{-s}`$ for $`E>E_0`$. We choose $`s=1.8`$ and $`E_0=0.0065`$ to match the observed mean projected separation, $`r_p`$ ($`\sim 200`$ kpc), and the radial number density profile (Lake & Tremaine 1980; Zaritsky et al. 1993; Lorrimer et al. 1994). The mass ratio between the satellite and primary is chosen to be 1:20 (comparable to the mean observed ratio assuming equal M/L’s for primary and satellite which is $`1:13`$). The mean anomaly (orbital phase) is selected uniformly from $`(0,2\pi ]`$. The satellite orbits are then randomly oriented using the Euler angle convention and the known distribution of primary disk inclinations for the ZSFW primaries. Simulated orbits are accepted only if the angle between the orbital major axis and the primary disk’s rotation axis is $`\le \theta _l`$, where $`\theta _l`$ is the orbital inclination limit. In exploring the models we vary $`s`$, $`e`$, $`\theta _l`$, and $`f_{INT}`$, where $`f_{INT}`$ is the fraction of the sample that consists of interlopers. From observations, we limit $`s`$ to between 1.5 and 2 (Lorrimer et al. 1994). On the basis of ZW’s infall simulations we limit $`e`$ to between 0.5 and 0.9, with a preferred value of 0.7. Finally, from various arguments (Zaritsky 1992), our preferred value of $`f_{INT}`$ is 0.1, but we also explore $`f_{INT}=0.05`$ and 0.15. For each model, we generate 10,000 artificial satellites.
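One ingredient of these models, the sampling of orbital energies and the inclination cut, is illustrated by the short Monte-Carlo sketch below (our own simplification; it draws isotropic directions and applies the cut directly, rather than building full Keplerian orbits and projecting them as in the text).

```python
# Monte-Carlo sketch (assumed simplifications noted above): sample E from
# P(E) = (3-s) E^(s-4) / E_0^(s-3) for E > E_0 by inverse transform, place
# satellites isotropically, and keep those within theta_l of the disk pole.
import numpy as np

rng = np.random.default_rng(1)
s, E0, theta_l, n_sat = 1.8, 0.0065, np.radians(45.0), 100000

u = rng.uniform(size=n_sat)
E = E0 * (1.0 - u) ** (1.0 / (s - 3.0))        # all samples satisfy E >= E_0

cos_t = rng.uniform(-1.0, 1.0, n_sat)          # isotropic directions
angle_from_pole = np.arccos(np.abs(cos_t))     # folded onto [0, 90 degrees]
kept = angle_from_pole <= theta_l

print("kept fraction          :", kept.mean())
print("expected 1 - cos(th_l) :", 1.0 - np.cos(theta_l))
print("median E / E0          :", np.median(E) / E0)
```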
We compare the results of these simulations to four subsamples of the ZSFW satellite galaxies. Sample 1 includes all of the satellites of all of the primaries in the ZSFW sample. Sample 2 includes only those satellites beyond $`r_p=`$ 300 kpc, and so is limited to the radial range where a strong azimuthal asymmetry is evident (Figure 2). Sample 3 includes all of the satellites of the primaries with disk inclination angles $`>`$ 45. This sample is less affected by projection and confusion between polar orbits and those in the disk plane. Sample 4 includes only those satellites beyond $`r_p=`$ 200 kpc that are associated with primaries with disk inclination angles $`>`$ 45. The inner radial limit for Sample 4 is decreased from 300 to 200 kpc relative to Sample 2 because the asymmetry is evident in this sample down to $`r_P200`$ kpc and the number of satellites beyond $`r_P=`$ 300 kpc is smaller (this information is summarized in Table 1). To compare the simulations with the data, we calculate the two-sided KS statistic<sup>1</sup><sup>1</sup>1This statistic is better suited to the analysis of data with no natural minimum or maximum (as is the case for position angles) than the standard KS test (Press et al. 1992). for the distributions of satellite position angles relative to the disk major axis. The results for the best fit $`\theta _l`$ and 90% confidence interval are presented for various models and data samples in Column (6) in Table 2. Again, we conclude that the most likely value of the orbital inclination limit for the full sample lies around 45 and that this limit for the outer sample is smaller (about 20). However, the uncertainties on the derived $`\theta _l`$ are large.
As discussed by Zaritsky and White (1994), the full description of the dynamics of satellite systems requires a model of the growth of the primary galaxy’s halo with time and the evolution of the satellite population within that halo. In particular, the assumption that satellites are currently found at a random phase along an orbit (which is necessary for the Keplerian models) is suspect for satellites at large radii, where the orbital period is $``$ Hubble time. Therefore, we proceed to test the results from the Keplerian models (for which parameter space is easily explored) with the results of the spherical-infall halo simulations used by Zaritsky & White (1994) to measure the mass of galaxy halos. Using the simulation that best matches their best fit parameters ($`\mathrm{\Omega }_0=0.3`$), we have derived preferred orbital inclination limits for the four satellite subsets. We present those results in Column (7) of Table 2. The best fit values are indistinguishable from those derived using the Keplerian models, but the 90% confidence ranges vary. In particular, we find that the entire range of $`\theta _l`$ is allowed when the satellite sample includes satellites at all radii, and that none of the range is within the 90% confidence limit when only the outer satellites are considered (indicated by $`[`$—,—$`]`$).
We conclude that all three analysis techniques indicate similar best fit limits ($`15^{}`$ to 60), but that strong (e.g., 90% confidence) statistical conclusions cannot yet be reached. Because of the agreement among the various methods used to determine $`\theta _l`$, the current principal limitation does not appear to lie in the details of the models, but rather with the sample size.
## 3 Discussion
For all samples and all model parameters within our specified ranges, the best fit values of $`\theta _l`$ indicate that the orbits are preferentially polar. The best fit $`\theta _l`$ for Sample 1 and our reference model ($`s=1.8`$, $`e=0.7`$, and $`f_{INT}=0.1`$) indicates that all of the satellites are on orbits that are inclined at least 38° to the disk plane ($`\theta _l=52^{\circ }`$). Over the range of radii where the asymmetry is most pronounced ($`>200`$ kpc) for systems with primaries highly inclined to the line-of-sight ($`>45^{\circ }`$), the best-fit solutions indicate that the orbits are confined to within 20° of the pole (for either Keplerian or Infall models) and that the orbits are confined to within $`60^{\circ }`$ of the pole with greater than 90% confidence.
Before drawing conclusions from these results, we discuss the sensitivity of the models to various parameters. Our reference model is defined to have $`e=0.7`$, $`s=1.8`$, and $`f_{INT}=0.1`$ (the results from this model, as applied to Sample 4, are presented as Model 2 in Table 2). We test all of these choices with the Keplerian models. First, we vary $`e`$ between 0.5 and 0.9 (the 90% confidence limits derived by ZW; Models 1 and 3). This parameter sometimes has a noticeable effect on $`\theta _l`$, so we present results for all three eccentricities. Second, we vary $`s`$ between 1.5 and 2.0 (Models 4 through 9 in Table 2). The results are nearly insensitive to $`s`$. Third, we vary the interloper fraction between 5 and 15% (Models 10 through 15). As with $`s`$, changing this parameter has a minimal effect on $`\theta _l`$. We also present results derived using other subsamples (for the standard parameter choices and $`0.5e0.9`$; Models 16 through 24).
We now determine the number of “missing” satellites implied by a particular $`\theta _l`$. From the range of allowed orbital inclinations, we calculate the fraction of all allowed orbits that are represented in the current sample. This fraction is equivalent to the fraction of the volume of a sphere that lies within $`\theta _l`$ of the pole, which equals $`1\mathrm{cos}\theta _l`$. For an opening angle of 30, the volume within the allowed cone is 13.4% of that within the sphere. If the “original” population of satellite orbits uniformly filled the sphere and our value for $`\theta _l`$ is $`30^{}`$, then the current population is only 13.4% of the original population, or originally there were 7.5 times as many satellites as there are in the observed sample.
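The arithmetic of this estimate is summarized below (a minimal sketch of the $`1-\mathrm{cos}\theta _l`$ bookkeeping; the choice of example angles is ours).

```python
# Surviving fraction 1 - cos(theta_l) of an initially isotropic population,
# and the implied ratio of original to observed satellites (example angles).
import numpy as np

for theta_l_deg in (18.0, 30.0, 45.0, 52.0):
    frac = 1.0 - np.cos(np.radians(theta_l_deg))
    print(f"theta_l = {theta_l_deg:4.0f} deg: surviving fraction = {frac:.3f}, "
          f"original/observed = {1.0 / frac:.1f}")
```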
The interpretation of $`\theta _l`$ and the “missing” satellite population is complicated by the apparent change in the magnitude of the polar asymmetry at different radii. The data in Figure 2 suggest that in addition to the asymmetry at $`r_P>300`$ kpc, there may be a slight excess of planar satellites at $``$ 180 kpc and a slight excess of polar satellites once again at small radii. Although our simulations illustrate that projection effects result in less apparent asymmetry at small radii even in a model where the limit on orbital inclination is the same at all radii (see Figure 2), we do not reproduce the observed dip in polar angle at $`180`$ kpc. However, this apparent disagreement is not statistically significant ( significance $`1\sigma `$) because of the small number of satellites in each radial bin. The gradual radial increase in the number of satellites ($`r^{0.2}`$) and the lack of a strongly planar asymmetry between 0 and 200 kpc suggest that a large number of outer, planar satellites cannot be hidden as satellites at smaller radii, unless there is a destruction of a comparable number of inner planar satellites to compensate. Therefore, we can calculate the number of “missing” outer satellites and use that quantity as an estimate of the total number of “missing” satellites.
The dependence of the polar asymmetry on radius leads to the result that even an orbital inclination limit of 90 is allowed within the 90% confidence limit when satellites at all radii are included (Samples 1 and 3). The lack of a strong polar signature in the complete sample weakens the claim of polar alignment and argues for larger samples to resolve the issue. However, we remind the reader that the polar alignment at large radius is highly significant (ZSFW) and that polar alignements have also been observed both in the satellite system of our galaxy (cf. Hartwick 1996) and in the Magellanic Irr satellites of other galaxies (Odewahn 1989).
We now estimate the number of satellites destroyed, inhibited, or accreted by field giant spiral galaxies. If $`\theta _l=18^{}`$ (best fit for Keplerian Model 2) for the outer satellite population, a large population of corresponding disk plane satellites (19 times the current number of satellites beyond $`r_p=200`$ kpc) are missing. For the average primary galaxy in our sample, this estimate implies a loss of 13.4 satellites of comparable luminosity as the satellites in our sample (for the $`i>45^{}`$ sample). However, due to the small sample size we cannot exclude the possibility with greater than 90% confidence that the missing population is only comparable in size to the current population (0.7 satellites/primary at $`r_p>200`$ kpc). The latter conclusion obviously places less stringent constraints on the possible current status of the missing satellites.
Over the suite of models and samples, we infer a wide range in the number of missing satellites. If we adopt our best fits for $`\theta _l`$ across all samples for our reference model, then the number of missing satellites inferred per galaxy ranges from 2.4 to 13.4. These values for the population of missing satellites are consistent with the number of “extra” satellites (with velocity dispersion $`\gtrsim 30`$ km sec<sup>-1</sup>) present in recent simulations (K99 and M99). The average luminosity of a primary in our sample is 13 times the average luminosity of a satellite (this calculation includes a completeness correction factor of 1.4 due to satellites that may have been missed in our spectroscopic survey as derived assuming a Schechter luminosity function with a faint-end slope $`\alpha =-1.5`$). Therefore, if the primaries have indeed accreted between 2.4 and 13.4 satellites, a significant fraction of their luminosity (18 to 103%), and possibly their mass if the M/L’s are comparable, comes from these satellites.
The best fit $`\theta _l`$ value for all of the satellites of all the primaries (52) implies that we are missing 2.7 satellites per primary (or about 20% of the disk luminosity). If we assume that these satellites have been accreted, we can compare this value to the estimates of the satellite accretion rate from other studies. An extrapolation of the local accretion rate (Zaritsky & Rix 1997) predicts that 1 to 3 large satellites (7 to 21% of the current luminosity) are accreted over the lifetime of the galaxy. The actual number of satellites accreted over the lifetime of the galaxy is likely to be larger than the extrapolation of this estimate because the interaction rate is expected to increase with redshift. Within the large uncertainties in both approaches, the inferred satellite accretion rates are consistent and imply that satellite material may contribute significantly to the luminosity of the central galaxy. Interestingly, our investigation did not lead to predictions of satellite populations that had $`L_{Total}>>L_{Disk}`$, which would be implausible, or that had $`L_{Total}L_{Disk}`$, which would have made this discussion academic.
We conclude that the evolution conjecture for the polar asymmetry has the following intriguing implications: (1) it enables an estimate of the size of the original satellite population, (2) the inferred population of “missing” satellites would have a luminosity of order that of the disk, and (3) the inferred number of missing satellites is consistent with the excess number of satellites produced by the most recent numerical simulations of galaxy formation (for satellites with velocity dispersions $`\mathrm{}>`$ 30 km sec<sup>-1</sup>). The principal difficulty with the evolution conjecture remains the unidentified physical mechanism necessary to destroy, remove, or inhibit, satellites on planar orbits with large apocenters.
## 4 Summary
We are searching for a signature of hierarchical galaxy formation in the properties of current spiral galaxies and their satellites. Satellites at large radii, or at least the components that would have formed those satellites, appear to have been preferentially “removed” from low inclination orbits (those in the disk plane) leading to the current preferentially polar distribution of satellites (ZSFW). Our quantitative estimate of the orbital inclination limit for the current satellite population has a large uncertainty, but the best fit models imply that satellites on orbits within 70° to 80° of the disk plane at projected radii $`>`$ 200 kpc have been destroyed, accreted, removed, or inhibited. We use these limits on the inclination of surviving orbits to estimate the number of “missing” satellites in low inclination orbits. The lost luminosity (or mass for constant M/L among satellites and primaries) is consistent both with an extrapolation of the local accretion rate and with the hypothesis that these “missing” satellites contributed substantial luminosity ($`\gtrsim `$ 20%) to the central galaxy. The large statistical uncertainties preclude us from determining whether the material in the “missing” disk plane satellites makes a modest ($`\sim `$ 10%) or dominant ($`>`$50%) contribution to the luminosity and mass of the central galaxy. The identification of a lost satellite population may also help reconcile recent numerical simulations (K99, M99) that produce many more satellites per primary than observed. The principal weakness of this entire discussion is that no mechanism is demonstrated to appropriately affect satellites with large apocenter and low orbital inclination relative to the primary disk. The distribution of satellite galaxies provides a tool that, with more sophisticated simulations and larger samples, may enable us to further develop our understanding of galaxy formation and the dynamical evolution of galactic halos.
DZ acknowledges partial financial support from an NSF grant (AST-9619576), a NASA LTSA grant (NAG-5-3501), a David and Lucile Packard Foundation Fellowship, and a Sloan Fellowship. AHG acknowledges support from an NSF Graduate Student Fellowship. We thank A. Zabludoff for comments on a preliminary draft.
# Color Molecular-Dynamics for High Density Matter
## Abstract
We propose a microscopic simulation for quark many-body system based on molecular dynamics. Using color confinement and one-gluon exchange potentials together with the meson exchange potentials between quarks, we construct nucleons and nuclear/quark matter. Statistical feature and the dynamical change between confinement and deconfinement phases are studied with this molecular dynamics simulation.
At high baryon density, the nuclear matter is believed to undergo a phase transition to the quark matter because of the color Debye screening and the asymptotic freedom in quantum chromodynamics (QCD) . In qualitative estimates using the Bag model as well as the strong coupling lattice QCD predict a first order transition at baryon density ($`\rho `$) several times over the nuclear matter density ($`\rho _0=0.17\mathrm{fm}^3`$). However, realistic studies of the high density matter based on the first principle lattice QCD simulation are not available yet due to technical difficulties . In this situation, any alternative attempts are welcome to unravel the nature of high density matter. In particular, how the nuclear matter composed of nucleons (which are by themselves composite three-quark objects) dissolve into quark matter is an interesting question to be studied. From the experimental and observational point of view, such transition may occur in high-energy heavy ion collisions and in the central core of neutron stars .
In this Letter, we propose a molecular dynamics (MD) simulation of a system composed of many constituent quarks . As a first attempt, we carry out MD simulation for quarks with SU(3) color degrees of freedom. Spin and flavor are fixed for simplicity, although there is no fundamental problem to include them. Time evolution of the spatial and color coordinates of quarks are governed by the color confining potential, the perturbative gluon-exchange potential and the meson-exchange potential. The confining potential favors the color neutral cluster (nucleon) at low density. However, as the baryon density increases, the system undergoes a transition to the deconfined quark matter, since the nucleons start to overlap with each other. Our color MD simulation (CMD) is a natural framework to treat such a percolation transition. The meson-exchange potential between quarks, which represent the non-perturbative QCD effects, helps to prevent the system to collapse. Although techniques are quite different, physical idea behind CMD with the meson-exchange potential is quite similar in spirit with the quark-meson coupling (QMC) model extensively used to study the nuclear matter from quarks .
We start with a total wave function of the system $`\mathrm{\Psi }`$ as a direct product of single-particle quark wave-functions. The antisymmetrization is neglected at present.
$`\mathrm{\Psi }`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{3A}{\prod }}}\varphi _i(𝐫)\chi _i,`$ (1)
$`\varphi _i(𝐫)`$ $`\equiv `$ $`(\pi L^2)^{-3/4}\mathrm{exp}[-(𝐫-𝐑_i)^2/2L^2+i𝐏_i𝐫],`$ (2)
$`\chi _i`$ $`\equiv `$ $`\left(\begin{array}{c}\mathrm{cos}\alpha _ie^{-i\beta _i}\mathrm{cos}\theta _i\\ \mathrm{sin}\alpha _ie^{+i\beta _i}\mathrm{cos}\theta _i\\ \mathrm{sin}\theta _ie^{i\phi _i}\end{array}\right).`$ (3)
Here $`A`$ is the total baryon number of the system, $`\varphi _i`$ is a Gaussian wave packet centered around $`𝐑_i`$ with momentum $`𝐏_i`$ and a fixed width $`L`$. $`\chi _i`$ is a coherent state in the color SU(3) space parametrized by four angles, $`\alpha _i,\beta _i,\theta _i`$ and $`\phi _i`$. Although general SU(3) vector has six real parameters, the normalization condition $`|\chi _i|=1`$ and the unphysical global phase reduce the number of genuine parameters to four. Note that SU(2) spin coherent state parametrized by two angles has been used in the MD simulation of the many-nucleon system with spin .
Time evolution of the system is given by solving the equations of motion for {$`𝐑_i`$, $`𝐏_i`$, $`\alpha _i`$, $`\beta _i`$, $`\theta _i`$, $`\phi _i`$} obtained from the time-dependent variational principle
$`{\displaystyle \frac{\partial \mathcal{L}}{\partial q}}`$ $`=`$ $`{\displaystyle \frac{d}{dt}}{\displaystyle \frac{\partial \mathcal{L}}{\partial \dot{q}}},`$ (4)
together with the classical Lagrangian,
$`\mathcal{L}`$ $`=`$ $`\langle \mathrm{\Psi }|i\hbar {\displaystyle \frac{d}{dt}}-\widehat{H}|\mathrm{\Psi }\rangle `$ (5)
$`=`$ $`{\displaystyle \underset{i}{\sum }}[-\dot{𝐏}_i𝐑_i+\hbar \dot{\beta }_i\mathrm{cos}2\alpha _i\mathrm{cos}^2\theta _i-\hbar \dot{\phi }_i\mathrm{sin}^2\theta _i]-H,`$ (6)
where $`H=\langle \mathrm{\Psi }|\widehat{H}|\mathrm{\Psi }\rangle `$. The explicit form of the equations of motion reads:
$`\dot{𝐑}_i`$ $`=`$ $`{\displaystyle \frac{\partial H}{\partial 𝐏_i}},\dot{𝐏}_i=-{\displaystyle \frac{\partial H}{\partial 𝐑_i}},`$ (7)
$`\dot{\beta }_i`$ $`=`$ $`-{\displaystyle \frac{1}{2\hbar \mathrm{sin}2\alpha _i\mathrm{cos}^2\theta _i}}{\displaystyle \frac{\partial H}{\partial \alpha _i}},`$ (8)
$`\dot{\theta }_i`$ $`=`$ $`{\displaystyle \frac{1}{2\hbar \mathrm{sin}\theta _i\mathrm{cos}\theta _i}}{\displaystyle \frac{\partial H}{\partial \phi _i}},`$ (9)
$`\dot{\alpha }_i`$ $`=`$ $`{\displaystyle \frac{1}{2\hbar \mathrm{sin}2\alpha _i\mathrm{cos}^2\theta _i}}{\displaystyle \frac{\partial H}{\partial \beta _i}}-{\displaystyle \frac{\mathrm{cos}2\alpha _i}{2\hbar \mathrm{sin}2\alpha _i\mathrm{cos}^2\theta _i}}{\displaystyle \frac{\partial H}{\partial \phi _i}},`$ (10)
$`\dot{\phi }_i`$ $`=`$ $`-{\displaystyle \frac{1}{2\hbar \mathrm{sin}\theta _i\mathrm{cos}\theta _i}}{\displaystyle \frac{\partial H}{\partial \theta _i}}+{\displaystyle \frac{\mathrm{cos}2\alpha _i}{2\hbar \mathrm{sin}2\alpha _i\mathrm{cos}^2\theta _i}}{\displaystyle \frac{\partial H}{\partial \alpha _i}}.`$ (11)
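A schematic time step implementing these equations is sketched below (our own illustration, not the authors' code; the gradients of $`H`$ are taken numerically, the user-supplied $`H`$ is assumed to accept the packet centers, momenta and the four color angles as keyword arguments, and a simple Euler update stands in for a production integrator).

```python
# Sketch of one CMD update step for eqs. (7)-(11) (assumptions as stated above:
# numerical gradients, explicit Euler, hbar = 1 by default).
import numpy as np

def num_grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = h
        g[i] = (f(x + dx) - f(x - dx)) / (2.0 * h)
    return g

def cmd_step(state, H, dt, hbar=1.0):
    R, P, al, be, th, ph = (state[k].copy() for k in ("R", "P", "al", "be", "th", "ph"))
    def H_of(**kw):
        s = dict(R=R, P=P, al=al, be=be, th=th, ph=ph)
        s.update(kw)
        return H(**s)
    dH_dR, dH_dP = num_grad(lambda x: H_of(R=x), R), num_grad(lambda x: H_of(P=x), P)
    dH_dal = num_grad(lambda x: H_of(al=x), al)
    dH_dbe = num_grad(lambda x: H_of(be=x), be)
    dH_dth = num_grad(lambda x: H_of(th=x), th)
    dH_dph = num_grad(lambda x: H_of(ph=x), ph)
    c = 2.0 * hbar * np.sin(2.0 * al) * np.cos(th) ** 2   # color prefactors
    d = 2.0 * hbar * np.sin(th) * np.cos(th)
    return dict(
        R=R + dt * dH_dP,
        P=P - dt * dH_dR,
        be=be - dt * dH_dal / c,
        th=th + dt * dH_dph / d,
        al=al + dt * (dH_dbe - np.cos(2.0 * al) * dH_dph) / c,
        ph=ph + dt * (-dH_dth / d + np.cos(2.0 * al) * dH_dal / c),
    )
```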
As for the color-dependent quark-quark interaction, we employ the one-gluon exchange and the linear confining potentials. To take into account the essential part of the nuclear force, namely, the state independent short range repulsion and the medium range attraction, we include the $`\sigma `$+$`\omega `$ meson-exchange potential acting between quarks following ref.. The total Hamiltonian is written as
$`\widehat{H}`$ $`=`$ $`{\displaystyle \underset{i}{\sum }}\sqrt{m^2+\widehat{𝐩_i}^2}+{\displaystyle \frac{1}{2}}{\displaystyle \underset{i,j\ne i}{\sum }}\widehat{V}_{ij},`$ (12)
$`\widehat{V}_{ij}`$ $`=`$ $`{\displaystyle \underset{a=1}{\overset{8}{\sum }}}t_i^at_j^aV_\mathrm{C}(\widehat{r}_{ij})+V_\mathrm{M}(\widehat{r}_{ij}),`$ (13)
$`V_\mathrm{C}(r)`$ $`\equiv `$ $`Kr-{\displaystyle \frac{\alpha _\mathrm{s}}{r}},`$ (14)
$`V_\mathrm{M}(r)`$ $`\equiv `$ $`-{\displaystyle \frac{g_{\sigma q}^2}{4\pi }}{\displaystyle \frac{e^{-\mu _\sigma r}}{r}}+{\displaystyle \frac{g_{\omega q}^2}{4\pi }}{\displaystyle \frac{e^{-\mu _\omega r}}{r}},`$ (15)
where $`t^a=\lambda ^a/2`$ with $`\lambda ^a`$ being the Gell-Mann matrices, $`V_\mathrm{C}`$ is the confinement and one-gluon exchange terms, and $`V_\mathrm{M}`$ is the meson exchange term . We introduce a smooth infrared cutoff to the confining potential in $`V_\mathrm{C}(r)`$ to prevent the long-range interaction beyond the size of the box in which we carry out MD simulations. We choose the cutoff scale $`r_{\mathrm{cut}}=3.0`$ fm, which is approximately half of the length of the box. Typical values of the parameters in the quark model for baryons read , $`m=350`$ MeV (the constituent-quark mass), $`\alpha _\mathrm{s}=1.25`$ (the QCD fine structure constant), $`K=0.75`$ GeV/fm (the string tension). The meson-quark coupling constants $`g_{\sigma (\omega )q}`$ are estimated from the meson-nucleon couplings $`g_{\sigma (\omega )N}`$ using the additive quark picture: $`g_{\sigma q}=g_{\sigma N}/3=3.53`$ and $`g_{\omega q}=g_{\omega N}/3=5.85`$. The meson masses are taken to be $`\mu _\omega =782`$ MeV and $`\mu _\sigma =550`$ MeV.
Some comments are in order here on the evaluation of the matrix elements $`H=\langle \mathrm{\Psi }|\widehat{H}|\mathrm{\Psi }\rangle `$.
(i) We have not taken into account the anti-symmetrization of quarks in the total wave function. Because of this, the interaction between quarks in a color-singlet baryon is underestimated by factor 4 when one takes the matrix element of $`t_i^at_j^a`$. To correct this, we use effective couplings $`K^{\mathrm{eff}}=4K`$ and $`\alpha _s^{\mathrm{eff}}=4\alpha _s`$ throughout our CMD simulation.
(ii) $`L`$ (the size of the quark wave-packet) is chosen to be 0.35 fm (corresponding to the r.m.s. radius of the constituent quark of 0.43 fm). This is consistent with the typical value expected from the dynamical breaking of chiral symmetry . This value is to be used for taking the matrix element of the gluonic interaction $`V_\mathrm{C}`$. On the other hand, the meson-quark coupling is intrinsically non-local, since $`\sigma `$ and $`\omega `$ have their own quark structure. Besides, the meson-exchange interaction between nucleons with the nucleon form-factor should be properly reproduced by the superposition of the meson-exchange interaction between quarks. To take into account these facts, we use $`L^{\mathrm{eff}}=0.7`$ fm (corresponding to the r.m.s. radius of 0.86 fm) in taking the matrix element of $`V_\mathrm{M}`$.
(iii) $`H=\langle \mathrm{\Psi }|\widehat{H}|\mathrm{\Psi }\rangle `$ generally contains a kinetic energy originating from momentum variances of wave packets. However, when the width of the wave packet is fixed as a time-independent parameter, this kinetic energy is spurious and neglected in the present calculation.
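For orientation, the bare radial potentials of eqs. (13)–(15), with the effective couplings of comment (i), are tabulated by the sketch below (ours; the Gaussian smearing over the wave packets, i.e. the folding with $`L`$ and $`L^{\mathrm{eff}}`$, is deliberately left out, so these are only the underlying functions, not the matrix elements used in the simulation).

```python
# Radial quark-quark potentials (assumption: bare functions without wave-packet
# folding).  Units: MeV and fm, with hbar*c used to convert 1/fm to MeV.
import numpy as np

HBARC = 197.327                         # MeV fm
ALPHA_S_EFF = 4.0 * 1.25                # effective couplings of comment (i)
K_EFF = 4.0 * 0.75 * 1000.0             # MeV/fm
G_SIGMA_Q, G_OMEGA_Q = 3.53, 5.85
MU_SIGMA, MU_OMEGA = 550.0 / HBARC, 782.0 / HBARC   # meson masses in 1/fm

def v_conf(r, color_factor=-2.0 / 3.0):
    """color_factor = <t_i . t_j>; -2/3 for a quark pair in a color-singlet baryon."""
    return color_factor * (K_EFF * r - ALPHA_S_EFF * HBARC / r)

def v_meson(r):
    return HBARC * (-G_SIGMA_Q**2 / (4.0 * np.pi) * np.exp(-MU_SIGMA * r) / r
                    + G_OMEGA_Q**2 / (4.0 * np.pi) * np.exp(-MU_OMEGA * r) / r)

for r in (0.3, 0.6, 1.0, 2.0):          # fm
    print(f"r = {r:.1f} fm   color*V_C = {v_conf(r):8.1f} MeV   V_M = {v_meson(r):8.1f} MeV")
```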
Let us now describe how to simulate the simplest three-quark system, namely the nucleon, in CMD. We first search for a three-quark state obeying the color neutrality condition
$`{\displaystyle \underset{i=1}{\overset{3}{\sum }}}\langle \chi _i|\lambda ^a|\chi _i\rangle =0(a=1,\dots ,8).`$ (16)
This is satisfied by solving a cooling equation of motion in the color space with a potential proportional to $`\sum _{i,j\ne i}\sum _{a=1}^8\langle \chi _i|\lambda ^a|\chi _i\rangle \langle \chi _j|\lambda ^a|\chi _j\rangle `$ with random initial values of $`\chi _i`$. During this cooling procedure, the spatial coordinates of quarks are fixed, e.g. at the three corners of a triangle.
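A bare-bones version of this color cooling is sketched below (our own gradient-descent illustration with an assumed step size; the paper's cooling equation of motion is not reproduced, only the idea of relaxing the summed color charge of three coherent states to zero while keeping each state normalized).

```python
# Gradient-descent sketch of the color-neutrality cooling: minimize
# C = sum_a [ sum_i <chi_i|lambda^a|chi_i> ]^2 over three normalized SU(3) vectors.
import numpy as np

def gell_mann():
    l = np.zeros((8, 3, 3), dtype=complex)
    l[0][0, 1] = l[0][1, 0] = 1
    l[1][0, 1], l[1][1, 0] = -1j, 1j
    l[2][0, 0], l[2][1, 1] = 1, -1
    l[3][0, 2] = l[3][2, 0] = 1
    l[4][0, 2], l[4][2, 0] = -1j, 1j
    l[5][1, 2] = l[5][2, 1] = 1
    l[6][1, 2], l[6][2, 1] = -1j, 1j
    l[7][0, 0] = l[7][1, 1] = 1.0 / np.sqrt(3)
    l[7][2, 2] = -2.0 / np.sqrt(3)
    return l

LAM = gell_mann()
rng = np.random.default_rng(3)

def charges(chi):                       # sum_i <chi_i|lambda^a|chi_i>, a = 1..8
    return np.array([sum(np.vdot(c, L @ c).real for c in chi) for L in LAM])

chi = [v / np.linalg.norm(v)
       for v in rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))]
eta = 0.05
for _ in range(2000):
    Q = charges(chi)
    # Wirtinger gradient of C with respect to chi_i*: 2 * sum_a Q_a lambda^a chi_i
    chi = [c - eta * 2.0 * sum(q * (L @ c) for q, L in zip(Q, LAM)) for c in chi]
    chi = [c / np.linalg.norm(c) for c in chi]          # keep |chi_i| = 1

print("largest residual color charge:", np.abs(charges(chi)).max())
```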
If we start with three quarks in the triangular configuration obtained above and kick each quark with the same amount of energy while keeping the total momentum zero, the quarks start to have a breathing motion in a 2-dimensional plane. Since the total color is conserved, the color-neutrality is maintained during this time evolution.
By an initial kick that gives a time-averaged kinetic energy of 74 MeV, the total energy of the nucleon becomes 1269 MeV. Accordingly, the r.m.s. radius of the nucleon reads 0.46 fm in terms of $`L`$ (which corresponds to the size of the quark-core of the nucleon) or 0.87 fm in terms of $`L^{\mathrm{eff}}`$ (which corresponds to the physical nucleon size for the meson-exchange interaction). The “nucleon” here is certainly a semiclassical object which should be regarded as a mixture of the ground and excited states of three quarks. We use a collection of these nucleons as an initial condition for the CMD simulation of many quarks. Since the interaction among quarks in matter will eventually randomize the internal motion of quarks in the initial nucleon, the way we kick the quarks does not matter for the final result.
Now, let us study the phase change from the confined hadronic system to the deconfined quark matter. We simulate the infinite matter under the periodic boundary condition and see how the system responds to the change of the baryon density as well as to the energy deposition from outside.
To start with, nucleons constructed as above are randomly distributed in a box with the periodic boundary condition. At this stage, the total system is in its excited state. The minimum energy state of matter is obtained by the frictional cooling procedure, namely we solve a cooling equation of motion with frictional terms. During the cooling, spatial and color motion of quarks in the nucleon are artificially frozen, and the following equations are solved:
$`\dot{𝐑}_i={\displaystyle \frac{1}{3}}{\displaystyle \sum _{j\in \{i\}}}\left[{\displaystyle \frac{\partial H}{\partial 𝐏_j}}+\mu _R{\displaystyle \frac{\partial H}{\partial 𝐑_j}}\right],`$ (17)
$`\dot{𝐏}_i={\displaystyle \frac{1}{3}}{\displaystyle \sum _{j\in \{i\}}}\left[-{\displaystyle \frac{\partial H}{\partial 𝐑_j}}+\mu _P{\displaystyle \frac{\partial H}{\partial 𝐏_j}}\right],`$ (18)
$`\dot{\alpha }_i=\dot{\beta }_i=\dot{\theta }_i=\dot{\phi }_i=0,`$ (19)
where $`\mu _R`$ and $`\mu _P`$ are damping coefficients with negative values and $`\{i\}`$ means the set of three quarks in the nucleon to which quark $`i`$ belongs. Under this cooling procedure, the system approaches a stable configuration with minimum energy. The system does not collapse, owing to the repulsive part of the meson exchange potential $`V_\mathrm{M}`$.
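For concreteness, the damped equations of motion (17)–(19) can be advanced with a simple explicit scheme. The following is only a minimal sketch, not the authors' code: the Hamiltonian gradients are left as placeholder callables, quarks are assumed to be stored as consecutive triplets belonging to the same nucleon, and a plain Euler step is used.

```python
import numpy as np

def cooling_step(R, P, grad_R, grad_P, mu_R, mu_P, dt):
    """One explicit step of the frictional cooling equations (17)-(18).

    R, P           : (N, 3) quark positions and momenta, N a multiple of 3,
                     with quarks 3k, 3k+1, 3k+2 forming one nucleon.
    grad_R, grad_P : callables returning dH/dR_j and dH/dP_j as (N, 3) arrays.
    mu_R, mu_P     : damping coefficients (negative values).
    """
    dHdR, dHdP = grad_R(R, P), grad_P(R, P)
    # (1/3) * sum over the three quarks of one nucleon = mean over the triplet
    nucleon_mean = lambda A: np.repeat(A.reshape(-1, 3, 3).mean(axis=1), 3, axis=0)
    R_dot = nucleon_mean(dHdP + mu_R * dHdR)
    P_dot = nucleon_mean(-dHdR + mu_P * dHdP)
    return R + dt * R_dot, P + dt * P_dot
```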
After the system reached its energy-minimum by the cooling, internal color and spatial motion of quarks are turned on and the normal equation of motion is solved for several tens of fm/c so that the system gets equilibrated. To study the excited state of the system, extra random motion is also given to the nucleons so that the system has a certain excitation energy.
We judge the confinement/deconfinement by the following criterion. If three quarks are within a certain distance $`d_{\mathrm{cluster}}`$ and are white with an accuracy $`\epsilon `$, these quarks are said to be confined. This can be formulated as
$`\{\begin{array}{cc}|𝐑_i-𝐑_j|<d_{\mathrm{cluster}}\phantom{\rule{1em}{0ex}}(i,j=1,2,3),\hfill & \\ {\displaystyle \sum _{a=1}^{8}}\left[{\displaystyle \sum _{i=1}^{3}}\langle \chi _i|\lambda ^a|\chi _i\rangle \right]^2<\epsilon .\hfill & \end{array}`$ (20)
All quarks are checked by this criterion without duplications. The actual numbers we use are $`d_{\mathrm{cluster}}=1`$ fm and $`\epsilon =0.05`$.
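As an illustration, the cluster criterion (20) can be coded directly once the color expectation values $`\langle \chi _i|\lambda ^a|\chi _i\rangle `$ are available. The sketch below is our own assumption-laden version: it loops over all triplets by brute force (adequate only for small systems) and takes the expectation values as a precomputed array rather than evaluating the coherent states.

```python
import numpy as np
from itertools import combinations

def confined_quarks(R, lam, d_cluster=1.0, eps=0.05):
    """Return the set of quark indices classified as confined.

    R   : (N, 3) quark positions (fm).
    lam : (N, 8) precomputed expectation values <chi_i|lambda^a|chi_i>.
    A triplet counts as a confined (white) cluster if all pairwise distances
    are below d_cluster and the summed color charge squared is below eps.
    Each quark is assigned to at most one cluster ("without duplications").
    """
    confined, used = set(), set()
    for i, j, k in combinations(range(len(R)), 3):
        if used & {i, j, k}:
            continue
        trio = (i, j, k)
        dists = [np.linalg.norm(R[a] - R[b]) for a, b in combinations(trio, 2)]
        if max(dists) >= d_cluster:
            continue
        color2 = np.sum(lam[list(trio)].sum(axis=0) ** 2)  # sum_a (sum_i <lambda^a>)^2
        if color2 < eps:
            confined |= set(trio)
            used |= set(trio)
    return confined
```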
Snapshots of matter in equilibrium for different excitation energies per quarks ($`E^{}`$) are displayed in Fig. 1. Quarks in the confined states are shown with thin colors and those in the deconfined state with thick colors. As $`E^{}`$ increases, number of deconfined quarks increases as expected. However, some deconfined quarks still form three-quark clusters even for large $`E^{}`$. This implies that the deconfinement is caused not only by disintegration or percolation of clusters in the coordinate space but also by the color excitation inside each cluster.
Figure 2 shows the “confined ratio of quarks”, $`R`$ ≡ (number of confined quarks)/(total quark number). At $`E^{}=0`$, hadronic matter and quark matter are well characterized by $`R`$, although no sudden transition of $`R`$ between the two phases is observed. For $`E^{}>200`$ MeV/q, $`R`$ is less than 20% for all densities.
To study the “thermal” property of the system, we fit the kinetic energy distribution of quarks by the classical Boltzmann distribution. Then, we can define an effective temperature $`T^{}`$ for given $`E^{}`$. Note that $`T^{}`$ is not really a physical temperature of the system, but is a measure of the averaged kinetic energy per quark. In Fig. 3, plotted is $`T^{}`$ as a function of $`E^{}`$. For $`E^{}>300`$ MeV/q, $`T^{}`$ depends almost linearly on $`E^{}`$ irrespective of baryon density. However, for $`E^{}=100200`$ MeV/q, $`T^{}`$ for low-density matter increases rather slowly as a function of $`E^{}`$. In fact, this corresponds exactly to the region where the confined ratio of low-density matter changes in Fig. 2. This implies that, during the deconfinement process, the energy deposit from outside is consumed to melt the confined clusters (nucleons), which suppresses the effective temperature $`T^{}`$.
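The effective temperature can be extracted, for example, with the simple moment estimator sketched below. This is our own illustration, assuming non-relativistic kinetic energies, for which a classical Boltzmann distribution gives a mean kinetic energy of $`3T^{}/2`$ per quark; the text instead fits the energy histogram, which would give a comparable result.

```python
import numpy as np

def effective_temperature(p, m=0.350):
    """Moment estimator of T* (GeV) from quark momenta p of shape (N, 3)."""
    e_kin = np.sum(p**2, axis=1) / (2.0 * m)   # non-relativistic kinetic energy per quark
    return 2.0 * e_kin.mean() / 3.0            # <E_kin> = (3/2) T* for a Boltzmann gas
```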
In summary, we have proposed a color molecular dynamics (CMD) simulation of the system with many constituent quarks. The system is approximated by the product of the wave packets with SU(3) color coherent state. Adopting the effective interaction between quarks, we study the transition from the nuclear matter to quark matter under the periodic boundary condition. At low baryon density ($`\rho `$) and low excitation energy ($`E^{}`$), the system is in the confined phase where most of the quarks are hidden inside the color singlet nucleons. However, as we increase $`\rho `$ and/or $`E^{}`$, the partial deconfinement takes place due to the disintegration of color-singlet clusters both in the coordinate space and in the color space. This can be seen explicitly by the confined ratio and effective temperature in Fig.2 and Fig.3.
The results of this paper are still in the qualitative level. The refinement of interaction parameters, and the inclusion of flavor and spin degrees of freedom as well as anti-quarks are necessary for more quantitative discussions. The use of the antisymmetrized quark wave function is also an important future problem . The medium modification of the constituent-quark mass should be also considered in relation to the partial restoration of chiral symmetry. In spite of all these reservations, the method proposed in this paper gives a starting point to study the statistical feature of the hadron-quark transition as well as to examine finite nuclei and the dynamics of heavy-ion collisions. Some preliminary simulation on the latter problem has been reported in .
The authors thank Y. Nara, V. N. Kondratyev, S. Chikazumi, K. Niita, S. Chiba, T. Kido and A. Iwamoto for useful suggestions and stimulating discussions. T. H. was partly supported by Grant-in-Aid for Scientific Research No. 10874042 of the Japanese Ministry of Education, Science and Culture, and by the Sumitomo Foundation (Grant no. 970248).
# The Dynamics of Off-center Reflection
## Background and Definition
We study the dynamics of a two-parameter family of circle maps $`R_{r,\mathrm{\Omega }}:𝕊^1𝕊^1`$ called the off-center reflection. When $`\mathrm{\Omega }=\pi `$, this map is a one-dimensional analog of the general map raised in \[Y, problem 21\], which is geometrically a reflection in the circle. For other values of $`\mathrm{\Omega }`$, it can be seen as a reflection with a deviation between the reflected and incident angles. Iterations of this map are not the natural sucessive reflections in the circle; nevertheless, this map is interesting for various reasons. The off-center reflection can also be seen as a perturbation (with small $`r>0`$) of rotation by $`\mathrm{\Omega }`$. In fact, it has an analytical form which extends the well-known Arnold circle map, \[Ar1\]. Its dynamics is related to the perturbation properties of Mathieu type differential equation, \[Ar2\]. Furthermore, when the perturbation parameter $`r`$ goes to 1, the map goes to another famous circle map, the doubling map.
The off-center reflection is introduced in \[AL\] by the following geometric description. Fix a point $`L`$ inside the unit circle $`𝕊^1`$. For a point $`\varphi 𝕊^1`$, a ray is emitted from $`L`$ to $`\varphi `$. This ray is “reflected” to hit $`𝕊^1`$ again at a point, denote $`R_{r,\mathrm{\Omega }}(\varphi )`$ in the future. This point is defined to be the image of $`\varphi `$ under the map. It is quoted “reflected” because the “reflected” angle has a constant deviation $`\frac{\pi \mathrm{\Omega }}{2}`$ from the incident angle $`\iota (\varphi )`$. To have the map geometrically well-defined, there is some restriction on $`\mathrm{\Omega }`$. However, we will see later that it is analytically meaningful for other $`\mathrm{\Omega }`$. In fact, it is sufficient to consider $`\mathrm{\Omega }(\pi ,\pi ]`$. Furthermore, since the action has certain symmetries, it is no loss of generality to assume the point source $`L`$ at $`(r,0)`$ with $`0r<1`$. This is why the off-center reflection is given by the two parameters $`(r,\mathrm{\Omega })`$.
In \[AL\], particular interest is placed on $`R_{r,\pi }`$, i.e., when the reflected angle equals the incident angle. There, the link between dynamics and contact geometry of the map is studied. Here, we deal with the dynamical properties of the family with some focus on $`\mathrm{\Omega }=0,\pi `$. It is because in these two cases, the off-center reflection map repects the symmetry of the circle and symmetric cycles may occur. We are particularly interested in when symmetric cycles of the map break into asymmetric ones. The properties presented in this article may be considered the first steps to understand the dynamics of the map. It is expected that deeper studies may bring forth more understanding to general circle maps.
In §1, we will introduce the basic analytical properties of the map. In §2, the attracting orbits, especially the symmetric ones, of the map are investigated. Then, finally in §3, the bifurcation of the map is looked into. In particular, we give an explanation of how and when the symmetric orbits go through a period preserving pitch-fork bifurcation. The analysis in \[Br\] of similar behavior among certain cubic polynomials is borrowed.
## 1. Analysis
For our purpose, we consider $`𝕊^1\subset \mathbb{R}^2`$ and it is covered by $`\mathbb{R}`$ under the exponential map $`x\mapsto e^{𝕚x}`$. Then the off-center reflection
$$\varphi R_{r,\mathrm{\Omega }}(\varphi ):𝕊^1𝕊^1$$
has a unique continuous lifting, $`\stackrel{~}{R}_{r,\mathrm{\Omega }}:\mathbb{R}\to \mathbb{R}`$, which takes $`0`$ to $`\mathrm{\Omega }`$. Since $`\stackrel{~}{R}_{r,\mathrm{\Omega }}(x+2\pi )=\stackrel{~}{R}_{r,\mathrm{\Omega }}(x)+2\pi `$, we may focus our attention on the interval $`(-\pi ,\pi ]`$. Let the incident angle be denoted by $`\iota (x)`$ for $`x\in (-\pi ,\pi ]`$. Since $`\iota (x)\to 0`$ as $`x\to \pm \pi `$, it defines a continuous $`2\pi `$-periodic function on $`\mathbb{R}`$, which is also denoted as $`\iota (x)`$. In fact, it can be written in terms of the principal argument (with values between $`-\pi `$ and $`\pi `$) as
$$\iota (x)=\mathrm{Arg}(\mathrm{cos}x-r+𝕚\mathrm{sin}x)-x;$$
and $`\stackrel{~}{R}_{r,\mathrm{\Omega }}(x)=x+\mathrm{\Omega }-2\iota (x)`$. The first few derivatives of $`\stackrel{~}{R}`$ are listed below.
$`\iota ^{\prime }(x)`$ $`={\displaystyle \frac{r(\mathrm{cos}x-r)}{(\mathrm{cos}x-r)^2+\mathrm{sin}^2x}};`$
$`\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(x)`$ $`=1-2\iota ^{\prime }(x)={\displaystyle \frac{1-4r\mathrm{cos}x+3r^2}{(\mathrm{cos}x-r)^2+\mathrm{sin}^2x}}`$
$`\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime \prime }(x)`$ $`={\displaystyle \frac{2r(1-r^2)\mathrm{sin}x}{\left[(\mathrm{cos}x-r)^2+\mathrm{sin}^2x\right]^2}}`$
$`\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime \prime \prime }(x)`$ $`={\displaystyle \frac{2r(1-r^2)\left[(1+r^2)\mathrm{cos}x-2r(1+\mathrm{sin}^2x)\right]}{\left[(\mathrm{cos}x-r)^2+\mathrm{sin}^2x\right]^3}}.`$
This incident angle has a series expression given in \[AL\], $`\iota (x)={\displaystyle \underset{k=1}{\overset{\mathrm{}}{}}}{\displaystyle \frac{r^k}{k}}\mathrm{sin}(kx).`$ Therefore, the lift of $`R_{r,\mathrm{\Omega }}`$ mapping $`0`$ to $`\mathrm{\Omega }`$ is given by
$`\stackrel{~}{R}_{r,\mathrm{\Omega }}(x)`$ $`=x+\mathrm{\Omega }-2\iota (x)`$
$`=x+\mathrm{\Omega }-2{\displaystyle \sum _{k=1}^{\infty }}{\displaystyle \frac{r^k}{k}}\mathrm{sin}(kx).`$
From this, we see that the Arnold circle map $`x\mapsto x+\mathrm{\Omega }-\epsilon \mathrm{sin}x`$ can be seen as a truncated version of the off-center reflection. The technique of our analysis in this paper may also be adapted to a similar study on the Arnold circle map.
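For numerical experiments it is convenient to evaluate the lift directly from the closed form and to compare it with the truncated series. The snippet below is our own illustration, not part of \[AL\]; the principal argument is taken with a complex `angle`, which matches the definition for $`x\in (-\pi ,\pi ]`$.

```python
import numpy as np

def iota(x, r):
    """Incident angle iota(x) = Arg(cos x - r + i sin x) - x, for x in (-pi, pi]."""
    return np.angle(np.cos(x) - r + 1j * np.sin(x)) - x

def R_tilde(x, r, omega):
    """Lift of the off-center reflection: x + Omega - 2*iota(x)."""
    return x + omega - 2.0 * iota(x, r)

def R_truncated(x, r, omega, kmax):
    """Truncation of the sine series; kmax = 1 gives the Arnold circle map."""
    k = np.arange(1, kmax + 1)
    return x + omega - 2.0 * np.sum((r**k / k) * np.sin(k * x))

# e.g. R_tilde(0.3, 0.4, np.pi) and R_truncated(0.3, 0.4, np.pi, 60) agree to roundoff
```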
We first establish a property of the map that is useful for its dynamics. The dynamics of the off-center reflection in the range $`0\le r<1/3`$ is relatively simple. The following proposition is useful for studying the range $`r>1/3`$.
###### Proposition 1.1.
For $`1/3<r<1`$ and all $`\mathrm{\Omega }`$, $`R_{r,\mathrm{\Omega }}`$ is a map with negative Schwarzian derivative.
###### Proof.
The Schwarzian derivative of $`\stackrel{~}{R}_{r,\mathrm{\Omega }}`$ is given by
$$\frac{\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime \prime \prime }(x)}{\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(x)}-\frac{3}{2}\left(\frac{\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime \prime }(x)}{\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(x)}\right)^2=\frac{r(1-r^2)H(r,\mathrm{cos}x)}{(1-4r\mathrm{cos}x+3r^2)^2(1-2r\mathrm{cos}x+r^2)^2}$$
where
$`H(r,\mathrm{cos}x)`$ $`=(2+28r^2+6r^4)\mathrm{cos}x`$
$`-r\left[13+19r^2-(1-r^2)\mathrm{cos}2x+4r\mathrm{cos}3x\right]`$
$`=-(14r+18r^3)+(2+40r^2+6r^4)\mathrm{cos}x`$
$`+2r(1-r^2)\mathrm{cos}^2x-16r^2\mathrm{cos}^3x`$
Let $`y=\mathrm{cos}x`$ and consider $`H(r,y)`$ for $`y\in [-1,1]`$. We have
$`H(r,-1)`$ $`=-2(1+r)^3(1+3r)`$
$`=-2-12r-24r^2-20r^3-6r^4<0;`$
$`H(r,1)`$ $`=2(1-r)^3(1-3r)<0,\text{for }1/3<r<1\text{.}`$
Moreover, its derivative with respect to $`y`$ satisfies
$`\partial _yH`$ $`=(2+40r^2+6r^4)+4r(1-r^2)y-48r^2y^2;`$
and for $`1/3<r<1`$,
$`\partial _yH(r,-1)`$ $`=2(1+r)(1-r^2)(1-3r)<0,`$
$`\partial _yH(r,1)`$ $`=2(1-r)(1-r^2)(1+3r)>0.`$
We see that, inside $`(-1,1)`$, the cubic polynomial $`y\mapsto H(r,y)`$ can only have a single critical point and it is a minimum. Hence $`H(r,y)\le \mathrm{max}\{H(r,-1),H(r,1)\}<0`$. $`\mathrm{}`$
## 2. Attracting Orbits
It is easy to see from our earlier formula that $`\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(x)\ge 0`$ for $`0\le r\le \frac{1}{3}`$, with equality only when $`r=\frac{1}{3}`$ and $`\mathrm{cos}x=1`$. Thus, $`R_{r,\mathrm{\Omega }}`$ is a homeomorphism for $`0\le r\le 1/3`$, so the dynamics is trivial. On the other hand, $`R_{r,\mathrm{\Omega }}`$ is only a degree 1 map for $`r>1/3`$. We explore its dynamics in the coming sections.
In the study of periodic orbits of $`R_{r,\mathrm{\Omega }}`$, the information about the function $`\iota `$ is often helpful. Since $`\iota `$ is a $`2\pi `$-periodic odd function, it is sufficient to know its properties in the interval $`[0,\pi ]`$. To be precise, $`\iota (x)>0`$ and is concave down for $`x\in (0,\pi )`$. Its maximum value of $`\pi /2-a_r`$ is attained at $`a_r`$, where $`a_r`$ is the angle satisfying $`0\le a_r\le \pi /2`$ and $`\mathrm{cos}a_r=r`$. A picture of its graph will be helpful to see its properties.
Furthermore, when $`r`$ varies from $`0`$ to $`1`$, $`\iota (x)`$ varies from the constant zero function to a discontinuous linear function. These will be useful in calculations involving periodic orbits of $`R_{r,\mathrm{\Omega }}`$. For example, one may determine the birth of fixed point according to this knowledge of $`\iota `$.
###### Proposition 2.1.
For all $`r`$ and $`\mathrm{\Omega }\in [\pi -2a_r,\pi +2a_r]`$, the map $`R_{r,\mathrm{\Omega }}`$ has no fixed point, and a saddle-node bifurcation occurs at $`\left|\mathrm{\Omega }-\pi \right|=2a_r`$, that is, when $`r`$ is the cosine of the deviation between the incident and reflected angles.
###### Proof.
To find fixed points of $`R_{r,\mathrm{\Omega }}`$, one tries to solve the equation
$$\stackrel{~}{R}_{r,\mathrm{\Omega }}(x)=x+\mathrm{\Omega }-2\iota (x)=x\phantom{\rule{0.3em}{0ex}}mod\phantom{\rule{0.3em}{0ex}}2\pi ,$$
which can be reduced to
$$\frac{\mathrm{\Omega }}{2}=\iota (x)mod\pi .$$
Since $`-\iota (a_r)\le \iota (x)\le \iota (a_r)`$, for $`\mathrm{\Omega }\in (-\pi ,\pi ]`$ the equation has a solution if and only if
$$-\frac{\pi }{2}+a_r\le \frac{\mathrm{\Omega }}{2}\le \frac{\pi }{2}-a_r.$$
The fixed point of $`\stackrel{~}{R}_{r,\mathrm{\Omega }}(x)`$ at the boundary parameter values $`\mathrm{\Omega }=\pm (\pi -2a_r)`$ occurs at $`x=a_r`$. It is easy to see that for $`0<r<1`$ and any $`\mathrm{\Omega }`$, $`\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(a_r)=1`$ and $`\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime \prime }(a_r)\ne 0`$, so as $`(r,\mathrm{\Omega })`$ crosses this boundary, a saddle node bifurcation occurs. $`\mathrm{}`$
###### Corollary 2.2.
Let $`a_r`$ be as above and let $`b_r\in (0,a_r)`$ be such that $`\mathrm{cos}b_r={\displaystyle \frac{1+2r^2}{3r}}`$. The region where $`R_{r,\mathrm{\Omega }}`$ has an attracting fixed point is
$$\{(r,\mathrm{\Omega }):2\iota (b_r)<\left|\mathrm{\Omega }\right|<2\iota (a_r)\}.$$
In fact, the equation $`2\iota (b_r)=\left|\mathrm{\Omega }\right|`$ determines the generic values of $`r`$ for the happening of period-doubling bifurcation.
###### Proof.
If $`x\in (-\pi ,\pi ]`$ corresponds to an attracting fixed point of $`R_{r,\mathrm{\Omega }}`$, we have $`\stackrel{~}{R}_{r,\mathrm{\Omega }}(x)=x\phantom{\rule{0.3em}{0ex}}mod\phantom{\rule{0.3em}{0ex}}2\pi `$ and $`\left|\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(x)\right|=\left|1-2\iota ^{\prime }(x)\right|<1`$. From the expression of $`\iota ^{\prime }(x)`$, this is equivalent to
$$\mathrm{cos}a_r=r<\mathrm{cos}x<\frac{1+2r^2}{3r}.$$
Then, $`x\in (-a_r,-b_r)\cup (b_r,a_r)`$. Since $`\iota (x)`$ is increasing on both $`(-a_r,-b_r)`$ and $`(b_r,a_r)`$, $`\mathrm{\Omega }/2`$ lies in the intervals defined by the image under $`\iota `$, namely, $`\iota (b_r)<\left|\mathrm{\Omega }/2\right|<\iota (a_r)`$. To see the period doubling bifurcation, it is sufficient to show
$$\frac{\partial (\stackrel{~}{R}_{r,\mathrm{\Omega }}^2)^{\prime }}{\partial r}\bigg|_{x=\pm b_r}\ne 0,\text{ along the curves }\mathrm{\Omega }=\pm 2\iota (b_r).$$
Here we need some calculations which will also be useful in the future.
###### Lemma .
We have
$`{\displaystyle \frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}(x)}{\partial r}}`$ $`={\displaystyle \frac{-2\mathrm{sin}x}{1-2r\mathrm{cos}x+r^2}},`$
$`{\displaystyle \frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(x)}{\partial r}}`$ $`={\displaystyle \frac{4r-2(1+r^2)\mathrm{cos}x}{(1-2r\mathrm{cos}x+r^2)^2}}.`$
Furthermore,
$$\frac{\partial (\stackrel{~}{R}_{r,\mathrm{\Omega }}^2)^{\prime }(x)}{\partial r}=\frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(x)}{\partial r}\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(\stackrel{~}{R}_{r,\mathrm{\Omega }}(x))+\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(x)\frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }}{\partial r}\bigg|_{\stackrel{~}{R}_{r,\mathrm{\Omega }}(x)}\frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}(x)}{\partial r}.$$
In particular, if $`\stackrel{~}{R}_{r,\mathrm{\Omega }}(x)=x`$ or $`\stackrel{~}{R}_{r,\mathrm{\Omega }}(x)=-x`$, one has
$$\frac{\partial (\stackrel{~}{R}_{r,\mathrm{\Omega }}^2)^{\prime }(x)}{\partial r}=\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(x)\frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(x)}{\partial r}\left[1+\frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}(x)}{\partial r}\right].$$
###### Proof of the lemma.
The first two results only require simple calculus. The third one is a repeated application of the chain rule. Then, using that both $`\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }`$ and $`{\displaystyle \frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }}{\partial r}}`$ are even functions of $`x`$, the last result follows. $`\mathrm{}`$
At $`\mathrm{\Omega }=\pm 2\iota (b_r)`$, we have $`\stackrel{~}{R}_{r,\mathrm{\Omega }}(b_r)=b_r`$ and $`\stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }(b_r)=-1`$, therefore
$$\frac{\partial (\stackrel{~}{R}_{r,\mathrm{\Omega }}^2)^{\prime }}{\partial r}\bigg|_{x=\pm b_r}=-\frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }}{\partial r}\bigg|_{x=b_r}\left[1+\frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}}{\partial r}\bigg|_{x=b_r}\right]$$
Moreover, for $`r>1/2`$,
$`{\displaystyle \frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}^{\prime }}{\partial r}}\bigg|_{x=b_r}`$ $`={\displaystyle \frac{4r-2(1+r^2)\mathrm{cos}b_r}{(1-2r\mathrm{cos}b_r+r^2)^2}}={\displaystyle \frac{-6(1-2r^2)}{r(1-r^2)}}\ne 0.`$
$`{\displaystyle \frac{\partial \stackrel{~}{R}_{r,\mathrm{\Omega }}}{\partial r}}\bigg|_{x=b_r}`$ $`={\displaystyle \frac{-2\mathrm{sin}b_r}{1-2r\mathrm{cos}b_r+r^2}}={\displaystyle \frac{-2\sqrt{4r^2-1}}{r\sqrt{1-r^2}}}\ne -1,`$
except at $`r=\sqrt{\frac{-15+\sqrt{241}}{2}}\approx 0.5119`$. Thus, generically, period-doubling bifurcation occurs at $`\mathrm{\Omega }=\pm 2\iota (b_r)`$. $`\mathrm{}`$
The above information about fixed points is illustrated by the following picture, in which the grey areas are the region in $`(r,\mathrm{\Omega })`$-plane where fixed points exist. The darker area is where attracting fixed points occur.
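The two boundary curves of that region are easy to tabulate numerically. The sketch below (our own illustration, reusing `iota` from the earlier snippet) returns, for a given $`r`$, the saddle-node value $`2\iota (a_r)=\pi -2a_r`$ and, when $`r>1/2`$, the period-doubling value $`2\iota (b_r)`$.

```python
import numpy as np

def fixed_point_boundaries(r):
    """Boundary values of |Omega| from Proposition 2.1 and Corollary 2.2."""
    a_r = np.arccos(r)
    saddle_node = 2.0 * iota(a_r, r)            # equals pi - 2*a_r
    if r <= 0.5:                                 # b_r only exists for r > 1/2
        return saddle_node, None
    b_r = np.arccos((1.0 + 2.0 * r**2) / (3.0 * r))
    return saddle_node, 2.0 * iota(b_r, r)
```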
We would like to look into maps in the family that respect the symmetry of the circle. Let $`\rho :𝕊^1\to 𝕊^1`$ be the reflection across the real axis. Then $`R_{r,\mathrm{\Omega }}\circ \rho =\rho \circ R_{r,\mathrm{\Omega }}`$ if and only if $`\mathrm{\Omega }=0,\pi `$. This symmetry corresponds to the fact that $`\stackrel{~}{R}_{r,\mathrm{\Omega }}(x)-\mathrm{\Omega }`$ is an odd function.
For $`\mathrm{\Omega }=0,\pi `$, if $`n`$ is the smallest integer such that $`R_{r,\mathrm{\Omega }}^n(\varphi )=\rho (\varphi )\ne \varphi `$, one can easily show that $`\varphi `$ belongs to a periodic orbit of prime period $`2n`$. We call it a symmetric orbit of period $`2n`$. In terms of $`\stackrel{~}{R}_{r,\mathrm{\Omega }}`$, this corresponds to $`\stackrel{~}{R}_{r,\mathrm{\Omega }}^n(x)=-x\ne x\phantom{\rule{0.3em}{0ex}}mod\phantom{\rule{0.3em}{0ex}}2\pi `$. An orbit is asymmetric of period $`2n`$ if $`R_{r,\mathrm{\Omega }}^{2n}(\varphi )=\varphi `$ but $`R_{r,\mathrm{\Omega }}^n(\varphi )\ne \rho (\varphi )`$. If an asymmetric orbit is formed by $`\varphi `$, then another asymmetric one, called the twin orbit, is formed by $`\rho (\varphi )`$. The twin orbit may be itself when $`\rho (\varphi )=\varphi `$. For the off-center reflection, a self-twin asymmetric orbit must be the 2-cycle $`\{e^{𝕚0},e^{𝕚\pi }\}`$. There are numerous articles about symmetric periodic orbits in dynamical systems, especially of continuous type. An early one is \[D\]. Their attention is on the return map of some reversible mechanical system such as the three-body system.
###### Proposition 2.3.
For any $`r>1/3`$ and $`\mathrm{\Omega }=0,\pi `$, the map $`R_{r,\mathrm{\Omega }}`$ has either no attracting orbit, or one symmetric attracting orbit, or two (counting multiplicity) asymmetric twin attracting orbits.
###### Sketch of proof.
We mainly use two properties of $`R_{r,\mathrm{\Omega }}`$ to conclude this. First, each $`R_{r,\mathrm{\Omega }}`$ has negative Schwarzian derivative, the technique of \[CE\] or \[Br\] is applicable. Therefore, once the map $`\stackrel{~}{R}_{r,\mathrm{\Omega }}`$ has an attracting cycle, at least one critical point of it must be attracted to this attracting orbit. Since $`R_{r,\mathrm{\Omega }}`$ has only two critical points, there are at most two attracting orbits. If the attracting orbit is a reflection symmetric one, then this orbit attracts both critical points. If the orbit is an asymmetric one, its twin orbit attracts another critical point. Multiplicity occurs if there is a self-twin orbit. $`\mathrm{}`$
Besides the above general information about symmetric and asymmetric cycles, it is interesting to know two specific cases about it. The first is about self-twin asymmetric 2-cycles.
###### Proposition 2.4.
A single asymmetric 2-cycle for $`R_{r,\mathrm{\Omega }}`$ occurs if and only if $`\mathrm{\Omega }=\pi `$. The orbit is $`\{e^{𝕚0},e^{𝕚\pi }\}`$ which is attracting when $`r<1/\sqrt{5}`$.
###### Proof.
The orbit for a single asymmetric 2-cycle can be obtained by solving the equation
$$\frac{\mathrm{\Omega }\pm \pi }{2}=\iota (x)mod\pi .$$
The only simultaneous solution exists when $`\mathrm{\Omega }=\pi `$ and $`\varphi =e^{𝕚0},e^{𝕚\pi }`$. Then, whether the orbit is attracting can be easily decided by calculating $`\stackrel{~}{R}_{r,\pi }^{}(0)\stackrel{~}{R}_{r,\pi }^{}(\pi )`$. $`\mathrm{}`$
Secondly, one expects that symmetric cycles occur naturally for $`\mathrm{\Omega }=0,\pi `$. Do such cycles exist even if the map is not “symmetric”? The following result provides a partial evidence for the answer.
###### Proposition 2.5.
The map $`R_{r,\mathrm{\Omega }}`$ has a symmetric 2-cycle if and only if $`\mathrm{\Omega }=0,\pi `$. In both cases, the symmetric 2-cycle is unique. The cycle of $`R_{r,0}`$ is attracting while that of $`R_{r,\pi }`$ is repelling.
###### Proof.
Let $`\{e^{𝕚x},e^{-𝕚x}\}`$ be a symmetric 2-cycle, i.e., $`R_{r,\mathrm{\Omega }}(e^{\pm 𝕚x})=e^{\mp 𝕚x}`$. Equivalently,
$`x+{\displaystyle \frac{\mathrm{\Omega }}{2}}`$ $`=\iota (x)mod\pi ,`$
and also,
$`x-{\displaystyle \frac{\mathrm{\Omega }}{2}}`$ $`=\iota (x)mod\pi .`$
If we subtract the second equation from the first one, we obtain that $`\mathrm{\Omega }=0mod\pi `$ and $`\mathrm{\Omega }\in (-\pi ,\pi ]`$. Clearly, $`\mathrm{\Omega }=0,\pi `$.
For $`\mathrm{\Omega }=0`$, the trivial solutions $`x=0,\pi `$ correspond to fixed points of $`R_{r,0}`$. Further calculation on $`\iota `$ shows that the equation $`x=\iota (x)`$ has a nontrivial solution only for $`r>1/2`$. In fact, let $`c_1\in (0,\pi /2)`$ be such that $`\mathrm{cos}c_1={\displaystyle \frac{1}{2r}}`$; then $`\pm c_1`$ form a symmetric 2-cycle. Since $`\stackrel{~}{R}_{r,0}^{\prime }(c_1)\stackrel{~}{R}_{r,0}^{\prime }(-c_1)=\left[{\displaystyle \frac{1-3r^2}{r^2}}\right]^2`$, it follows that the cycle is attracting for $`{\displaystyle \frac{1}{2}}<r<{\displaystyle \frac{1}{\sqrt{2}}}`$. Moreover, since $`\left|\iota (x)\right|\le \pi /2`$, one can only obtain the trivial solutions $`0,\pi `$ from $`x=\iota (x)+k\pi `$ for $`k\ne 0`$.
For $`\mathrm{\Omega }=\pi `$, the above equations always have a solution, for all $`r`$. In fact, the solution within $`(-\pi ,\pi )`$ is given by the following. Let $`c_2\in (\pi /2,\pi )`$ satisfy $`\mathrm{cos}c_2={\displaystyle \frac{1-\sqrt{1+8r^2}}{4r}}`$. We claim that $`c_2`$ and $`2\pi -c_2`$ are solutions to the equations $`x-\frac{\pi }{2}=\iota (x)`$ and $`x+\frac{\pi }{2}=\iota (x)`$ respectively. Taking tangents of both sides and using that $`\iota (x)=\mathrm{Arg}(\mathrm{cos}x-r+𝕚\mathrm{sin}x)-\mathrm{Arg}(\mathrm{cos}x+𝕚\mathrm{sin}x)`$, we have
$$-\frac{\mathrm{cos}x}{\mathrm{sin}x}=\frac{r\mathrm{sin}x}{1-r\mathrm{cos}x}.$$
It follows that $`2r\mathrm{cos}^2x-\mathrm{cos}x-r=0`$. Hence $`\{e^{𝕚c_2},e^{-𝕚c_2}\}`$ is a symmetric 2-cycle. It can be easily checked that
$$\stackrel{~}{R}_{r,\pi }^{\prime }(c_2)=\stackrel{~}{R}_{r,\pi }^{\prime }(2\pi -c_2)=\frac{2(\sqrt{1+8r^2}+3r^2)}{1+\sqrt{1+8r^2}+2r^2}>1.$$
Therefore, the symmetric 2-cycle is repelling. Again, since $`\left|\iota (x)\right|\le \pi /2`$, we do not have other solutions to $`x\pm \frac{\pi }{2}=\iota (x)+k\pi `$ for $`k\ne 0`$. $`\mathrm{}`$
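These closed-form cycles are easy to confirm numerically; the sketch below (our own check, reusing `iota` from the earlier snippet) evaluates the orbit conditions and the multipliers for a sample $`r>1/2`$.

```python
import numpy as np

def check_symmetric_2cycles(r):
    """Verify the symmetric 2-cycles of Proposition 2.5 for 1/2 < r < 1."""
    c1 = np.arccos(1.0 / (2.0 * r))                        # Omega = 0 cycle {c1, -c1}
    err1 = (c1 - 2.0 * iota(c1, r)) + c1                   # R_tilde_{r,0}(c1) + c1, ~ 0
    mult1 = ((1.0 - 3.0 * r**2) / r**2) ** 2
    c2 = np.arccos((1.0 - np.sqrt(1.0 + 8.0 * r**2)) / (4.0 * r))   # Omega = pi cycle
    err2 = (c2 + np.pi - 2.0 * iota(c2, r)) - (2.0 * np.pi - c2)    # ~ 0
    mult2 = 2.0 * (np.sqrt(1 + 8 * r**2) + 3 * r**2) / (1 + np.sqrt(1 + 8 * r**2) + 2 * r**2)
    return err1, err2, mult1, mult2

# check_symmetric_2cycles(0.6): errors ~ 1e-16, mult1 < 1 (attracting), mult2 > 1 (repelling)
```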
## 3. Bifurcation
In this last section, our aim is to understand bifurcations between symmetric and asymmetric orbits of this family of maps. In particular, we would like to address some of the questions on the dynamics of $`R_{r,\pi }`$ raised in \[AL\]. Let us first look at the asymptotic orbit diagrams of the critical points of $`R_{r,0}`$ and $`R_{r,\pi }`$, in which the bifurcations are shown.
Asymptotic orbits of critical points of $`R_{r,0}`$ (low resolution).
Asymptotic orbits of critical points of $`R_{r,\pi }`$ (low resolution).
In both pictures, there is an obvious complementary nature and there are half-branches of bifurcations. These will be explained analytically in the coming propositions.
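Diagrams of this kind can be reproduced with a few lines of code. The sketch below (again reusing `R_tilde` from the snippet in §1) iterates the two critical points, discards a transient, and records the tail of each orbit; the iteration counts are arbitrary choices, not taken from the text.

```python
import numpy as np

def orbit_diagram(omega, r_values, n_transient=2000, n_keep=200):
    """Asymptotic orbits of the two critical points of R_tilde, for each r > 1/3."""
    data = []
    for r in r_values:
        x_c = np.arccos((1.0 + 3.0 * r**2) / (4.0 * r))   # where R_tilde'(x) = 0
        for x in (x_c, -x_c):
            for _ in range(n_transient):
                x = R_tilde(x, r, omega) % (2.0 * np.pi)
            tail = []
            for _ in range(n_keep):
                x = R_tilde(x, r, omega) % (2.0 * np.pi)
                tail.append(x)
            data.append((r, np.array(tail)))
    return data
```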
###### Proposition 3.1.
The asymmetric 2-cycle of $`R_{r,\pi }`$ bifurcates into a symmetric attracting 4-cycle at $`r=1/\sqrt{5}`$. For $`R_{r,0}`$, there is a pitch-fork period preserving bifurcation of the symmetric 2-cycle into asymmetric attracting ones at $`r=1/\sqrt{2}`$.
###### Proof.
For the 2-cycle $`\{e^{𝕚0},e^{𝕚\pi }\}`$ of $`R_{r,\pi }`$, the derivatives are given by
$`\stackrel{~}{R}_{r,\pi }^{\prime }(0)`$ $`={\displaystyle \frac{1-3r}{1-r}};`$
$`\stackrel{~}{R}_{r,\pi }^{}(\pi )`$ $`={\displaystyle \frac{1+3r}{1+r}}.`$
Thus, the 2-cycle undergoes a period-doubling bifurcation when $`-1={\displaystyle \frac{1-9r^2}{1-r^2}}`$, which is exactly $`r=1/\sqrt{5}`$. We claim that the new attracting 4-cycle is reflection symmetric. In fact, the equation for a reflection symmetric 4-cycle of $`R_{r,\pi }`$ is
$$x+\pi =\iota (x)+\iota \stackrel{~}{R}_{r,\pi }(x)mod\pi .$$
Let $`f(x)=x-\iota (x)-\iota (x+\pi -2\iota (x))+\pi `$. Then we have to solve for $`f(x)=0mod\pi `$. Clearly, $`f(x)-\pi `$ is an odd function with $`f(-\pi )=0`$, $`f(0)=\pi `$, and $`f(\pi )=2\pi `$. If $`f`$ is decreasing at $`\pi `$, equivalently, it is so at $`-\pi `$, then $`f`$ must have a zero modulo $`2\pi `$ in neighborhoods of $`\pm \pi `$. It is easy to compute that
$`f^{\prime }(\pi )=f^{\prime }(-\pi )`$ $`=1-\iota ^{\prime }(\pi )-\iota ^{\prime }(0)\left[1-2\iota ^{\prime }(\pi )\right]`$
$`={\displaystyle \frac{1-5r^2}{1-r^2}}.`$
Thus, $`f^{\prime }(\pi )\le 0`$ if and only if $`r\ge 1/\sqrt{5}`$. This shows that for $`1/\sqrt{5}<r`$, $`R_{r,\pi }`$ has a symmetric 4-cycle. Moreover, it must be attracting when $`r<1/\sqrt{5}+\epsilon `$ by continuity.
We have shown in the previous section that an attracting symmetric 2-cycle exists for $`R_{r,0}`$ with $`1/2<r<1/\sqrt{2}`$. It is given by $`\stackrel{~}{R}_{r,0}(c_1)=-c_1\phantom{\rule{0.3em}{0ex}}mod\phantom{\rule{0.3em}{0ex}}2\pi .`$ One may further calculate according to the lemma in §2 to obtain that, at $`r=1/\sqrt{2}`$ and $`x=c_1`$,
$$\frac{\partial ^2R_{r,0}^2}{\partial r\partial x}=2\sqrt{2}-8\ne 0.$$
Then, by an argument making use of the Inverse Function Theorem, one concludes that $`R_{r,0}`$ has a 2-period preserving pitch fork bifurcation at $`r=1/\sqrt{2}`$. $`\mathrm{}`$
###### Proposition 3.2.
There is a pitch-fork bifurcation on $`R_{r,\pi }`$ where a symmetric orbit of period 4 breaks into two asymmetric orbits of period 4.
###### Sketch of proof.
The key is to consider the zero set of $`\stackrel{~}{R}_{r,\pi }^4(x)-x`$ in the $`(r,x)`$-plane. In the above, we have shown that for $`r`$ slightly larger than $`1/\sqrt{5}`$, this can be obtained from the solution set of a symmetric 4-cycle, $`\stackrel{~}{R}_{r,\pi }^2(x)=-x`$. We then solve for the specific values of $`(r_0,x_0)`$ such that $`\stackrel{~}{R}_{r_0,\pi }^2(x_0)=-x_0`$ and $`(\stackrel{~}{R}_{r_0,\pi }^2)^{\prime }(x_0)=1`$. Our calculation involves the expression of $`R_{r,\pi }`$ as a Blaschke product given in \[AL\],
$$R_{r,\pi }(z)=-\frac{z^2(1-rz)}{z-r}.$$
Then, a symmetric 4-cycle can be solved from $`z=e^{𝕚\varphi }`$ and
$$R_{r,\pi }^2(z)=z^4\frac{(1-rz)^2}{(z-r)^2}\frac{r^2z^3-rz^2-z+r}{rz^3-z^2-rz+r^2}=\frac{1}{z}.$$
After factoring out the obvious factors $`(z+1)(z-1)`$, and letting $`y=\mathrm{cos}\varphi `$, we have the polynomial equation
$$-1-4r^2+r^4+2r(1+7r^2)y+4r^2(2-3r^2)y^2-24r^3y^3+16r^4y^4=0.$$
For $`r>1/\sqrt{5}`$, all the four roots of this equation are real. Two of them are always $`>1`$, one lies within $`(-1,0)`$ and the other within $`(0,1)`$. After taking arccosines, the four solutions form a symmetric 4-cycle. Let $`x_0=x_0(r)`$ be a solution; then, solving for $`r`$ in $`(\stackrel{~}{R}_{r,\pi }^2)^{\prime }(x_0)=1`$, we obtain a numerical value of $`r_0`$ approximately equal to 0.57. We further verify that
$$\frac{\partial (\stackrel{~}{R}_{r,\pi }^4)^{\prime }}{\partial r}(x_0,r_0)\ne 0.$$
Many of the above calculations are lengthy and we indeed make use of computation software to help us. Finally, we apply the Inverse Function Theorem to conclude that there is a 4-period preserving pitch fork bifurcation. $`\mathrm{}`$
The local change of the graphs of $`R_{r,\mathrm{\Omega }}^4`$ is shown in the picture.
Graphs of $`R_{r,\mathrm{\Omega }}^4`$ (darker) and $`R_{r,\mathrm{\Omega }}^8`$ (lighter) before and after the pitch-fork bifurcation.
To end this article, we would like to present two pictures, asymptotic orbits for $`\mathrm{\Omega }=\pi /2,\pi /4`$ to illustrate the wide variation of the dynamics of this family with respect to $`\mathrm{\Omega }`$.
Asymptotic orbits of $`R_{r,\pi /2}`$ and $`R_{r,\pi /4}`$ (low resolution).
# Fourth Generation 𝑏' decays into b + Higgs
## 1 Sequential Quarks
The simplest realization of a fourth family is to add left-handed doublets and right-handed singlets (with a right-handed neutrino necessary to give the extra neutrino a large mass). The first calculations of $`b^{}b+H`$ were carried out by Hou and Stuart and by Eilam, Haeri and Soni . A much more detailed analysis, which was the first to directly compare the rate with that of $`b^{}b+Z`$, which made no assumptions about mixing angles and which discussed the anomalous thresholds that occur in the calculation, appeared in the subsequent work of Hou and Stuart.
First, consider the ratio of the neutral current decay $`b^{}b+Z`$ to the charged current decay $`b^{}c+W`$. The former decay depends on the mass of the $`t^{}`$ quark and $`|V_{tb^{}}|`$; the latter depends on $`|V_{cb^{}}|`$. For a $`t^{}`$ mass of $`250`$ GeV, the ratio is given by (see Ref for full expressions and a plot)
$$\frac{\mathrm{\Gamma }(b^{\prime }\to bZ)}{\mathrm{\Gamma }(b^{\prime }\to cW)}=0.005\frac{|V_{tb^{\prime }}|^2}{|V_{cb^{\prime }}|^2}$$
(1)
For different $`t^{\prime }`$ masses, the ratio varies roughly as $`(m_{t^{\prime }}^2-m_t^2)^2`$ (note the GIM cancellation when the masses are equal). We thus see how sensitive the ratio is to the mixing angles. If one were to choose $`|V_{tb^{\prime }}|/|V_{cb^{\prime }}|`$ to be the same as $`|V_{cb}|/|V_{ub}|=13\pm 3`$, then the above ratio is between $`0.5`$ and $`1.3`$. However, the large top quark mass might indicate a very large mixing angle between the third and fourth generations, leading to a much bigger ratio. Thus, the neutral current $`b^{\prime }\to b+Z`$ decay is certainly similar to, and could dominate, the charged-current decay.
In the ratio of $`b^{}b+H`$ to $`b^{}b+Z`$, the mixing angles cancel, so there is less arbitrariness in the result. The result, given by Hou and Stuart, is a function of $`M_H`$, $`m_t`$, $`m_t^{}`$ and $`m_b^{}`$. Hou and Stuart give plots of the partial widths as a function of $`m_b^{}`$, for four different values of $`M_H`$, three different values of $`m_t`$ and two different values of $`m_t^{}`$. Fortunately, one of the choices for $`m_t`$ was $`175`$ GeV (the others were $`75`$ and $`125`$ GeV), and the dependence on $`m_t^{}`$, while important for the individual rates, is very weak in the ratio. For $`m_H=100`$ GeV, the ratio of $`b^{}b+H`$ to $`b^{}b+Z`$ is approximately $`(1.0,1.4,1.7,2.0,2.5)`$ for $`m_b^{}=(150,175,200,225,250)`$, respectively. For $`m_H=150`$ GeV, phase space suppression sets in, and the ratio, for the same $`b^{}`$ masses, is $`(0,0.15,0.7,1.0,1.6)`$, respectively. One sees that the two rates are very similar. For a Higgs mass of $`100`$ GeV, and a sequential $`b^{}`$ quark, the assumption that the branching ratio for $`b^{}b+Z`$ is $`100\%`$ is not valid. On the other hand, for a Higgs mass of $`150`$ GeV or higher, it may be reasonable.
Could one improve upon Hou and Stuart’s calculation? We now know the top quark mass (and can distinguish between the Yukawa coupling $`\overline{MS}`$ mass and the pole mass), we know from precision electroweak data that the $`t^{}`$ mass cannot be much bigger than the $`b^{}`$ mass, we have a much better understanding of the production cross sections for heavy quarks, and b-tagging in hadron colliders is much better understood.
However, it would be premature to carry out this analysis. The reason is that a sequential fourth generation has virtually been ruled out by precision electroweak data. Erler and Langacker note that the $`S`$ parameter is in conflict with a degenerate fourth generation by over three standard deviations, or $`99.8\%`$. One can weaken this discrepancy slightly by making the fourth generation non-degenerate, but it appears very unlikely that a sequential fourth generation can be accommodated, if it is the only source of new physics. One way around this discrepancy is to assume that there is new physics which partially cancels the fourth generation contribution to the $`S`$ parameter (such as Majorana neutrinos, additional Higgs doublets, etc.). This certainly can be done, and thus searches for a sequential fourth generation should continue. However, this new physics will likely also contribute to $`b^{}b+H`$ and to $`b^{}b+Z`$. Thus, without some understanding of the new physics, carrying out a high precision improvement of the Hou-Stuart analysis is premature.
## 2 Non-chiral fermions
Of much greater theoretical interest than a sequential fourth generation is a non-chiral (isosinglet or isodoublet) fourth generation. These happen automatically in a wide variety of models, including $`E_6`$-unification models, gauge-mediated supersymmetric models, the aspon CP-violation model and so on. The motivations for these non-chiral generations are discussed in detail in Ref. . They only contribute to the $`S`$ parameter at higher order, and are thus completely in accord with precision electroweak studies. Due to the GIM violation, these models have tree-level $`b^{}bH`$ and $`b^{}bZ`$ vertices, and thus the charged-current decay of the $`b^{}`$ becomes less competitive. Without the $`b^{}b+H`$ decay, the assumption that the branching ratio of $`b^{}b+Z`$ is $`100\%`$ would be completely justified.
Let us first consider the case in which $`b^{}`$ is an isosinglet quark. The first discussion of $`b^{}b+H`$ was given in 1989 by del Aguila, Kane and Quiros (AKQ), who looked at the possibility of using this decay to detect a light Higgs (if the $`b^{}`$ mass were less than $`M_Z+m_b`$, it would be the primary decay mode). This work was followed up by a more extensive analysis by del Aguila, Ametller, Kane and Quiros (AAKQ). A much later analysis of the various phenomenological aspects of isosinglet quarks can be found in the work of Barger, Berger and Phillips.
Following AKQ, consider the case in which the $`b^{}`$ only mixes with the $`b`$. The Higgs doublet gives the usual mass term $`m_b\overline{b}_Lb_R+\mathrm{h}.\mathrm{c}.`$, as well as a term $`m^{}\overline{b}_Lb_R^{}+\mathrm{h}.\mathrm{c}.`$. In addition, there are gauge invariant mass terms $`M_b^{}\overline{b^{}}_Lb_R^{}+\mathrm{h}.\mathrm{c}.`$ and $`M^{}\overline{b^{}}_Lb_R+\mathrm{h}.\mathrm{c}.`$. The $`2\times 2`$ mass matrix can then be diagonalized. The resulting mixing then gives $`b^{}bZ`$ and $`b^{}bH`$ vertices, which are proportional to $`m^{}`$. Thus, one gets tree level interactions, suppressed only by a single Cabibbo-type angle ($`m^{}/M_b^{}`$). The angle cancels in the ratio, giving
$$\frac{\mathrm{\Gamma }(b^{\prime }\to b+H)}{\mathrm{\Gamma }(b^{\prime }\to b+Z)}=\frac{M_{b^{\prime }}^2}{M_{b^{\prime }}^2+2M_Z^2}\left(\frac{M_{b^{\prime }}^2-M_H^2}{M_{b^{\prime }}^2-M_Z^2}\right)^2$$
(2)
This ratio is unity in the limit of large $`M_b^{}`$, and is $`0.7`$ times the phase space factor for $`M_b^{}=200`$ GeV. There will also be a $`b^{}cW`$ vertex induced by mixing, but this will be doubly-Cabibbo suppressed, and thus should be negligible.
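For orientation, Eq. (2) is trivial to evaluate numerically; the snippet below is merely such an evaluation (the kinematic cutoff for a Higgs heavier than the $`b^{\prime }`$ is our own guard, and the numerical $`M_Z`$ is the standard value).

```python
def width_ratio(m_bprime, m_higgs, m_z=91.19):
    """Gamma(b' -> b+H) / Gamma(b' -> b+Z) from Eq. (2); masses in GeV."""
    if m_bprime <= m_higgs:            # b + H channel kinematically closed
        return 0.0
    phase = m_bprime**2 / (m_bprime**2 + 2.0 * m_z**2)
    return phase * ((m_bprime**2 - m_higgs**2) / (m_bprime**2 - m_z**2)) ** 2

# width_ratio(200.0, 100.0) ~ 0.63; the ratio tends to 1 as m_bprime grows
```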
One thus sees that, once again, the $`b^{\prime }\to b+H`$ decay is comparable to the $`b^{\prime }\to b+Z`$ decay, assuming the Higgs mass is not close to (or greater than) the $`b^{\prime }`$ mass. Again, the charged current $`b^{\prime }\to c+W`$ decay is expected to be much smaller (and, as shown in Ref. , the $`b^{\prime }\to t+W^{*}`$ decay will be negligible for all $`b^{\prime }`$ masses below $`300`$ GeV).
Although the isosinglet case is theoretically preferred (since isosinglet quarks automatically appear in all $`E_6`$ unified models, as well as all models with a $`5+\overline{5}`$ of $`SU(5)`$), one can ask what happens if the fourth generation quarks form an isodoublet. The ratio of $`b^{}b+H`$ to $`b^{}b+Z`$ is the same as in the isosinglet case. However, there is one important difference. Although the ratio is the same, the individual rates are much smaller. This is because the GIM mismatch in the isodoublet case occurs in the right-handed sector, and there is a helicity suppression which suppresses the vertex by an additional factor of $`m_b/M_b^{}`$. This means that the charged current decays become much more competitive. It is shown in Ref. that the three-body $`b^{}t+W^{}`$ decay becomes competitive with the $`b^{}b+Z`$ decay for $`b^{}`$ masses of $`200`$ GeV, and greatly exceeds it for masses above $`220`$ GeV. For lighter masses, the $`b^{}c+W`$ decay will still be important, and may dominate depending on the value of the $`V_{cb^{}}/V_{tb^{}}`$ ratio (as in the sequential fermion case).
## 3 Two-Higgs models
In the standard two-Higgs doublet models, the so-called Model I or Model II, the Yukawa couplings to a Higgs are multiplied by a factor of $`\frac{\mathrm{cos}\alpha }{\mathrm{sin}\beta }`$, where $`\alpha `$ is a Higgs mixing angle and $`\beta `$ is a ratio of vacuum expectation values (depending on the specific model and the specific fermion charge, $`v_2/v_1`$ will either be $`\mathrm{tan}\beta `$ or $`\mathrm{cot}\beta `$). In most models, Higgs mixing is fairly small, so $`\mathrm{cos}\alpha `$ is near unity. In all of the above cases, this factor will change the ratio of $`b^{}b+H`$ to $`b^{}b+Z`$ by a factor which is of order one. (It can’t enhance the Higgs decay mode too much in the sequential case, since too large an enhancement will make the $`b^{}`$ or $`t^{}`$ Yukawa coupling non-perturbative.)
A bigger effect might be expected in Model III. In this model, unlike Models I and II, no discrete symmetry is imposed in order to suppress tree level flavor-changing neutral currents (FCNC) and thus FCNC arise, even in the sequential case. The observed lack of large FCNC in processes involving first-generation quarks is explained by noting that many models will have a FCNC coupling given by the geometric mean of the Yukawa couplings of the two quarks. In that case, the tree-level $`b^{}bH`$ coupling (neglecting Higgs mixing) is given by $`g\sqrt{m_bm_b^{}}/\sqrt{2}M_W`$.
How does this coupling affect the results? In the isosinglet case, the Model III coupling is of the same order of magnitude as the expected coupling induced by the GIM violation, and thus none of our arguments change. However, in the sequential fermion case, the Model III coupling is much larger than the one-loop induced $`b^{}bZ`$ coupling. Also, in the isodoublet case, the Model III coupling is much larger than the GIM-violation induced $`b^{}bZ`$ coupling. Thus, since the Higgs coupling is so much larger in these two models, $`b^{}b+H`$ will dominate all $`b^{}`$ decays. We conclude that in Model III, with either a sequential or isodoublet $`b^{}`$, the $`b^{}`$ decay is dominated by $`b^{}b+H`$, and thus the CDF and D0 bounds are completely inapplicable.
## 4 Conclusions
Previous searches for a fourth generation quark assume a $`100\%`$ branching ratio into $`b+Z`$. The other neutral current decay, $`b^{\prime }\to b+H`$, has been examined, in the sequential case, the isosinglet case, the isodoublet case and a two-Higgs model with tree-level FCNC. In all of these cases, the rate for $`b^{\prime }\to b+H`$ is comparable to, or greater than, that for $`b^{\prime }\to b+Z`$ if the Higgs is kinematically accessible.
Currently, the CDF collaboration is preparing an analysis which will give the bounds as a function of the branching ratio to $`b+Z`$. This analysis is conservative in that it assumes that it is insensitive to other decay channels than $`b^{}b+Z`$. However, suppose that one $`b^{}`$ decays to $`b+Z`$ and the other to $`b+H`$. At least one $`b+Z`$ decay is needed to trigger the event, and the three $`b`$ final state of the other $`b^{}`$ could then be detected. The $`b`$-tag efficiency in these events is expected to be considerably higher than in $`bbZZ`$ events because of the 4-$`b`$ jets final state in which at least two $`b`$-jets have high-$`p_T`$, independently of the $`b^{}`$ and Higgs masses. . This leads to the potential exciting result that the experiment could discover both a fourth-generation quark and a Higgs boson!
The only discouraging model is Model III, in the sequential or isodoublet cases. Pair-production of $`b^{\prime }`$’s would lead to a 6-$`b`$ final state, in which every $`b`$ comes from a 2-body decay (except in the narrow region of parameter space where $`H\to W^+W^{-}`$ can occur). This would lead to quite dramatic signatures, but without a lepton trigger, finding such a signature would be very difficult.
I thank the CERN Theory Group for its hospitality while this work was written, and Yao Yuan for her assistance. I am also grateful to João Guimarães da Costa for informing me of the continuing interest of CDF in conducting $`b^{}`$ searches, and for many useful discussions. This work was supported by the National Science Foundation.
# Towards Understanding Jovian Planet Migration
## 1 Initial conditions
The PPM code and initial conditions are very similar to those presented in Nelson *et al.* 1998. We begin with a one $`M_{}`$ protostar fixed to the origin of our coordinate system. We assume that a disk of mass $`M_D=0.05M_{}`$ is contained between the inner and outer grid boundaries at 0.5 AU and 20 AU and that the disk is self gravitating. A second point mass (the ‘planet’) is set in a circular orbit at a radius 5.2 AU away from the protostar and is free to migrate through the disk in response to gravitational forces. No other forces act on the planet and it does not accrete mass from the disk. In different simulations, we investigate migration rates of different planet masses.
The disk mass is distributed on a 128$`\times `$224 cylindrical ($`r,\varphi `$) grid with a surface density given by a power law, $`\mathrm{\Sigma }\left(r\right)=\mathrm{\Sigma }_1\left(1AU/r\right)^p`$, where $`\mathrm{\Sigma }_1`$ is determined from the assumed disk mass and $`p=3/2`$. We assume an initial temperature profile with a similar power law, $`T\left(r\right)=T_1\left(1AU/r\right)^q`$, where the temperature at 1 AU is $`T_1=250`$ K and $`q=1/2`$. These initial conditions produce a radial profile for which the minimum Toomre $`Q`$ (of $`5`$) is found near the outer disk edge. The profile exhibits a steep increase in the inner regions due to the increased effects of pressure on the orbital characteristics there. A single component isothermal gas equation of state is used to derive pressure at each point in the disk.
Velocities are determined assuming initial rotational equilibrium. Radial velocities throughout are assumed equal to zero, while angular velocities are determined by balancing the gravitational, pressure and centrifugal forces in the disk.
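A minimal one-dimensional sketch of these initial profiles is given below. It is not the PPM setup itself: cgs units, a uniform radial grid, and a mean molecular weight of 2.3 are our own assumptions, and only the radial structure (not the full 128$`\times `$224 grid) is constructed.

```python
import numpy as np

G, M_SUN, AU, K_B, M_H = 6.674e-8, 1.989e33, 1.496e13, 1.381e-16, 1.673e-24  # cgs

def disk_initial_conditions(n_r=128, r_in=0.5, r_out=20.0, m_disk=0.05,
                            t1=250.0, p=1.5, q=0.5, mu=2.3):
    """Sigma(r), T(r) and the rotational-equilibrium v_phi(r) for the initial disk.

    Sigma(r) = Sigma_1 (1 AU / r)^p with Sigma_1 fixed by the disk mass,
    T(r) = T_1 (1 AU / r)^q, and v_phi from the radial force balance
    v_phi^2 / r = G M_* / r^2 + (1/Sigma) d(Sigma c_s^2)/dr.
    """
    r = np.linspace(r_in, r_out, n_r) * AU
    sigma_shape = (AU / r) ** p
    m_shape = np.trapz(2.0 * np.pi * r * sigma_shape, r)   # unnormalised disk mass
    sigma = sigma_shape * m_disk * M_SUN / m_shape
    temp = t1 * (AU / r) ** q
    cs2 = K_B * temp / (mu * M_H)                           # isothermal sound speed squared
    dPdr = np.gradient(sigma * cs2, r)                      # vertically integrated pressure gradient
    vphi2 = G * M_SUN / r + (r / sigma) * dPdr
    return r, sigma, temp, np.sqrt(np.maximum(vphi2, 0.0))
```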
## 2 Migration rates with varying planet mass
Using the initial conditions outlined above we have completed a series of simulations, varying the planet mass assumed for each simulation. We show the effect of a 1$`M_J`$ mass planet on the disk in figure 1. Within a few hundred years, the planet raises very large amplitude spiral structures which lead (trail) the planet radially inside (outside) its orbit radius and cause a gap to form around the planet. These structures are similar to those in previous work (Bryden *et al.* 1999, Kley 1999) where the planet’s trajectory was fixed to a single orbit radius.
The orbital trajectory of the planet (figure 2) is strongly affected by the gravitational torques from the spiral structures. After a 50–100 yr period in which the planet first builds spiral structures, the migration rate is constant for the next 500 yr, then drops to zero, and the planet reaches a ‘final’ orbit radius of about 3.5 AU after 800 yr. The migration rates fitted for simulations with different disk masses are shown in figure 3, and are valid for the period during which the migration rate is constant (see figure 2), i.e. the period for which the migration can be considered ‘Type I’. The fitted migration rates are very rapid: about 1 AU per thousand years, so that migration onto the stellar surface would occur well before the end of the disk lifetime of 10<sup>6</sup> yr. Still higher mass planets move so quickly through the disk that they ‘outrun’ their own gap formation efforts and fall into the star. With more realistic initial conditions (a 2$`M_J`$ planet should already have a gap), we expect this phenomenon to go away.
Over the course of the first several hundred years of evolution, planets with mass less than 2$`M_J`$ hollow out a deep gap in the disk, which extends all the way around the star. In figure 4 we show the azimuth averaged surface density structure at several points during the evolution of the system shown above. The gap forms quickly after the beginning of the simulation and within 500 years has hollowed out a region about 3 AU wide. The surface density near the planet (200–300 gm/cm<sup>2</sup>) is a factor ten below the initial profile and less (100 gm/cm<sup>2</sup>) at its deepest, just outside the planet’s orbit. By the end of the simulation, the gap has deepened to a factor of 100 less than its unperturbed profile and continued to get deeper even at the end of our simulations. It does not substantially increase its width after initial formation however.
The existence of a gap eventually causes the migration to slow and, if the gap becomes deep and wide enough, to decrease to a rate set by the viscosity of the disk (‘Type II’ migration) rather than by dynamical processes like gravitational torques (‘Type I’ migration). From figure 4 we can determine the approximate disk conditions which define the transition between Type I and Type II migration. From figure 2 we see that the migration rate decreases starting after about 500–600 yr of evolution. Comparison with figure 4 shows that the onset of the transition occurs when the gap is 3 AU wide and has surface density $`\mathrm{\Sigma }_{gap}\approx 200`$–300 gm/cm<sup>2</sup>. The transition is concluded and further rapid orbital decay is suppressed by the time the system has evolved for 700 yr, when the surface density is $`\mathrm{\Sigma }_{gap}\approx 100`$ gm/cm<sup>2</sup>.
The values of surface density and gap width required for the onset and completion of the transition to Type II migration are typical of each of the simulations we have performed, without regard to planet mass. Simulations with 0.3–0.5$`M_J`$ planets are able to enter the transition to Type II migration, but over the duration of our simulations ($`1800`$ yr) do not complete the transition. We continue to evolve these simulations further in time to determine their ultimate fate.
## 3 The relative importance of the disk close to the planet
Linear theory (e.g. Takeuchi *et al.* 1996) predicts that the most important Fourier components of the spiral patterns raised in the disk will be those with azimuthal wavenumber m=20–40, corresponding to an azimuthal wavelength near the planet of about 1 AU. The Lindblad resonances of these patterns will lie radially inward and outward from the planet at $`R_{LR}=a\left(1\pm 1/m\right)^{2/3}`$, roughly 0.3 AU from its orbit, where $`a`$ is the semi-major axis of the planet. Another relevant parameter is the Hill radius, $`R_H=a\left(M_J/3M_{}\right)^{1/3}\approx 0.3`$ AU, defining the sphere of influence of the planet. Further, the $`z`$ structure of the disk becomes important on the same spatial scales, because in 2D the gravity will be effectively ‘amplified’ by the assumption that all the disk matter lies in the $`z=0`$ plane rather than some of it at high altitudes more distant from the planet. Unfortunately, each of these values is similar to the grid resolution size scale of computationally affordable simulations.
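The competition between these scales can be made concrete with a few numbers. The sketch below is purely illustrative: it evaluates the Hill radius, the local disk scale height $`c_s/\mathrm{\Omega }`$ (with an assumed mean molecular weight of 2.3, which is not quoted in the text), and the radial width of one grid zone, all in AU.

```python
import numpy as np

AU, K_B, M_H, G, M_SUN = 1.496e13, 1.381e-16, 1.673e-24, 6.674e-8, 1.989e33  # cgs

def length_scales(a=5.2, m_planet_mj=1.0, t1=250.0, q=0.5, mu=2.3):
    """Length scales (AU) that must be resolved near the planet."""
    r_hill = a * (m_planet_mj * 9.54e-4 / 3.0) ** (1.0 / 3.0)   # M_J / M_sun ~ 9.54e-4
    cs = np.sqrt(K_B * t1 * a ** (-q) / (mu * M_H))             # sound speed at r = a
    omega = np.sqrt(G * M_SUN / (a * AU) ** 3)                  # Keplerian frequency
    h = cs / omega / AU                                         # disk scale height
    cell = (20.0 - 0.5) / 128.0                                 # radial width of one grid zone
    return {"hill_radius": r_hill, "scale_height": h, "grid_cell": cell}

# length_scales() -> roughly 0.36, 0.25 and 0.15 AU respectively
```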
In order to characterize both physical and numerical effects we have performed a series of simulations in which we vary the effective gravitational softening length of the planet, effectively ‘turning on’ or off the effect of matter very close to the planet. The migration rates obtained from such a series of simulations are shown in figure 5. We again assume the initial conditions as above, with a planet of mass 0.3$`M_J`$ .
The migration rate increases by a factor of about five as the softening decreases from 0.5 AU to 0.1 AU, clearly showing the importance of the distribution of disk matter close to the planet. The largest increases occur as $`ϵ`$ decreases to the size of the Hill radius or smaller. Below $`ϵ=0.08`$ AU, or half the size of one grid cell, the migration rate is numerically unstable and slight changes in the softening change the migration rate from zero to 1 AU/100 yr (below the bottom of the plot). The dependence of the migration on the disk matter very near the planet is in qualitative agreement with the conclusion of Ward (1997), who showed that the most important contribution to the migration will come from the disk matter within about one disk scale height (in our case $`0.3`$ AU near the planet) of the planet’s radial position.
## 4 Summary
In the course of this study we have
$``$ Shown that planets evolved in a 2D disk without a gap migrate inward through a substantial fraction of their initial semi-major axis radius on timescales $`1000`$ yr.
$``$ Shown that planets with masses higher than $`0.5`$$`M_J`$ can open a gap sufficiently wide and deep to drastically slow their migration through the disk (i.e. transition to Type II migration). Conditions for beginning the transition are that the surface density of the disk near the planet be about 200–300 gm/cm<sup>2</sup> and that the gap be $`3`$ AU wide. Conditions for completing the transition to Type II migration are surface densities near the planet of $`100`$ gm/cm<sup>2</sup>.
$``$ Demonstrated the critical importance of very high spatial resolution of the disk near the planet, which is required for correct evolution of the planet’s migration.
## References
Bryden, G., Chen, X., Lin, D. N. C., Nelson, R. P., Papaloizou, J. C. B. 1999, ApJ, 514, 244
Kley, W., 1999, MNRAS, 303, 696
Nelson, A. F., Benz, W., Adams, F. C., Arnett. W. D., 1999, ApJ, 502, 342
Takeuchi, T., Miyama, S. M., Lin, D. N. C., 1996, ApJ, 460, 832
Ward, W., 1997, Icarus, 126, 261
# Essential meridional surfaces for tunnel number one knots
## 1. Introduction
In this paper we consider essential surfaces, closed or meridional, properly embedded in the exteriors of tunnel number one knots. The exterior of a knot $`k`$ is denoted by $`E(k)=S^3-\mathrm{int}N(k)`$. Recall that a knot $`k`$ in $`S^3`$ has tunnel number one if there exists an arc $`\tau `$ embedded in $`S^3`$ with $`k\cap \tau =\partial \tau `$, such that $`S^3-\mathrm{int}N(k\cup \tau )`$ is a genus 2 handlebody. Such an arc is called an unknotting tunnel for $`k`$. Equivalently, a knot $`k`$ has tunnel number one if there is an arc $`\tau `$ properly embedded in $`E(k)`$, such that $`E(k)-\mathrm{int}N(\tau )`$ is a genus 2 handlebody; in general, the unknotting tunnels we consider are of this type. Sometimes it is convenient to express a tunnel $`\tau ^{\prime }`$ for a knot $`k`$ as $`\tau ^{\prime }=\tau _1\cup \tau _2`$, where $`\tau _1`$ is a simple closed curve and $`\tau _2`$ is an arc connecting $`\tau _1`$ and $`N(k)`$; by sliding the tunnel we can pass from one expression to the other.
A surface $`S`$ properly embedded in a 3-manifold $`M`$ is essential if it is incompressible, $`\partial `$-incompressible, and non-boundary parallel. A surface properly embedded in the exterior of a knot $`k`$ is meridional if each component of $`\partial S`$ is a meridian of $`k`$. Let $`M`$ be a compact 3-manifold, and let $`S`$ be a surface in $`M`$, either properly embedded or contained in $`\partial M`$. Let $`k`$ be a knot in the interior of $`M`$, intersecting $`S`$ transversely. Let $`\widehat{S}=S-\mathrm{int}N(k)`$. The surface $`\widehat{S}`$ is properly embedded in $`M-\mathrm{int}N(k)`$, and its boundary on $`\partial N(k)`$, if any, consists of meridians of $`k`$. We say that $`\widehat{S}`$ is meridionally compressible in $`(M,k)`$, if there is an embedded disk $`D`$ in $`M`$, intersecting $`k`$ at most once, with $`\widehat{S}\cap D=\partial D`$, so that $`\partial D`$ is a nontrivial curve on $`\widehat{S}`$, and is not parallel to a component of $`\partial \widehat{S}`$ lying on $`\partial N(k)`$. Otherwise $`\widehat{S}`$ is called meridionally incompressible. In particular if $`\widehat{S}`$ is meridionally incompressible in $`(M,k)`$, then it is incompressible in $`M-k`$.
Some results are available on incompressible surfaces in tunnel number one knot exteriors. Regarding meridional surfaces, it is shown in \[GR\] that the exterior of a tunnel number one knot does not contain any essential meridional planar surface. Another proof of this fact is given in \[M\]. This says that any tunnel number one knot is indecomposable with respect to tangle sum. Considering closed surfaces, it is shown in \[MS\] that there are tunnel number one knots whose complements contain an essential torus, and such knots are classified. In \[E2\] it is proved that for each $`g2`$, there exist infinitely many tunnel number one knots whose complements contain a closed incompressible surface of genus $`g`$; such surfaces are also meridionally incompressible.
In this paper we prove the following,
###### Theorem 3.2
For each pair of integers $`g\ge 1`$ and $`n\ge 1`$, there are tunnel number one knots $`K`$, such that there is an essential meridional surface $`S`$ in the exterior of $`K`$, of genus $`g`$, and with $`2n`$ boundary components. Furthermore, $`S`$ is meridionally incompressible.
This gives a positive answer to question 1.8 in \[GR\]. It follows from \[CGLS\] that any of the knots of Theorem 3.2 also contains a closed essential surface of genus $`2`$. That surface is obtained by somehow tubing the meridional surface. However such a surface will be meridionally compressible.
Combining the construction of \[E2,§6\] with that of Theorem 3.2, we get the following,
###### Theorem 3.3
For each positive integer $`n`$, there are tunnel number one knots $`K`$, such that in the exterior of $`K`$ there are $`n`$ disjoint, non-parallel, closed incompressible surfaces, each of genus $`n`$.
It follows from the construction that one of the surfaces, say $`S_1`$, is meridionally compressible while the others are meridionally incompressible. It follows also that the surface $`S_1`$ is the closest to $`E(K)`$, that is, $`S_1`$ and $`E(K)`$ bound a submanifold $`M`$ which does not contain any of the other surfaces. It follows from \[CGLS,2.4.3\] that $`S_1`$ remains incompressible after performing any non-integral Dehn surgery on $`K`$, and then so does any of the other surfaces. This fact, Theorem 3.3 and the observation that the exterior of a tunnel number one knot is a compact 3-manifold with Heegaard genus 2, imply the following.
###### Corollary
For each positive integer $`n`$, there are closed, irreducible $`3`$-manifolds $`M`$, with Heegaard genus $`2`$, such that in $`M`$ there are $`n`$ disjoint, non-parallel, closed incompressible surfaces, each of genus $`n`$.
This corollary improves one of the results of \[Q\], where it is shown that for each $`n`$, there exist closed irreducible 3-manifolds with Heegaard genus 2 which contain an incompressible surface of genus $`n`$.
In Theorem 3.3 the genus of the surfaces grows as much as the number of surfaces. This fact is essential, i.e., it is not just a consequence of the construction method. It follows from the main Theorem of the recent paper \[ES\], that it is impossible for an irreducible 3-manifold with Heegaard genus $`g`$, with or without boundary, to contain an arbitrarily large number of disjoint and closed incompressible surfaces of bounded genus.
The idea of the proof of Theorem 3.2 is the following: Start with a tunnel number one knot $`k`$, and unknotting tunnel $`\tau `$, and a closed incompressible surface in the complement of $`k`$ which intersects $`\tau `$ in two points. We know by \[MS\] and \[E2\] that such knots do exist. Now take an iterate of $`k`$ and $`\tau `$, i.e., a knot $`k^{}`$ formed by the union of two arcs $`k^{}=k_1k_2`$, where $`k_1=\tau `$ and $`k_2`$ is an arc lying on $`N(k)`$. Thus $`k^{}`$ intersects $`S`$ in two points. It follows that $`k^{}`$ is a tunnel number one knot (see Lemma 3.1); an unknotting tunnel $`\tau ^{}`$ for $`k^{}`$ is formed by the union of $`k`$ and an arc joining $`k`$ to a point in $`k_1k_2`$. Slide $`\tau ^{}`$ so that it becomes an arc with endpoints on $`k^{}`$, also denoted by $`\tau ^{}`$. Now take an iterate of $`k^{}`$ and $`\tau ^{}`$; this is a knot $`k^{}`$ with tunnel number one which intersects $`S`$ in as many points as desired. If $`k^{}`$ and $`k^{}`$ satisfy certain conditions (Theorem 2.1), the surfaces $`S_1=SintN(k^{})`$ and $`S_2=SintN(k^{})`$ are essential meridional surfaces in the exterior of $`k^{}`$ and $`k^{}`$, respectively.
Throughout, 3-manifolds and surfaces are assumed to be compact, connected and orientable. If $`X`$ is contained in a 3-manifold $`M`$, then $`N(X)`$ denotes a regular neighborhood of $`X`$ in $`M`$; if $`X`$ is contained in a surface $`S`$, then $`\eta (X)`$ denotes a regular neighborhood of $`X`$ in $`S`$. $`\mathrm{\Delta }(\alpha ,\beta )`$ denotes the minimal intersection number of two essential simple closed curves on a torus $`T`$.
I am grateful to E. Sedgwick for suggesting that I prove Theorem 3.3. I am also grateful to the referee, whose many suggestions greatly improved the exposition of the paper.
## 2. Construction of essential meridional surfaces
Let $`k`$ be a knot in $`S^3`$, and let $`\tau ^{}=\tau _1\tau _2`$ be an unknotting tunnel for $`k`$, where $`\tau _1`$ is a simple closed curve, and $`\tau _2`$ is an arc with endpoints in $`N(k)`$ and $`\tau _1`$. Let $`S`$ be a closed surface of genus $`g`$ contained in the exterior of $`k`$; then $`S`$ divides $`S^3`$ into two parts, denoted by $`M_1`$ and $`M_2`$, where, say, $`k`$ lies in $`M_2`$. We say that $`S`$ is special with respect to $`k`$ and $`\tau ^{}`$ if it satisfies:
This definition is a variation of the one given in \[E2,§6\].
Note that by \[MS\], \[E1\], there exist knots with these properties when $`g=1`$; when $`g2`$, the existence of knots like these follows from \[E2,6.1\]. Note that $`M_2N(\tau _2)`$ is a cylinder $`RD^2\times I`$, so that $`RS`$ is a disk $`D_1D^2\times \{1\}`$, and $`RN(k)`$ is a disk $`D_0D^2\times \{0\}`$. Slide $`\tau _1`$ over $`\tau _2`$, to get an arc $`\tau `$ with both endpoints on $`D_0N(k)`$, so that $`\tau M_2`$ consists of two straight arcs contained in $`R`$, i.e., arcs which intersect each disk $`D^2\times \{x\}`$ transversely in one point. The surface $`S`$ and the arc $`\tau `$ then intersect in two points. The arc $`\tau `$ has a neighborhood $`N(\tau )D^2\times I`$, so that $`N(\tau )M_2R`$.
Let $`P`$ be a solid torus, $`D_0`$ a disk contained in $`P`$, and $`\rho =\{\rho _1,\mathrm{},\rho _n\}`$, a collection of arcs properly embedded in $`P`$, so that its endpoints lie in $`D_0`$. We say that this forms a toroidal tangle with respect to $`D_0`$, and denote it by $`(P,D_0,\rho )`$.
Recall that the wrapping number of a knot in a solid torus is defined as the minimal number of times that the knot intersects any meridional disk of such solid torus. We define the wrapping number of an arc $`\rho _i`$ in $`P`$ as the wrapping number of the knot obtained by joining the endpoints of $`\rho _i`$ with an arc in $`D_0`$, and then pushing it into the interior of $`P`$. This is well defined.
The tangle $`(P,D_0,\rho )`$ is good if:
If the tangle $`(P,D_0,\rho )`$ is good, then $`D_0\rho `$ is incompressible in $`P\rho `$, i.e., there is no disk $`D`$ properly embedded in $`P`$, disjoint from $`\rho `$, with $`DD_0`$, and such that $`D`$ is essential in $`D_0\rho `$.
Let $`A`$ be an annulus in $`P`$, essential in $`P`$, so that $`D_0A`$. The tangle $`(P,D_0,\rho )`$ is good with respect to $`A`$ if:
Let $`\widehat{k}`$ be a knot contained in the interior of $`N(k)N(\tau )`$. We say that $`\widehat{k}`$ is specially knotted if:
As $`N(\tau )M_2R`$, it follows that $`\widehat{k}R`$ consists of $`2n`$ straight arcs. So $`\widehat{k}`$ intersects $`S`$ in $`2n`$ points. Let $`\widehat{S}=SE(\widehat{k})`$. This is a surface properly embedded in $`E(\widehat{k})`$, whose boundary consists of $`2n`$ meridians of the knot $`\widehat{k}`$.
###### Theorem 2.1
Let $`k`$ be a knot, $`\tau ^{}=\tau _1\cup \tau _2`$ an unknotting tunnel for $`k`$, and $`S`$ a surface which is special with respect to $`k`$ and $`\tau ^{}`$. Let $`\widehat{k}\subset N(k)\cup N(\tau )`$ be a knot which is specially knotted. Then the surface $`\widehat{S}=S\cap E(\widehat{k})`$ is an essential meridional surface in the exterior of $`\widehat{k}`$. Furthermore, if the surface $`S`$ is meridionally incompressible in $`(S^3,k)`$, then $`\widehat{S}`$ is meridionally incompressible in $`(S^3,\widehat{k})`$. If $`S`$ is meridionally compressible, but the wrapping number of some arc $`\rho _i`$ in $`N(k)`$ is $`\geq 3`$, where $`\rho =N(k)\cap \widehat{k}`$, then $`\widehat{S}`$ is meridionally incompressible.
###### Demonstration Proof
To prove that the surface $`\widehat{S}`$ is essential in the exterior of $`\widehat{k}`$, it suffices to show that it is incompressible, because any two-sided, connected, incompressible surface in an irreducible 3-manifold with incompressible torus boundary must be $``$-incompressible, unless it is a boundary-parallel annulus, which is not the case here.
Let $`S^{}=(SRint(D_1))`$, so $`S^{}`$ is isotopic to $`S`$, and let $`\stackrel{~}{S}=S^{}E(\widehat{k})`$; then $`\stackrel{~}{S}`$ is a surface isotopic to $`\widehat{S}`$. Denote by $`M_1^{}`$ and $`M_2^{}`$ the complementary regions of $`\stackrel{~}{S}`$ in $`E(\widehat{k})`$, where $`N(k)E(\widehat{k})`$ lies in $`M_2^{}`$. Let $`T=N(k)int(D_0)`$. This is a once punctured torus, which is properly embedded in $`M_2^{}`$, i.e., $`\stackrel{~}{S}T=T=D_0`$.
Let $`D`$ be a compression disk for $`\stackrel{~}{S}`$. Suppose first that it lies in $`M_1^{}`$. As $`S^{}`$ is essential in $`E(k)`$, it follows that $`D`$ is a trivial curve on $`S^{}`$ which bounds a disk $`D^{}S^{}`$, and $`DD^{}`$ bounds a 3-ball $`B`$. As $`D`$ is supposed to be essential in $`\stackrel{~}{S}`$, one arc $`\alpha `$ of $`\widehat{k}`$ contained in $`M_1^{}`$ must in fact be contained in the 3-ball $`B`$. We may assume that $`\alpha =\tau `$. Note that $`D`$ must be isotopic in $`S^{}\tau `$ to $`D_0`$. Then the tunnel $`\tau `$ is contained in a 3-ball, which implies that $`k`$ is the trivial knot. This is a contradiction.
Suppose then that $`D`$ lies in $`M_2^{}`$. Consider the intersection between $`T`$ and $`D`$. If they do not intersect, then there are two cases: (1) $`D`$ is contained in $`N(k)`$. In this case $`D`$ must lie on $`D_0`$, which implies that $`D`$ is trivial on $`\stackrel{~}{S}`$, or that $`D_0\rho `$ is compressible in $`N(k)\rho `$, which contradicts the hypothesis. (2) $`D`$ is disjoint from $`N(k)`$. One possibility is that $`D`$ is isotopic to $`D_0`$, but in this case the tunnel $`\tau `$ is, as above, contained in a 3-ball which is impossible. Otherwise, by isotoping $`D`$ we may assume that $`D`$ is contained in $`S`$, and then $`D`$ is also a compression disk for $`S`$ disjoint from $`N(k)`$, which contradicts the hypothesis that $`S`$ is incompressible in $`E(k)`$.
Assume then that $`D`$ and $`T`$ have nonempty intersection. This intersection consists of a finite number of arcs and simple closed curves. Assume also that $`D`$ has been chosen, among all compression disks, to have a minimal number of intersections with $`T`$. This implies that any curve or arc of intersection is essential in $`T`$, for if one curve (arc) is trivial, then doing surgery on $`D`$ with the disk bounded by an innermost curve (outermost arc) we get a disk with fewer intersections with $`T`$.
Let $`\sigma `$ be a simple closed curve of intersection which is innermost in $`D`$, so it bounds a disk $`D^{}`$ whose interior is disjoint from $`T`$. If $`D^{}`$ lies in $`N(k)`$, then $`\sigma `$ is either a meridian of $`T`$, or it is parallel to $`T`$, but in both cases it follows that $`D_0\rho `$ is compressible in $`N(k)\rho `$. If the interior of $`D^{}`$ is disjoint from $`N(k)`$, then as $`k`$ is a nontrivial knot, $`\sigma `$ must be trivial on $`T`$, which contradicts the choice of $`D`$.
Assume then that the intersections between $`D`$ and $`T`$ consists only of arcs. Let $`\sigma `$ be an outermost arc in $`D`$ which bounds a disk $`E`$. Suppose first that $`EN(k)`$. Then $`E=\sigma \delta `$, where $`\delta D_0`$. It follows that $`E`$ is nontrivial on $`N(k)`$, i.e., it is a meridian of $`N(k)`$, and then each of the $`\rho _i`$ has wrapping number $`1`$ in $`N(k)`$, which contradicts the hypothesis. So $`E`$ cannot be contained in $`N(k)`$. Again let $`E=\sigma \delta `$, where $`\sigma `$ is contained in $`T`$ and $`\delta `$ in $`\stackrel{~}{S}`$. As $`\sigma `$ is nontrivial in $`T`$ then $`\delta `$ is also nontrivial in $`\stackrel{~}{S}D_0`$. By isotoping $`D`$ we can ensure that $`\delta R`$ consists of two arcs; let $`E^{}R`$ be a disk containing these arcs in its boundary. Now $`EE^{}`$ is an annulus with one boundary component on $`S`$, and the other on $`N(k)`$. Here we apply \[CGLS,2.4.3\], where $`M`$, $`S`$, $`T`$, $`r_0`$ of that theorem correspond in our notation to $`M_2intN(k)`$, $`S`$, $`N(k)`$, and the component of $`(EE^{})`$ lying on $`N(k)`$, which we denote also by $`r_0`$. Clearly $`S`$ compresses after performing meridional surgery on $`N(k)`$. Then part (b) of \[CGLS,2.4.3\] implies that $`\mathrm{\Delta }(\mu ,r_0)1`$, where $`\mu `$ is a meridian of $`N(k)`$. So either $`\mu =r_0`$, or $`r_0`$ goes around $`N(k)`$ once longitudinally. The first possibility implies that $`S`$ is meridionally compressible, and the second one implies that $`k`$ is parallel to a curve lying on $`S`$. So we are done, unless one of these cases happens. Note that each of these possibilities excludes the other, for if $`k`$ is parallel to a curve on $`S`$, and $`S`$ is meridionally compressible, then either $`S`$ is compressible or $`S`$ is isotopic to $`N(k)`$.
Suppose first that $`S`$ is meridionally compressible. Let $`\sigma `$ be an outermost arc in $`D`$, which bounds a disk $`E`$, so that $`E=\sigma \delta `$, where $`\sigma `$ is contained in $`T`$ and $`\delta `$ in $`\stackrel{~}{S}`$. As above, there is a disk $`E^{}R`$, such that $`EE^{}`$ is an annulus with one boundary component on $`S`$, and the other is a meridian of $`N(k)`$. Consider all the outermost arcs on $`D`$; by the argument given above we can assume that any one of them determines a curve on $`T`$ parallel to $`\sigma `$. Let $`F`$ be a region on $`D`$ adjacent to one of the outermost arcs, so that all of its intersections with $`T`$, except at most one, are outermost arcs. To find such an $`F`$, take the collection of arcs in $`D`$ which are not outermost arcs, and among these choose one which is outermost. $`FN(k)`$, and then either $`F`$ is trivial on $`N(k)`$, or $`F`$ is a meridian of $`N(k)`$. $`F`$ consists of, say, $`2m`$ consecutive arcs, $`F=\sigma _1,\delta _1,\mathrm{},\sigma _m,\delta _m`$, where $`\sigma _iT`$, and $`\delta _iD_0`$. Then at least $`m1`$ of the arcs are parallel to $`\sigma `$, say $`\sigma _1,\mathrm{},\sigma _{m1}`$. If $`\sigma _m`$ is not parallel to $`\sigma `$, then $`F`$ would go around $`N(k)`$ once longitudinally, which is impossible, for $`F`$ bounds a disk in $`N(k)`$. We conclude that all the arcs $`\sigma _i`$ are parallel in $`T`$, as in Figure 1.
Figure 1
Let $`E_1,E_2`$ be two meridian disks of $`N(k)`$ whose boundaries are disjoint from $`D_0`$ and $`\sigma _i`$. Then $`E_1E_2`$ bounds a ball $`B`$ in $`N(k)`$ which contains $`D_0`$ and $`F`$, after possibly isotoping $`F`$.
There are two cases:
(1) $`F`$ is parallel to a disk $`D_1N(k)`$. Clearly $`D_1B`$. $`F`$ and $`D_1`$ cobound a 3-ball $`B_1`$. Suppose that an arc $`\rho _i`$ is contained in $`B_1`$. By joining the endpoints of $`\rho _i`$ with an arc contained in $`D_0`$, we get a simple closed curve $`\rho _i^{}`$, which is contained in $`B`$, and then its wrapping number in $`N(k)`$ is 0, which contradicts the hypothesis.
So suppose no arc $`\rho _i`$ is contained in $`B_1`$. Consider $`D_1D_0`$. This is a collection of arcs which divide $`D_1`$ into regions which are in $`D_0`$ or in its complement. If there is an outermost arc on $`D_1`$ which bounds a disk contained in $`D_0`$, then we can isotope $`F`$ (and then $`D`$) through $`D_0`$ to get a compression disk with fewer intersections with $`T`$. If no outermost arc bounds a disk lying in $`D_0`$, choose any region $`D_0^{}`$ of $`D_1D_0`$. There is an arc $`\alpha D_0^{}`$, whose endpoints lie on $`D_1`$ (then $`\alpha intD_0`$), and there is a disk $`E_0B_1`$, so that $`E_0=\alpha \beta `$, where $`\beta `$ is an arc on $`F`$. Cut $`D`$ along $`E_0`$, getting two disks; at least one of them is a compression disk for $`\stackrel{~}{S}`$, but it has fewer intersections with $`T`$.
(2) $`F`$ is a meridian of $`N(k)`$, so $`F`$ is parallel to $`E_1`$ (see Figure 1). So $`F`$ separates the annulus $`Bint(E_1E_2)`$ into two annuli, denoted by $`A_1`$ and $`A_2`$, where $`A_i=E_iF`$. Let $`\rho _i`$ be an arc of $`\rho `$, and $`\rho _i^{}`$ the simple closed curve obtained by joining the endpoints of $`\rho _i`$ with an arc in $`D_0`$. If the endpoints of $`\rho _i`$ lie in the same annulus $`A_j`$, then $`\rho _i`$ is isotopic rel $`\rho _i`$ (when ignoring the other arcs), to an arc disjoint from $`E_1`$. This implies that the wrapping number of $`\rho _i^{}`$ in $`N(k)`$ is 0, for $`\rho _i^{}`$ is isotopic to a curve disjoint from $`E_1`$. If the endpoints of $`\rho _i`$ lie on different annuli, then $`\rho _i`$ is isotopic rel $`\rho _i`$ to an arc which intersects $`E_1`$ in one point. This implies that the wrapping number of $`\rho _i^{}`$ in $`N(k)`$ is 1. This contradicts the hypothesis that at least one of the arcs have wrapping number $`2`$. This completes the proof when the surface $`S`$ is meridionally compressible.
Suppose now that $`k`$ is parallel to a curve on $`S`$. As before, let $`\sigma `$ be an outermost arc in $`D`$, which bounds a disk $`E`$, so that $`E=\sigma \delta `$, where $`\sigma `$ is contained in $`T`$ and $`\delta `$ in $`\stackrel{~}{S}`$. Recall that the union of $`\sigma `$ and an arc on $`D_0`$ is a curve $`\gamma `$ on $`N(k)`$ which cobounds an annulus $`EE^{}`$ with a curve on $`S`$. Let $`A=\eta (\gamma D_0)`$. Consider all the outermost arcs on $`D`$; recall that any one of them determines a curve on $`T`$ parallel to $`\sigma `$. Let $`F`$ be a region on $`D`$ adjacent to one of the outermost arcs, so that all of its intersections with $`T`$, except at most one are outermost arcs. $`F`$ is then a disk properly embedded in $`N(k)`$, which intersects $`D_0`$ in $`r`$ arcs, and all the arcs on $`TF`$, except at most one are parallel. Let $`F=\sigma _1\delta _1\mathrm{}\sigma _r\delta _r`$, where $`\sigma _iT`$, $`\delta _iD_0`$, and $`\sigma _1,\mathrm{}\sigma _{r1}`$ are parallel to $`\sigma `$. There is an annulus $`\mathrm{\Delta }`$ properly embedded in $`N(k)`$, $`\mathrm{\Delta }=A`$. We can assume that $`D_0,\sigma _1,\mathrm{},\sigma _{r1}`$ are contained in $`A`$. If $`\sigma _r`$ is not parallel to $`\sigma `$, then it intersects each component of $`\mathrm{\Delta }`$ in one point. It follows that $`F`$ is trivial in $`N(k)`$ if and only if each arc $`\sigma _i`$ is parallel to $`\sigma `$.
Suppose first that $`F`$ is trivial in $`N(k)`$, then $`FA`$, and $`F`$ is parallel to a disk $`D_1A`$. We can assume that $`F`$ and $`\mathrm{\Delta }`$ do not intersect. $`F`$ and $`D_1`$ cobound a 3-ball $`B_1`$. Suppose there is an arc $`\rho _iB_1`$. The arc $`\rho _i`$ has no local knots, then it is parallel to an arc $`ϵ_iD_1A`$, i.e., the arc $`\rho _i`$ is isotopic to an arc lying in $`A`$, contradicting the hypothesis. See Figure 2.
Figure 2
If there is no arc $`\rho _i`$ in $`B_1`$, proceed as in the analogous case when $`\stackrel{~}{S}`$ is meridionally compressible, to get a disk $`E_0B_1`$, with $`E_0=\alpha \beta `$, where $`\alpha D_0D_1`$ and $`\beta F`$, so that by cutting $`D`$ along $`E_0`$, we get another compression disk for $`\stackrel{~}{S}`$ with fewer intersections with $`T`$.
Suppose now that $`F`$ is a meridian of $`N(k)`$. Then $`F=\alpha \beta `$, where $`\alpha N(k)A`$, $`\beta A`$, so $`\alpha \sigma _r`$. The annulus $`\mathrm{\Delta }`$ can be isotoped so that $`\mathrm{\Delta }F`$ is a single arc. $`A`$ and $`\mathrm{\Delta }`$ bound a solid torus $`\mathrm{\Delta }^{}`$, and $`F\mathrm{\Delta }^{}`$ is a meridian disk for $`\mathrm{\Delta }^{}`$. If $`\rho _i`$ is any of the arcs of $`\rho `$, then $`\rho _i`$ can be isotoped to be in the 3-ball $`\mathrm{\Delta }^{}intN(F)`$, and so it is parallel to an arc lying on $`A`$. See Figure 3. This completes the proof.
Figure 3
Now we sketch a proof that $`\stackrel{~}{S}`$ is meridionally incompressible. Suppose there is a disk $`D`$ embedded in $`S^3`$, with $`\stackrel{~}{S}D=D`$, which is a nontrivial curve on $`\stackrel{~}{S}`$, and so that $`\widehat{k}`$ intersects $`D`$ transversely in one point. If the disk $`D`$ lies in $`M_1^{}`$, then as $`S^{}`$ is incompressible in $`E(k)`$, it follows that $`D`$ is a trivial curve on $`S^{}`$, which is the boundary of a disk contained in $`S^{}`$ which intersects $`\widehat{k}`$ once, so $`D`$ is not a disk of meridional compression.
So assume that $`D`$ lies in $`M_2^{}`$. Look at the intersections between $`D`$ and $`T`$, and suppose $`D`$ has been chosen to have minimal intersection with $`T`$. This implies that any curve or arc of intersection is essential in $`T`$.
Suppose there is a curve of intersection, innermost in $`D`$, which bounds a disk $`D^{}`$ which meets $`\widehat{k}`$ once. Then $`D^{}`$ must lie in $`N(k)`$. If $`D^{}`$ is a meridian of $`N(k)`$, then each $`\rho _i`$ has wrapping number $`1`$. If $`D^{}`$ is not a meridian of $`N(k)`$, then $`D^{}`$ bounds a disk in $`N(k)`$ which either lies in $`T`$ or contains $`D_0`$. In either case it is impossible for $`\rho `$ to meet $`D^{}`$ in exactly one point. This shows that simple closed curves of intersection cannot bound disks which intersect $`\widehat{k}`$, and then these curves can be removed as before. Suppose there is an outermost arc $`\sigma `$ in $`D`$ which bounds a disk $`E`$ disjoint from $`\widehat{k}`$. Doing an argument as the one done to prove the incompressibility of $`\stackrel{~}{S}`$, we have that $`E`$ does not lie in $`N(k)`$. By the same argument, such a disk can exist only if $`S`$ is meridionally compressible, or if $`k`$ is parallel to a curve on $`S`$. Note that it is always possible to find an outermost arc which bounds a disk disjoint from $`\widehat{k}`$. So the proof is complete, except if we have one of the cases just mentioned.
Suppose first $`S`$ is meridionally compressible. In this case we suppose that the wrapping number of some arc $`\rho _i`$ in $`N(k)`$ is $`3`$. Take an outermost arc of intersection in $`D`$, and suppose it bounds a disk $`D^{}`$ contained in $`N(k)`$, which intersects $`\widehat{k}`$ in at most one point. $`D^{}`$ is a meridian of $`N(k)`$, and then the wrapping number of any arc $`\rho _i`$ in $`N(k)`$ is $`2`$, contradicting the hypothesis in this case. So suppose all outermost arcs bound disks which do not lie in $`N(k)`$. As in the proof of the incompressibility of $`\stackrel{~}{S}`$, these arcs in $`T`$ are all parallel, and each one of them, together with an arc in $`D_0`$ is a meridional curve on $`N(k)`$. If there is a region $`FD`$, such that all the intersections of $`F`$ with $`T`$, except at most one are outermost arcs, and so that $`F`$ is disjoint from $`\widehat{k}`$, proceed as in the proof of the incompressibility of $`\stackrel{~}{S}`$. If there is no such region $`F`$, then $`TD`$ consists of $`m`$ arcs, all of which are outermost arcs in $`D`$, so that the complement of the arcs is a single region $`F^{}`$ contained in $`N(k)`$ and intersecting $`\widehat{k}`$ once. There are two cases:
(1) $`F^{}`$ is parallel to a disk $`D_1N(k)`$. $`F^{}`$ and $`D_1`$ cobound a 3-ball $`B_1`$. If some arc $`\rho _j`$ is contained in $`B_1`$ then its wrapping number in $`N(k)`$ is $`0`$, contradicting the hypothesis. So there is just one arc $`\rho _i`$ which intersects $`B_1`$; one of its endpoints is in $`D_1`$ and the arc intersects $`F^{}`$ in one point. As $`\rho _i`$ has no local knots, $`B_1\rho _i`$ is an unknotted spanning arc in $`B_1`$. As in the case of the incompressibility, there is a disk $`E_0B_1`$, so that $`E_0=\alpha \beta `$, where $`\beta `$ is an arc on $`F^{}`$, and $`\alpha `$ is an arc in $`D_0D_1`$. Cut $`D`$ along $`E_0`$, getting two disks; at least one of them is a meridional compression disk for $`\stackrel{~}{S}`$, but it has fewer intersections with $`T`$.
(2) $`F^{}`$ is a meridian of $`N(k)`$. The same proof as in the case of the incompressibility show that if this happens then the wrapping number of any arc $`\rho _i`$ in $`N(k)`$ is $`2`$.
Suppose now that $`k`$ is parallel to a curve on $`S`$. If $`\sigma `$ is an outermost arc of intersection in $`D`$, bounding a disk $`E`$ which does not intersect $`\widehat{k}`$, then $`E`$ is not contained in $`N(k)`$, and $`E=\sigma \delta `$, where $`\delta `$ is contained in $`\stackrel{~}{S}`$. The union of $`\sigma `$ and an arc on $`D_0`$ is a curve $`\gamma `$ on $`N(k)`$ which cobounds an annulus with a curve on $`S`$; so $`\gamma `$ is a curve which goes around $`N(k)`$ once longitudinally. If there is an outermost arc of intersection in $`D`$ bounding a disk $`D^{}`$ which intersects $`\widehat{k}`$ in one point, then $`D^{}`$ is a meridian disk of $`N(k)`$. In particular this shows that $`DT`$ cannot consist of just one arc. As before, if $`\sigma ^{}`$ is another outermost arc in $`D`$ which bounds a disk disjoint from $`\widehat{k}`$, then $`\sigma ^{}`$ is parallel to $`\sigma `$.
Now proceed as in the proof of the incompressibility of $`\stackrel{~}{S}`$ in the case that $`k`$ is parallel to a curve on $`S`$. The point is to find a region $`FD`$, such that all the intersections of $`F`$ with $`T`$, except at most one, are outermost arcs, and so that $`F`$ is disjoint from $`\widehat{k}`$. If such region exists we are done. If there is an outermost arc on $`D`$ which bounds a disk which intersects $`\widehat{k}`$, then such region $`F`$ does exists, for otherwise $`DT`$ will consist of just one arc. If such a region $`F`$ does not exist, the only possibility left is that $`DT`$ consists of $`m`$ arcs, all of which are outermost arcs, and $`DN(k)`$ is a single disk $`F^{}`$ which meets $`\widehat{k}`$ once. Then $`F^{}`$ is completely contained in the annulus $`A=\eta (\gamma D_0)`$, which implies that $`F^{}`$ is trivial on $`A`$. So $`F^{}`$ bounds a disk $`D_1`$ contained in $`A`$; $`F^{}`$ and $`D_1`$ cobound a ball $`B_1`$. If some arc $`\rho _j`$ is contained in $`B_1`$ then it is parallel to an arc lying on $`A`$, contradicting the hypothesis. So there is just one arc $`\rho _i`$ which intersects $`B_1`$; one of its endpoints is in $`D_1`$ and the arc intersects $`F^{}`$ in one point. As $`\rho _i`$ has no local knots, $`B_1\rho _i`$ is an unknotted spanning arc in $`B_1`$. As in the proof of the incompressibility of $`\stackrel{~}{S}`$, we can boundary compress $`D`$, getting another meridional compression disk for $`\stackrel{~}{S}`$, but with fewer intersections with $`T`$. ∎
###### Remark Remark
The conditions imposed on the tangle $`(N(k),D_0,\rho )`$ are somehow local, i.e., they consider each arc separately. Giving to the tangle some global property might produce a slightly stronger theorem.
## 3. Tunnel number one knots and meridional surfaces
Let $`k`$ be a tunnel number one knot, and $`\tau `$ an unknotting tunnel for $`k`$ which is an embedded arc with endpoints lying on $`N(k)`$. Assume that a neighborhood $`N(k\tau )`$ is decomposed as $`N(k\tau )=N(k)N(\tau )`$, where $`N(k)`$ is a solid torus, $`N(\tau )D^2\times I`$, $`N(k)N(\tau )`$ consists of two disks $`E_0`$ and $`E_1`$, and $`\tau =\{0\}\times I`$.
Let $`k^{}`$ be a knot formed by the union of two arcs, $`k^{}=k_1k_2`$, such that $`k_1`$ is contained in $`N(k)`$, and $`k_2=\tau `$. We say that $`k^{}`$ is an iterate of $`k`$ and $`\tau `$.
###### Lemma 3.1
Let $`k`$ and $`\tau `$ be as above, and let $`k^{}`$ be an iterate of $`k`$ and $`\tau `$. Then $`k^{}`$ is a tunnel number one knot. An unknotting tunnel $`\beta ^{}`$ for $`k^{}`$ is given by the union of $`k`$ and a straight arc in $`N(k)`$ connecting $`k^{}`$ and $`k`$.
###### Demonstration Proof
$`N(k)k`$ is homeomorphic to a product $`T\times [0,1)`$. Let $`\delta `$ be a straight arc in $`N(k)`$ connecting $`k`$ and one of the points $`k_1k_2`$, i.e., it is an arc which intersects each torus $`T\times \{x\}`$ in one point. Then $`\beta ^{}=k\delta `$ is an unknotting tunnel for $`k^{}`$. To see that, slide $`k_1`$ over $`\delta `$ and then over $`k`$ to get a 1-complex which is clearly equivalent to $`k\tau `$, so its complement is a genus 2 handlebody. ∎
Let $`k^{}`$ be an iterate of $`k`$ and $`\tau `$. It follows by construction that $`k^{}N(k\tau )`$. Also if $`\beta ^{}`$ is the unknotting tunnel for $`k^{}`$ given by the lemma, then $`k^{}\beta ^{}N(k\tau )`$. Now $`\beta ^{}`$ can be modified to be an arc $`\beta `$ with endpoints in $`k^{}`$. It follows that if $`k^{}`$ is an iterate of $`k^{}`$ and $`\beta `$, then $`k^{}`$ can be isotoped to lie in $`N(k\tau )`$. By isotoping $`k^{}`$, if necessary, we have that $`k^{}N(\tau )`$ consists of a collection of arcs parallel to $`\tau `$.
###### Theorem 3.2
For each pair of integers $`g\geq 1`$ and $`n\geq 1`$, there are tunnel number one knots $`K`$ such that there is an essential meridional surface $`\widehat{S}`$ in the exterior of $`K`$, of genus $`g`$, and with $`2n`$ boundary components. Furthermore, $`\widehat{S}`$ is meridionally incompressible.
###### Demonstration Proof
Let $`k`$ be a tunnel number one knot. Suppose that $`k`$ has an unknotting tunnel $`\tau ^{}=\tau _1\tau _2`$, where $`\tau _1`$ is a simple closed curve, and $`\tau _2`$ is an arc connecting $`k`$ and $`\tau _1`$. Suppose there is a closed surface $`S`$ of genus $`g1`$ embedded in the exterior of $`k`$, which is special with respect to $`k`$ and $`\tau ^{}`$.
$`S`$ divides $`S^3`$ into two parts, $`M_1`$ and $`M_2`$, where, say, $`\tau _1`$ is contained in $`M_1`$. $`M_2N(\tau ^{})`$ is a cylinder $`RD^2\times I`$, so that $`RS`$ is a disk $`D_1`$, and $`RN(k)`$ is a disk $`D_0`$. Slide $`\tau _1`$ over $`\tau _2`$, to get an arc $`\tau `$ with both endpoints on $`D_0N(k)`$, so that $`\tau M_2`$ consists of two straight arcs contained in $`R`$. The surface $`S`$ and the arc $`\tau `$ intersect in two points.
Let $`k^{}`$ be an iterate of $`k`$ and $`\tau `$; then $`k^{}=k_1k_2`$, where $`k_2`$ is an arc parallel to $`\tau `$, so it intersects $`S`$ in two points. Now $`k_1`$ is an arc in $`N(k)`$ whose endpoints lie on $`D_0`$. By pushing $`k_1`$ into the interior of $`N(k)`$ we get a properly embedded arc in $`N(k)`$. Clearly $`k_1`$ can be chosen so that $`(N(k),D_0,k_1)`$ forms a good tangle, just by taking an arc whose wrapping number in $`N(k)`$ is $`2`$. Note that $`k_1`$ has no local knots in $`N(k)`$, for it is parallel to an arc lying in $`N(k)`$. If $`k`$ is parallel to a curve on $`S`$, let $`\lambda `$ be the curve on $`N(k)`$ which cobounds an annulus with a curve on $`S`$, so that $`\lambda `$ meets $`D_0`$ in one arc; let $`A=\eta (\gamma D_0)`$. Clearly $`k_1`$ can be chosen so that $`(N(k),D_0,k_1)`$ is good with respect to $`A`$, say by twisting $`k_1`$ meridionally as many times as necessary; this can be done because the annulus $`A`$ goes longitudinally once around $`N(k)`$, and the wrapping number of $`k_1`$ in $`N(k)`$ is $`2`$. So $`k^{}`$ can be chosen to be specially knotted in $`N(k\tau )`$. It follows from Theorem 2.1 that $`\widehat{S}=SE(k^{})`$ is an essential meridional surface in $`E(k^{})`$, and $`\widehat{S}`$ consists of two meridians of $`k^{}`$.
This implies that $`k^{}`$ is a tunnel number one knot which has an unknotting tunnel $`\beta ^{}=k\delta `$, where $`\delta `$ is a straight arc connecting $`k^{}`$ and $`k`$. Let $`\beta `$ be the arc obtained after sliding $`k`$ over $`\delta `$. $`N(k^{}\beta )`$ can be chosen so that it is contained in $`N(k\tau )`$. Let $`k^{}`$ be an iterate of $`k^{}`$ and $`\beta `$, so $`k^{}=\kappa _1\kappa _2`$, where $`\kappa _1N(k^{})`$, and $`\kappa _2`$ is the tunnel $`\beta `$. Note that $`k^{}N(k\tau )`$. The arc $`\kappa _1`$ can be isotoped so that $`\kappa _1N(\tau )`$ consist of straight arcs, and it can be chosen so that $`\kappa _1N(\tau )`$ consists of $`n`$ arcs, $`n`$ being a fixed positive integer. So $`k^{}`$ intersects $`S`$ in $`2n`$ points. $`k^{}N(k)`$ then consists of $`n`$ arcs, $`\rho _0,\mathrm{},\rho _n`$, which are properly embedded on $`N(k)`$, and whose endpoints lie in $`D_0`$. Clearly $`k^{}`$ can be chosen so that $`(N(k),D_0,\rho )`$ forms a good tangle, say by choosing them so that each arc $`\rho _i`$, except one, is parallel to $`k_1`$, and so that each has wrapping number $`2`$. The remaining arc can be chosen to be a band sum of the arc $`k_1`$ and the knot $`k`$, so it can be chosen to have wrapping number $`3`$. If $`k`$ is parallel to $`S`$, again $`k^{}`$ can be chosen so that $`(N(k),D_0,\rho )`$ is good with respect to $`A`$. Then by Theorem 2.1, the surface $`\widehat{S}=SE(k^{})`$ is an essential meridional surface in $`E(k^{})`$, and $`\widehat{S}`$ consists of $`2n`$ meridians of $`k^{}`$.
If $`S`$ is meridionally incompressible, then $`\widehat{S}`$ is meridionally incompressible. If $`S`$ is meridionally compressible, then $`k^{}`$ and $`k^{}`$ can be chosen so that $`\widehat{S}`$ is meridionally incompressible. ∎
Figure 4
It follows from the proof of Theorem 3.2 that for the knots $`k`$ constructed in \[E2\], there are many iterates of $`k`$, whose exteriors contain an essential meridional surface. This is because for such knots, there is an unknotting tunnel $`\tau ^{}`$ and a surface $`S`$ which is special with respect to $`k`$ and $`\tau ^{}`$. Note also that some of these knots $`k`$ are parallel to the surface $`S`$, while others are not \[E2,8.2\].
An example which illustrates Theorem 3.2 is shown in Figure 4. Let $`k`$ be the (2,-11)-cable of the left hand trefoil; there is a torus $`S`$ and unknotting tunnel $`\tau ^{}`$ for $`k`$, so that $`S`$ is special with respect to $`k`$ and $`\tau ^{}`$. Note that $`k`$ is parallel to a curve on $`S`$. The knot $`k^{}`$ shown in Figure 4 is an iterate of $`k`$ and the tunnel $`\tau ^{}`$. It is not difficult to check that $`k^{}`$ satisfies the conditions of Theorem 2.1. So it follows that $`\widehat{S}`$ is an essential meridional surface in $`E(k^{})`$.
Combining the last theorem and the construction given in \[E2,§6\], we get the following.
###### Theorem 3.3
For each positive integer $`n`$, there are tunnel number one knots $`K`$, such that in the exterior of $`K`$ there are $`n`$ disjoint, non-parallel, closed incompressible surfaces. Each of the surfaces has genus $`n`$. One of the surfaces is meridionally compressible; the others are meridionally incompressible.
###### Demonstration Proof
Recall the construction given in \[E2,§6\]. Let $`k_1`$ be a knot, $`\tau ^{}=\tau _1\tau _2`$ an unknotting tunnel, and $`S_1`$ an essential surface of genus $`g`$ embedded in $`E(k_1)`$, which intersects $`\tau ^{}`$ in one point. So $`S_1`$ is special with respect to $`k_1`$ and $`\tau ^{}`$ (in both definitions, the one given here and the one in \[E2,§6\]; see \[E2,6.1\], which shows that this is true). Let $`T=N(k_1)`$. Let $`A`$ be an annulus contained in $`T`$, and let $`\alpha `$ be the core of this annulus. Suppose that $`\alpha `$ wraps around $`N(k_1)`$ at least twice longitudinally. If $`k_1`$ is parallel to $`S_1`$, suppose also that $`\mathrm{\Delta }(\gamma ,\alpha )2`$, where $`\gamma `$ is a curve on $`N(k_1)`$ which cobounds an annulus with a curve on $`S_1`$.
$`S_1`$ divides $`S^3`$ into two parts, $`M_1`$ and $`M_2`$, where, say, $`k_1`$ is contained in $`M_2`$. Let $`\tau _2^{}=M_2\tau _2`$; so $`\tau _2^{}`$ is an arc with an endpoint on $`S_1`$ and the other on $`N(k)`$, which we assume lies on the curve $`\alpha `$. The curve $`\alpha `$ goes around $`N(k)`$ at least twice longitudinally, then it is a toroidal graph of type 1 in $`N(k)`$, as defined in \[E2,§4\]. Let $`M=M_2intN(k)`$. $`M`$ is a 3-manifold with incompressible boundary. To show that $`\tau _2^{}\alpha `$ is a cabled graph in $`M_2`$, as defined in \[E2,§6\], it suffices to prove that $`S_1`$ remains incompressible after Dehn filling $`M`$ along $`N(k)`$ with slope $`\alpha `$. If $`k`$ is not parallel to a curve on $`S_1`$, then as $`\mathrm{\Delta }(\alpha ,\mu )2`$, this follows from the main Theorem of \[Wu\]. If $`k`$ is parallel to a curve on $`S_1`$, then by hypothesis, $`\mathrm{\Delta }(\alpha ,\gamma )2`$, and by \[CGLS,2.4.3\] it follows that $`S_1`$ remains incompressible.
$`N(\tau _2^{})`$ is a cylinder $`RD^2\times I`$, so that $`RS_1`$ is a disk $`D_1`$, and $`RN(k_1)`$ is a disk $`D_0`$. Assume that $`D_0A`$. Consider the manifold $`W=M_1RN(A)`$, and let $`\mathrm{\Sigma }=W`$. As $`\tau _2^{}\alpha `$ is a cabled graph in $`M_2`$, it follows from \[E2,6.3\] that $`\mathrm{\Sigma }`$ is incompressible in $`S^3intW`$.
Let $`\tau `$ be the arc obtained by sliding $`\tau _1`$ over $`\tau _2`$, so that $`M_2\tau R`$. Now take an iterate $`k_2`$ of $`k_1`$ and $`\tau `$ of a special form. As before $`k_2=\kappa _1\kappa _2`$, where $`\kappa _2=\tau `$, and $`\kappa _1`$ is an arc in $`N(k_1)`$. Suppose that $`\kappa _1`$ is contained in $`A`$, so that its wrapping number in $`N(A)`$ is $`2`$ (i.e., $`\rho =k_2N(A)`$ is a properly embedded arc in $`N(A)`$ whose endpoints lie on $`D_0^{}=RN(A)`$, and we are requiring that the curve obtained from $`\rho `$ by joining its endpoints with an arc lying on $`D_0^{}`$ has wrapping number $`2`$ in $`N(A)`$). Then $`k_2W`$, and it follows from \[E2,6.4\] that $`\mathrm{\Sigma }`$ is incompressible and meridionally incompressible in $`(W,k_2)`$. So $`\mathrm{\Sigma }`$ is a meridionally incompressible surface contained in the exterior of $`k_2`$ of genus $`g+1`$. By \[E2,8.2\], it follows that $`k_2`$ is not parallel to a curve lying on $`\mathrm{\Sigma }`$
It is not difficult to see that the knot $`k_2`$ also satisfies the hypothesis of Theorem 2.1; in particular, note that the arc $`\kappa _1`$ has wrapping number $`4`$ in $`N(k_1)`$ (for $`\kappa _1`$ has wrapping number $`2`$ in $`N(A)`$, and $`\alpha `$ has winding number $`2`$ in $`N(k_1)`$). Therefore the surface $`\widehat{S}=S_1E(k_2)`$ is meridionally incompressible in $`E(k_2)`$, its boundary consists of two meridians of $`k_2`$. By tubing $`\widehat{S}`$, we get two closed surfaces in $`E(k_2)`$, of genus $`g+1`$. By an application of the handle addition Lemma \[J\], one of the surfaces must be incompressible in $`E(k_2)`$; this has to be the surface lying on $`M_2`$, for the one lying in $`M_1`$ bounds a handlebody. Denote by $`\overline{S}`$ such an incompressible surface; note that it is meridionally compressible. Then there are two different closed incompressible surfaces in $`E(k_2)`$, $`\mathrm{\Sigma }`$ and $`\overline{S}`$. By isotoping $`\overline{S}`$ into $`W`$, these surfaces become disjoint and are obviously non-parallel.
Note that there is an unknotting tunnel $`\beta ^{}=k_1\delta `$ for $`k_2`$, where $`\delta `$ is a straight arc in $`N(k_1)`$ connecting $`k_1`$ and $`k_2`$ which intersects both surfaces $`\mathrm{\Sigma }`$ and $`\overline{S}`$ in one point. Then $`\mathrm{\Sigma }`$ and $`\overline{S}`$ are both special with respect to $`k_2`$ and $`\beta `$. Note that $`\overline{S}`$ is closer to $`k_2`$ and $`\mathrm{\Sigma }`$ is closer to $`k_1`$; that is, the arc $`\delta `$, when going from $`k_2`$ to $`k_1`$, intersects first $`\overline{S}`$ and then $`\mathrm{\Sigma }`$.
We have proved that there is a tunnel number one knot $`k_2`$ which has an unknotting tunnel $`\tau ^{}=\tau _1\tau _2`$, and two disjoint, non-parallel closed incompressible surfaces in its exterior, each of genus $`g+1`$, denoted by $`\mathrm{\Sigma }`$ and $`\overline{S}`$, and which are special with respect to $`k_2`$ and $`\tau ^{}`$. $`\mathrm{\Sigma }`$ is meridionally incompressible and $`\overline{S}`$ is meridionally compressible, and the arc $`\tau _2`$, when going from $`k_2`$ to $`\tau _1`$ intersects first $`\overline{S}`$ and then $`\mathrm{\Sigma }`$. Furthermore, $`k_2`$ is not parallel to a curve lying on any of the two surfaces.
Suppose by induction that we have a tunnel number one knot $`k_n`$, which has an unknotting tunnel $`\tau ^{}=\tau _1\tau _2`$, and $`n`$ disjoint, non-parallel closed incompressible surfaces in its exterior, of genus $`g+n`$, denoted by $`S_1,S_2,\mathrm{},S_n`$, which are special with respect to $`k_n`$ and $`\tau ^{}`$. $`S_2,\mathrm{},S_n`$ are meridionally incompressible and $`S_1`$ is meridionally compressible, and the arc $`\tau _2`$, when going from $`k_n`$ to $`\tau _1`$ intersects the surfaces in the order $`S_1,S_2,\mathrm{},S_n`$. Furthermore, $`k_n`$ is not parallel to a curve lying on any of the surfaces.
The above construction can be repeated with $`k_n`$, $`\tau ^{}=\tau _1\tau _2`$ and $`S_1,S_2,\mathrm{},S_n`$. $`S_i`$ divides $`S^3`$ into $`M_1^i`$ and $`M_2^i`$, where $`k_n`$ lies in $`M_2^i`$. Clearly, if $`i<j`$ then $`M_2^iM_2^j`$. Let $`\alpha `$ be a simple closed curve on $`N(k_n)`$, which goes at least twice longitudinally around $`N(k_n)`$. Suppose that the endpoint of $`\tau _2`$ lies on $`\alpha `$. Let $`\tau _2^i=M_2^i\tau _2`$; then, as above, $`\alpha \tau _2^i`$ is a cabled graph in $`M_2^i`$, for $`k_n`$ is not parallel to a curve lying on $`S_i`$.
Let $`R_i`$ be a regular neighborhood of $`\tau _2^i`$ in $`M_2^i`$, so that $`R_iS_i`$ is a disk $`D_1^i`$, and $`RN(k_n)`$ is a disk $`D_0^i`$. Assume that $`D_0^iA`$, where $`A=\eta (\alpha )`$. Consider the manifold $`W_i=M_1^iR_iN_i(A)`$, where $`N_i(A)`$ is a neighborhood of $`A`$. Let $`\mathrm{\Sigma }_i=W_i`$. As $`\tau _2^i\alpha `$ is a cabled graph in $`M_2^i`$, it follows from \[E2,6.3\] that $`\mathrm{\Sigma }_i`$ is incompressible in $`S^3intW_i`$. The neighborhoods $`R_iN_i(A)`$ can be chosen to be thinner if $`j>i`$, that is, $`M_2^i(R_jN_j(A))R_iN_i(A)`$ if $`i<j`$. Then the surfaces $`\mathrm{\Sigma }_1,\mathrm{\Sigma }_2,\mathrm{},\mathrm{\Sigma }_n`$ are disjoint.
Let $`\tau `$ be the arc obtained by sliding $`\tau _1`$ over $`\tau _2`$, so that $`M_2^i\tau R_i`$, for all $`i`$. Now take an iterate $`k_{n+1}`$ of $`k_n`$ of a special form. As before $`k_{n+1}=\kappa _1\kappa _2`$, where $`\kappa _2=\tau `$, and $`\kappa _1`$ is an arc in $`N(k_n)`$. Suppose that $`\kappa _1`$ is contained in $`A`$, so that its wrapping number in $`N_n(A)`$ is $`2`$. Then $`k_{n+1}W_i`$, and it follows from \[E2,6.4\] that $`\mathrm{\Sigma }_i`$ is incompressible and meridionally incompressible in $`(W_i,k_{n+1})`$. So $`\mathrm{\Sigma }_i`$ is a meridionally incompressible surface in the exterior of $`k_{n+1}`$ of genus $`g+n+1`$. Again by \[E2,8.2\], it follows that $`k_{n+1}`$ is not parallel to a curve lying on $`\mathrm{\Sigma }_i`$
The knot $`k_{n+1}`$ intersects the surface $`S_n`$ in two points, the wrapping number of $`\kappa _2`$ in $`N(k_n)`$ is $`4`$, and $`k_n`$ is not parallel to a curve on $`S_n`$. So $`k_{n+1}`$ and $`S_n`$ satisfy the conditions of Theorem 2.1, and then $`\widehat{S}_n=S_nE(k_{n+1})`$ in an incompressible, meridionally incompressible surface in $`E(k_{n+1})`$ whose boundary consists of two meridians of $`E(k_{n+1})`$. Then as above, by tubing $`\widehat{S}_n`$ on the side of $`M_2^n`$ and isotoping into $`W_n`$, we get a closed surface $`\mathrm{\Sigma }_{n+1}`$ which is incompressible but meridionally compressible in the exterior of $`k_{n+1}`$. The tube added to the surface can be chosen so that it lies in the interior of $`R_nN_n(A)`$; this ensures that $`\mathrm{\Sigma }_{n+1}`$ is disjoint from $`\mathrm{\Sigma }_i`$, for $`1in`$.
(From the surface $`S_i`$ we can also get a meridionally compressible surface $`\mathrm{\Sigma }_i^{}`$, but it will intersect $`\mathrm{\Sigma }_j`$, if $`i<j`$. But note that $`\mathrm{\Sigma }_{n+1}=\mathrm{\Sigma }_n^{},\mathrm{\Sigma }_{n1}^{},\mathrm{},\mathrm{\Sigma }_2^{},\mathrm{\Sigma }_1^{},\mathrm{\Sigma }_1`$, are disjoint).
There is an unknotting tunnel $`\beta ^{}`$ for $`k_{n+1}`$ of the form $`\beta ^{}=k_n\delta `$, where $`\delta `$ is a straight arc in $`N(k_n)`$ connecting $`k_n`$ and $`k_{n+1}`$. Note that $`\delta `$ intersects each surface $`\mathrm{\Sigma }_i`$ in one point; this implies that $`\mathrm{\Sigma }_i`$ is special w.r.t. $`k_{n+1}`$ and $`\beta ^{}`$. Note also that the arc $`\delta `$, when going from $`k_{n+1}`$ to $`k_n`$, intersects the surfaces in the order $`\mathrm{\Sigma }_{n+1},\mathrm{\Sigma }_n,\mathrm{},\mathrm{\Sigma }_1`$. Finally, note that the surfaces cannot be parallel, for if two of them were, then two of the surfaces $`S_i`$ would also be parallel.
This shows that $`k_{n+1}`$, $`\beta ^{}`$, and $`\mathrm{\Sigma }_{n+1},\mathrm{\Sigma }_n,\mathrm{},\mathrm{\Sigma }_1`$ satisfy the induction hypothesis. This completes the proof.
By starting with a surface $`S`$ of genus 1, and repeating the construction $`n1`$ times, we get the desired conclusion. ∎
###### Remark Remark
It follows from the proof of the above theorem that by changing the induction hypothesis, we can find a tunnel number one knot $`k`$, with $`n`$ incompressible surfaces in its exterior, $`S_1,S_2,\mathrm{},S_n`$, so that $`S_n`$ is meridionally incompressible, but $`S_i`$, for $`1in1`$, is meridionally compressible, and $`S_n`$ is the surface which is farthest from the knot. It follows also that there are tunnel number one knots whose exteriors contain two collection of disjoint incompressible surfaces, $`S_1,\mathrm{},S_n`$, and $`\mathrm{\Sigma }_1,\mathrm{},\mathrm{\Sigma }_n`$, where the $`S_i`$ are meridionally incompressible, and the $`\mathrm{\Sigma }_i`$ are meridionally compressible.
### References
# Temperature enhanced persistent currents and “ϕ₀/2 periodicity”
## I Introduction
Although the magnitude of persistent current amplitudes in metallic and semiconductor mesoscopic rings has received experimental attention, little attention has been given to qualitative features of the persistent current. Qualitative features reflect the underlying phenomena and are more important than the order of magnitude. Incidentally, the order of magnitude and sign of the persistent currents in metallic rings are still not understood.
With this background in mind, we study the temperature dependence of persistent currents in a ring strongly coupled to a stub. We predict a non-monotonic temperature dependence of the persistent current amplitude in this geometry, both for the grand-canonical and for the canonical case. We show that there is a crossover temperature ($`T^{}`$) above which the amplitude decreases with temperature and below which it increases with temperature, and we quantify the energy scales determining this crossover temperature. This is in contrast to the ring, where temperature affects the amplitude of persistent currents monotonically. However, so do dephasing and impurity scattering, which are again directly or indirectly temperature dependent, except perhaps in very restrictive parameter regimes where it is possible to realize a Luttinger liquid in the ring in the presence of a potential barrier. A recent study, however, shows that in the framework of a Luttinger liquid a single potential barrier leads to a monotonic temperature dependence of the persistent currents for non-interacting as well as for interacting electrons. We also show a temperature-induced switchover from $`\varphi _0`$ periodicity to $`\varphi _0/2`$ periodicity. This is a highly non-trivial temperature dependence of the fundamental periodicity that cannot be obtained in the ring geometry.
There is also another motivation behind studying the temperature dependence of persistent currents in this ring-stub system. In the ring, the monotonous behavior of the persistent current amplitude with temperature stems from the fact that the states in a ring pierced by a magnetic flux exhibit a strong parity effect . There are two ways of defining this parity effect in the single channel ring (multichannel rings can be generalized using the same concepts and mentioned briefly at the end of this paragraph). In the single-particle picture (possible only in the absence of electron-electron interaction), it can be defined as follows: states with an even number of nodes in the wave function carry diamagnetic currents (positive slope of the eigenenergy versus flux) while states with an odd number of nodes in the wave function carry paramagnetic currents (negative slope of the eigenenergy versus flux) . In the many-body picture (without any electron-electron interaction), it can be defined as follows: if $`N`$ is the number of electrons (spinless) in the ring, the persistent current carried by the $`N`$-body state is diamagnetic if $`N`$ is odd and paramagnetic if $`N`$ is even . Leggett conjectured that this parity effect remains unchanged in the presence of electron-electron interaction and impurity scattering of any form. His arguments can be simplified to say that when electrons move in the ring, they pick up three different kinds of phases: 1) the Aharonov-Bohm phase due to the flux through the ring, 2) the statistical phase due to electrons being Fermions and 3) the phase due to the wave-like motion of electrons depending on their wave vector. The parity effect is due to competition between these three phases along with the constraint that the many-body wave function satisfy the periodic boundary condition (which means if one electron is taken around the ring with the other electrons fixed, the many-body wave function should pick up a phase of 2$`\pi `$ in all). Electron-electron interaction or simple potential scattering cannot introduce any additional phase, although it can change the kinetic energy or the wave vector and hence modify the third phase. Simple variational calculations showed that the parity effect still holds . Multichannel rings can be understood by treating impurities as perturbations to decoupled multiple channels, which means small impurities just open up small gaps at level crossings within the Brillouin zone and keep all qualitative features of the parity effect unchanged. Strong impurity scattering in the multichannel ring can, however, introduce strong level correlations, which is an additional phenomenon. Whether and how the parity effect gets modified by these correlations is an interesting problem.
In a one-dimensional (1D) system where we have a stub of length $`v`$ strongly coupled to a ring of length $`u`$ (see the left bottom corner in Fig. 1), we can have a bunching of levels with the same sign of persistent currents, i.e., many consecutive levels carry persistent currents of the same sign. This is essentially a breakdown of the parity effect. The parity effect breaks down in this single channel system because there is a new phase that does not belong to any of the three phases discussed by Leggett and mentioned in the preceding paragraph. This new phase cancels the statistical phase, and so the N-body state and the (N+1)-body state behave in similar ways or carry persistent currents of the same sign. When the Fermi energy is above the value where we have a node at the foot of the stub (which results in a transmission zero in transport across the stub), there is an additional phase of $`\pi `$ arising due to a slip in the Bloch phase (the Bloch phase is the third kind of phase discussed above, but the extra phase $`\pi `$ due to slips in the Bloch phase is completely different from any of the three phases discussed above because this phase change of the wave function is not associated with a change in the group velocity or kinetic energy or the wave vector of the electron). The origin of this phase slip can be understood by studying the scattering properties of the stub structure. One can map the stub into a $`\delta `$-function potential of the form $`k\mathrm{cot}(kv)\delta (x-x_0)`$. So one can see that the strength of the effective potential is $`k\mathrm{cot}(kv)`$ and is energy dependent. Also, the strength of the effective potential is discontinuous at $`kv=n\pi `$. Infinitesimally above $`\pi `$ an electron faces a positive potential, while infinitesimally below it faces a negative potential. As the effective potential is discontinuous as a function of energy, the scattering phase, which is otherwise a continuous function of energy, in this case turns out to be discontinuous as the Fermi energy sweeps across the point $`kv=\pi `$. As the scattering phase of the stub is discontinuous, the Bloch phase of the electron in the ring-stub system is also discontinuous. This is pictorially demonstrated in Figs. 2 and 3 of Ref. . Within an energy scale $`\mathrm{\Delta }_u\sim 1/u`$ (the typical level spacing of the isolated ring of length $`u`$), if there are $`n_b\sim \mathrm{\Delta }_u/\mathrm{\Delta }_v`$ (where $`\mathrm{\Delta }_v\sim 1/v`$ is the typical level spacing of the isolated stub) such phase slips, then each phase slip gives rise to an additional state with the same slope, and there are $`n_b`$ states of the same slope or the same parity bunching together, with a phase slip of $`\pi `$ between each of them. The fact that there is a phase slip of $`\pi `$ between two states of the same parity was generalized later, arguing from the oscillation theorem, which is equivalent to Leggett’s conjecture for the parity effect. Transmission zeros are an inherent property of Fano resonances generically occurring in mesoscopic systems, and this phase slip is believed to have been observed in a transport measurement. For an elaborate discussion on this, see Ref. . A similar case was studied in Ref. , where it is shown that the transmission zeros and abrupt phase changes arise due to degeneracy of “dot states” with states of the “complementary part”, and hence these are also Fano-type resonances.
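The sign structure of the effective stub potential described above is easy to check numerically. The short Python sketch below is our own illustration (it is not part of the original text, and the stub length is an arbitrary choice); it simply evaluates the strength $`k\mathrm{cot}(kv)`$ immediately below and above $`kv=\pi `$.

```python
import numpy as np

v = 7.0                                    # stub length, arbitrary illustrative value
for kv in (np.pi - 1e-3, np.pi + 1e-3):
    k = kv / v
    strength = k / np.tan(kv)              # effective delta-potential strength k*cot(kv)
    side = "just below" if kv < np.pi else "just above"
    print(f"{side} kv = pi: strength = {strength:+.1f}")
# Just below kv = pi the effective potential is large and negative (attractive);
# just above it is large and positive (repulsive), which is the abrupt change
# behind the slip of the Bloch phase discussed in the text.
```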
The purpose of this work is to show a very non-trivial temperature dependence of persistent currents due to the breakdown of the parity effect. The temperature effects predicted here, if observed experimentally, will further confirm the existence of parity-violating states, which is a consequence of this new phase. To be precise, the new phase is the key source of the results discussed in this work.
## II Theoretical treatment
We concentrate on the single channel system to bring out the essential physics. The multichannel ring also shows a very strong bunching of levels, even though the rotational symmetry is completely broken by the strongly coupled stub and wide gaps open up at the level crossings within the Brillouin zone. Hence let us consider a one-dimensional loop of circumference $`u`$ with a one-dimensional stub of length $`v`$, containing noninteracting spinless electrons. The quantum-mechanical potential is zero everywhere. A magnetic flux $`\varphi `$ penetrates the ring (see the left bottom corner in Fig. 1). In this paper we consider both the grand-canonical case (when particle exchange with a reservoir at temperature $`T`$ is present and the reservoir fixes the chemical potential $`\mu `$; in this case we will denote the persistent current as $`I_\mu `$) and the canonical case (when the number $`N`$ of particles in the ring-stub system is conserved; in this case we will denote the persistent current as $`I_N`$). For the grand-canonical case we suppose that the coupling to the reservoir is weak enough that the eigenvalues of the electron wave number $`k`$ are not affected by the reservoir. They are defined by the following equation.
$$\mathrm{cos}(\alpha )=0.5\mathrm{sin}(ku)\mathrm{cot}(kv)+\mathrm{cos}(ku),$$
(1)
where $`\alpha =2\pi \varphi /\varphi _0`$, with $`\varphi _0=h/e`$ being the flux quantum. Note that Eq. (1) is obtained under the Griffith boundary conditions, which take into account both the continuity of the electron wave function and the conservation of current at the junction of the ring and the stub, together with the hard-wall boundary condition at the dead end of the stub. Each of the roots $`k_n`$ of Eq. (1) determines a one-electron eigenstate with an energy $`ϵ_n=\hbar ^2k_n^2/(2m)`$ as a function of the magnetic flux $`\varphi `$. Further we calculate the persistent current $`I_{N/\mu }=-\partial F_{N/\mu }/\partial \varphi `$, where $`F_N`$ is the free energy for the regime $`N=const`$ and $`F_\mu `$ is the thermodynamic potential for the regime $`\mu =const`$. In the latter case, for the system of noninteracting electrons the problem is greatly simplified, as we can use the Fermi distribution function $`f_0(ϵ)=(1+\mathrm{exp}[(ϵ-\mu )/T])^{-1}`$ when we fill up the energy levels in the ring-stub system, and we can write the persistent current as follows.
$$I_\mu =\sum _nI_nf_0(ϵ_n),$$
(2)
where $`I_n`$ is a quantum-mechanical current carried by the $`n`$th level and is given by
$$\frac{\hbar I_n}{e}=\frac{2k_n\mathrm{sin}(\alpha )}{\frac{u}{2}\mathrm{cos}(k_nu)\mathrm{cot}(k_nv)-[\frac{v}{2}\mathrm{cosec}^2(k_nv)+u]\mathrm{sin}(k_nu)}.$$
(3)
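Equations (1)–(3) translate directly into a numerical recipe for $`I_\mu `$: find the roots $`k_n`$ of Eq. (1) at a given flux, evaluate the level currents of Eq. (3), and sum them with Fermi weights as in Eq. (2). The following Python sketch is a minimal illustration of this procedure and is not taken from the original work; the units ($`\hbar =2m=e=1`$, flux measured in units of $`\varphi _0`$), the grid density, and the root-bracketing tolerances are our own assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def spectrum(alpha, u, v, kmax, npts=20000):
    """Roots k_n of Eq. (1): cos(alpha) = 0.5*sin(k*u)*cot(k*v) + cos(k*u)."""
    def f(k):
        return 0.5 * np.sin(k * u) / np.tan(k * v) + np.cos(k * u) - np.cos(alpha)
    grid = np.linspace(1e-6, kmax, npts)
    keep = np.abs(np.sin(grid * v)) > 1e-3      # stay away from the poles of cot(k*v)
    ks, fs = grid[keep], f(grid[keep])
    dk = grid[1] - grid[0]
    roots = []
    for a, b, fa, fb in zip(ks[:-1], ks[1:], fs[:-1], fs[1:]):
        if fa * fb < 0 and (b - a) < 1.5 * dk:  # sign change between adjacent grid points
            roots.append(brentq(f, a, b))
    return np.array(roots)

def level_current(k, alpha, u, v):
    """Level current I_n of Eq. (3), in units hbar = e = 1."""
    denom = (0.5 * u * np.cos(k * u) / np.tan(k * v)
             - (0.5 * v / np.sin(k * v) ** 2 + u) * np.sin(k * u))
    return 2.0 * k * np.sin(alpha) / denom

def current_grand_canonical(phi, mu, T, u, v, kmax=40.0):
    """I_mu of Eq. (2): Fermi-weighted sum of level currents (phi in units of phi_0)."""
    alpha = 2.0 * np.pi * phi
    k = spectrum(alpha, u, v, kmax)
    eps = k ** 2                                 # hbar^2 k_n^2 / (2m) with hbar = 2m = 1
    x = np.clip((eps - mu) / T, -700.0, 700.0)   # avoid overflow in exp
    f0 = 1.0 / (1.0 + np.exp(x))
    return float(np.sum(level_current(k, alpha, u, v) * f0))
```

Scanning `current_grand_canonical` over temperature at fixed $`\mu `$, with the Fermi level placed either between or inside a group of same-slope levels, is one way to reproduce the qualitative behaviour discussed below; the actual parameter values behind Fig. 1 are not restated here and would have to be taken from the text.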
For the case of $`N=const`$ we must calculate the partition function $`Z`$, which determines the free energy $`F_N=-T\mathrm{ln}(Z)`$,
$$Z=\sum _m\mathrm{exp}\left(-\frac{E_m}{T}\right),$$
(4)
where $`E_m`$ is the energy of a many-electron level. For the system of $`N`$ spinless noninteracting electrons $`E_m`$ is a sum over $`N`$ different (pursuant to the Pauli principle) one-electron energies $`E_m=\sum _{i=1}^Nϵ_{n_i}`$, where the index $`m`$ numbers the different series $`\{ϵ_{n_1},\dots ,ϵ_{n_N}\}_m`$. For instance, the ground-state energy is $`E_0=\sum _{n=1}^Nϵ_n`$.
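For fixed $`N`$ the Fermi-function filling is no longer available, and Eq. (4) requires a sum over many-electron configurations. A brute-force sketch (again our own illustration, not the authors' code) is to enumerate all ways of distributing $`N`$ spinless electrons over the lowest $`M`$ single-particle levels, with $`M`$ chosen large enough that the result no longer changes at the temperatures of interest; it reuses `spectrum()` from the previous snippet.

```python
import numpy as np
from itertools import combinations

def free_energy_canonical(phi, N, T, u, v, M=30, kmax=40.0):
    """F_N = -T*ln(Z) of Eq. (4), with Z truncated to the lowest M single-particle levels."""
    alpha = 2.0 * np.pi * phi
    eps = np.sort(spectrum(alpha, u, v, kmax))[:M] ** 2   # spectrum() from the previous sketch
    e0 = np.sum(eps[:N])                                  # ground-state energy, factored out for stability
    z = 0.0
    for occ in combinations(range(M), N):                 # Pauli principle: N distinct levels
        z += np.exp(-(np.sum(eps[list(occ)]) - e0) / T)
    return e0 - T * np.log(z)

def current_canonical(phi, N, T, u, v, dphi=1e-4, **kw):
    """I_N = -dF_N/dphi (phi in units of phi_0), by a symmetric finite difference."""
    return -(free_energy_canonical(phi + dphi, N, T, u, v, **kw)
             - free_energy_canonical(phi - dphi, N, T, u, v, **kw)) / (2.0 * dphi)
```

The combinatorial sum grows quickly with $`M`$ and $`N`$, so this brute-force form is only practical for the small electron numbers considered below.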
## III Results and discussions
First we consider the peculiarities of the persistent current $`I_\mu `$, $`i.e.,`$ for the regime $`\mu =const`$. In this case the persistent current is determined by Eqs. (1)-(3). Our calculations show that the character of the temperature dependence of the persistent currents depends essentially on the position of the Fermi level $`\mu `$ relative to the groups of levels with similar currents. If the Fermi level lies symmetrically between two groups (which occurs if $`u/\lambda _F=n`$ or $`n+0.5`$, where $`n`$ is an integer and $`\lambda _F`$ is the Fermi wavelength), then the current changes monotonically with temperature, as depicted in Fig. 1 (the dashed curve). In this case the low-lying excited levels carry a current which is opposite to that of the ground state; the line shape of the curve is similar to that of the ring. On the other hand, if the Fermi level lies within a group ($`u/\lambda _F\approx n+0.25`$), then the low-lying excited states carry persistent currents with the same sign. In that case the current increases at low temperatures, as shown in Fig. 1 (the dotted curve). At low temperatures the currents carried by the low-lying excited states add to the ground-state current. However, these excited states are only populated at the cost of the ground state population. Although in the clean ring higher levels carry larger persistent currents, this is not true for the ring-stub system. This is because the scattering properties of the stub are energy-dependent and at a higher energy the stub can scatter more strongly. Hence several energy scales, such as the temperature, the Fermi energy, and the number of populated levels, compete with each other to determine the temperature dependence. A considerable enhancement of the persistent current amplitude appears in our calculations for all choices of parameters whenever the Fermi energy is approximately at the middle of a group of levels that have the same slope. At higher temperatures, when a large number of states are populated, the current decreases exponentially. So in this case the current amplitude has a maximum as a function of temperature, and we define the temperature corresponding to this maximum as the crossover temperature $`T^{}`$.
It is worth mentioning that in the ring system, although there is no enhancement of persistent currents due to temperature, one can define a crossover temperature below which persistent currents decrease less rapidly with temperature. Essentially this is because at low temperatures thermal excitations are not possible because of the large single-particle level spacings. Hence this crossover temperature is the same as the energy scale that separates two single-particle levels, $`i.e.,`$ the crossover temperature is proportional to the level spacing $`\mathrm{\Delta }=hv_F/L`$ in the ideal ring at the Fermi surface, where $`v_F`$ is the Fermi velocity and $`L`$ is the length of the ring. The crossover temperature obtained by us in the ring-stub system is of the same order of magnitude, $`i.e.,`$ $`\mathrm{\Delta }_u=hv_F/u`$, although different in meaning.
In the case of $`u/\lambda _F=n+0.25`$ at low temperatures we show the possibility of obtaining $`\varphi _0/2`$ periodicity, although the parity effect is absent in this system. This is shown in Fig. 2, where we plot $`I_\mu /I_0`$ versus $`\varphi /\varphi _0`$ at a temperature $`k_BT/\mathrm{\Delta }_u`$=0.01 as the solid line, which clearly shows a $`\varphi _0/2`$ periodicity. Two mechanisms were previously known that can give rise to a $`\varphi _0/2`$ periodicity of persistent currents. The first is due to the parity effect , which does not exist in our system, and the second is due to the destructive interference of the first harmonic, which can only appear in a system coupled to a reservoir, so that the Fermi energy is an externally adjustable parameter. The latter mechanism can be understood by putting $`k_FL`$=$`(2n\pi +\pi /2)`$ in eq. 2.11 in Ref. . If this latter mechanism were at work in our situation, then the periodicity should remain unaffected by temperature, and for fixed $`N`$ we should only get $`\varphi _0`$ periodicity , because then the Fermi energy is not an externally adjustable parameter but is determined by $`N`$. We show in Fig. 2 (dashed curve) that the periodicity changes with temperature, and in the next two paragraphs we will also show that one can obtain $`\varphi _0/2`$ periodicity for fixed $`N`$. The dashed curve in Fig. 2 is obtained at a temperature $`k_BT/\mathrm{\Delta }_u`$=0.15 and it shows a $`\varphi _0`$ periodicity. As is known, the crossover temperature depends on the harmonic number $`m`$: $`T_m^{}=T^{}/m`$ . In this case a particular harmonic can actually increase with temperature initially and decrease later, with different harmonics reaching their peaks at different temperatures. Therefore, the second harmonic, which peaks at a lower temperature than the first harmonic, can exceed the first harmonic in certain temperature regimes. At higher temperatures it decreases with temperature faster than the first harmonic, so that the $`\varphi _0`$ periodicity is recovered at higher temperature.
In view of a strong dependence of the considered features on the chemical potential, we consider further the persistent current $`I_N`$ in the ring-stub system with a fixed number of particles $`N=const`$. In this case we calculate the persistent current using the partition function (Eq.(4)).
The numerical calculations show that in this case there is also a non-monotonic temperature dependence of the persistent-current amplitude in the canonical case, as in the grand-canonical case. This is shown in Fig. 1 by the solid curve. The maximum of $`I_N(T)`$ is more pronounced if $`v/u`$ is large and the number of electrons ($`N`$) is small. Besides, if the number of electrons is more than $`n_b/2`$, then the maximum does not exist. The crossover temperature is higher by a factor of 2 as compared to that in $`I_\mu `$. This was also found for the 1D ring , where, as mentioned before, the crossover temperature has a different meaning. To show that one can have $`\varphi _0/2`$ periodicity for fixed $`N`$, we plot in the inset of Fig. 2 the first harmonic $`I_1/I_0`$ (solid curve) and the second harmonic $`I_2/I_0`$ (dotted curve) of $`I_N`$ for $`N`$=5, $`v`$=7$`k_F`$ and $`u`$=2.5$`k_F`$. At low temperature the second harmonic exceeds the first harmonic because the stub reduces the level spacing and, in a sense, can adjust the Fermi energy in the ring to create a partial, but not exact, destruction of the first harmonic. There are distinct temperature regimes where $`I_1`$ exceeds $`I_2`$ and vice versa, and the two curves peak at completely different temperatures. $`I_2`$ also exhibits more than one maximum. Experimentally, different harmonics can be measured separately, and the first harmonic, as shown in Fig. 2, can show a tremendous enhancement with temperature.
An important conclusion that can be drawn from Fig. 2 is that observation of $`\varphi _0/2`$ as well as $`\varphi _0`$ periodicity is possible even in the absence of the parity effect. This arises quite naturally because the absence of the parity effect also means that the persistent-current amplitude can be enhanced with temperature; as a result a particular harmonic can be enhanced with temperature, and different harmonics peak at different temperatures.
## IV Conclusions
In summary, the temperature dependence of persistent currents in a ring strongly coupled to a stub exhibits very nontrivial features. Namely, at small temperatures it can show an enhancement of the amplitude of the persistent current in the grand-canonical as well as in the canonical case. The fundamental periodicity of the persistent currents can also change with temperature. If detected experimentally, these features can lead to a better understanding of the qualitative behaviour of persistent currents. They would also confirm the existence of parity-violating states, which is only possible if there is a new phase apart from the three phases considered by Leggett while generalizing the parity effect. This new phase is the sole cause of the nontrivial temperature dependence. There is a crossover temperature $`T^{}`$ above which the amplitude of persistent currents decreases with temperature. How the crossover temperature is affected by electron correlation effects and dephasing should lead to interesting theoretical and experimental explorations in the future.
Finally, given the large discrepancies between theory and experiment for the persistent currents in disordered rings, one cannot completely rule out the possibility of parity violation in the ring system as well. The stub is not the only way to produce this new phase that leads to a violation of the parity effect. There can be more general ways of obtaining transmission zeros that may also violate the parity effect. In that case, the ring-stub system may prove useful as a theoretical model to understand the consequences of parity violation. Its consequences for the temperature dependence shown here may motivate future work in this direction.
Figure captions
Fig. 1. The ring of length $`u`$ with a stub (resonant cavity) of length $`v`$ threaded by a magnetic flux $`\varphi `$ (left bottom corner). The dependence of the current amplitude $`I_\mu `$ in units of $`I_0=ev_F/u`$ on the temperature $`T`$ in units of $`\mathrm{\Delta }_u/2\pi ^2k_B`$ for the regime $`\mu =const`$ with $`v=15\lambda _F`$ and $`u=(5+x)\lambda _F`$ at $`x=0`$ (dashed curve) and $`x=0.25`$ (dotted curve); and $`I_N/I_0`$ for the isolated ring-stub system with $`v/u=10`$, and $`N=3`$ (solid curve). For the appropriate scale the curves 2 and 3 are multiplied by factors of 3 and 15, respectively.
Fig. 2. The dependence of the persistent current $`I_\mu `$ in units of $`I_0=ev_F/u`$ on the magnetic flux $`\varphi `$ in units of $`\varphi _0`$ for the regime $`\mu =const`$ with $`v=15\lambda _F`$ and $`u=5.25\lambda _F`$ for $`T/\mathrm{\Delta }_u=0.01`$ (dashed curve) and $`T/\mathrm{\Delta }_u=0.15`$ (solid curve). The curve 2 is multiplied by a factor of 5 for the appropriate scale. The inset shows the first harmonic $`I_1`$ (solid curve) and second harmonic $`I_2`$ (dotted curve) of $`I_N`$ in units of $`I_0`$ for N fixed at 5, $`v`$=7$`k_F`$ and $`u`$=2.5$`k_F`$ versus temperature in units of $`\mathrm{\Delta }_u/2\pi ^2k_B`$.
# Testing the direct CP violation of the Standard Model without knowing strong phases
## I Introduction
In the minimal Standard Model with three generations (MSM) there is only one independent CP-violating parameter. Therefore, in principle, determining the weak phase $`\beta `$ from the CP-violating $`B^0\overline{B}^0`$ mixing is sufficient within the MSM. One of the major purposes of exploring the phase $`\gamma `$ in direct CP violation processes is to test the consistency of other CP violation phenomena with the MSM and to search for possible sources of CP violation beyond the MSM.
Many proposals have been made as to how to extract the phase $`\gamma `$ from direct CP-violating processes. The difficulty is that the weak phases are entangled with unknown strong phases due to final state interactions (FSI). In many cases, one can in principle determine both weak and strong phases by measuring sufficiently many decay modes. Since experimental errors accumulate with the number of measured values, however, an unrealistically high precision is often required for measurement. Use of flavor SU(3) symmetry is a powerful way to simplify the theoretical analysis by reducing the number of independent decay amplitudes. Nevertheless, additional dynamical approximations and/or assumptions are needed to make the extraction of $`\gamma `$ feasible. While a model such as the factorization model may give us some idea of the relative magnitudes of decay amplitudes, the strong phases of amplitudes are much harder to compute unless short-distance QCD completely dominates.<sup>*</sup><sup>*</sup>*If the strong phases of two-body $`B`$ decays were dominated by short-distance QCD, all strong phases would be small and, in principle, calculable. However, a convincing quantitative proof is yet to be given for the short-distance dominance. Because of the uncertainty in the strong phases, some authors are content with only setting bounds on the phase $`\gamma `$.
In order to extract the weak phases from direct CP violation, we need a set of decay modes which are described by two or more independent decay amplitudes differing in the strong phase. The fewer the independent amplitudes are, the simpler the analysis is. We would like to avoid theoretical assumptions and approximations on those decay amplitudes as much as possible, preferably treating them as free parameters without theoretical prejudice. For this reason, we should study a set of decay modes that involves the smallest number of independent amplitudes. With SU(3) symmetry He recently derived several relations for the rate differences of the two-body octet pseudoscalar-meson decay modes which do not depend on strong interaction effects at all. The final states considered by He contain two or more isospin or SU(3) eigenstates to generate a strong phase difference. However, the high inelasticity and multichannel coupling of the final states of the $`B`$ decay make a CP asymmetry observable even in final states which are eigenstates of isospin or SU(3). We shall briefly recall this important fact in Section II in order to add a few more promising relations of the same nature to the list of . In Section III, we derive the relation for the rate differences of singlet-octet two-body final states, which are not only isospin eigenstates but also octet eigenstates of SU(3). Comments will be made on the feasibility of the test in Section IV.
## II Final state interaction
When many decay channels are open in a heavy-particle decay, the FSI phases of the decay amplitudes for experimentally measured final states are not simply related to the phases of pure strong interaction. Take, for example, a two-body final state $`|ab`$. The state $`|ab`$ is not one of the eigenstates $`|\alpha `$ of the strong-interaction S matrix. When the eigenchannels of the S matrix are defined by
$`\beta |S|\alpha `$ $`=`$ $`\beta ^{\mathrm{out}}|\alpha ^{\mathrm{in}},`$ (1)
$`=`$ $`\delta _{\beta \alpha }e^{2i\delta _\alpha },`$ (2)
an experimentally observable final state is a linear combination of them. The state $`|ab`$ is expanded as
$$|ab=\sum _\alpha O_{ab,\alpha }|\alpha .$$
(3)
Time reversal invariance of strong interaction allows us to choose the S matrix to be symmetric and $`O_{ab,\alpha }`$ to be an orthogonal matrix. For a CP-even decay operator $`𝒪`$<sub>i</sub>, time-reversal operation leads us to
$$\alpha ^{\mathrm{out}}|𝒪_i|B=B|𝒪_i|\alpha ^{\mathrm{out}}\alpha ^{\mathrm{out}}|\alpha ^{\mathrm{in}}.$$
(4)
Therefore the decay amplitude takes the form
$$\alpha ^{\mathrm{out}}|𝒪_i|B=M_\alpha ^ie^{i\delta _\alpha },$$
(5)
where
$$(M_\alpha ^i)^{}=M_\alpha ^i.$$
(6)
This is the well-known phase theorem in the case that the final state is an eigenstate of the S matrix. When $`|ab`$ is not an eigenstate of the S matrix, but is given by Eq. (3), the decay amplitude for $`B\to ab`$ is a superposition of $`B\to \alpha `$:
$$M^i(B\to ab)=\sum _\alpha O_{ab,\alpha }e^{i\delta _\alpha }M_\alpha ^i.$$
(7)
We should learn two important facts from Eq. (7). One is that the net (strong) phase of $`M(B\to ab)`$ is not simply related to the eigenphase shifts $`\delta _\alpha `$ of the S matrix. It is not given by the phase of any pure strong-interaction process, elastic or inelastic, of $`|ab`$. The other is that the phase of $`M(B\to ab)`$ depends on the operator $`𝒪`$<sub>i</sub>. For instance, the strong phase of the $`B\to K\pi `$ amplitude into total isospin 1/2 takes different values for the tree decay process and for the penguin decay process. There is no reason to expect the two values to be even close to each other, since the different quark structures of $`𝒪`$<sub>1,2</sub> and $`𝒪`$<sub>3∼10</sub> generate very different sets of $`M_\alpha ^i`$ in general. The strong phases of the tree and the penguin amplitude of $`(K\pi )_{I=1/2}`$ can differ just as much as those of $`(K\pi )_{I=1/2}`$ and $`(K\pi )_{I=3/2}`$ do, or as those of $`(K\pi )_\mathrm{𝟖}`$ and $`(K\pi )_{\mathrm{𝟐𝟕}}`$ of SU(3) do.
Thanks to this property of the FSI in the B decay, the CP asymmetry can appear even in an isospin eigenstate or an SU(3) eigenstate. A merit of considering such final states is that since their strong interaction parametrization is very simple, we can more easily disentangle the weak phases from the strong phases.
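The operator dependence of the net strong phase in Eq. (7) can be made concrete with a toy numerical example; the eigenphases, the mixing of $`|ab`$ into the eigenchannels and the real amplitudes $`M_\alpha ^i`$ below are invented for illustration and are not fitted to any decay.

```python
import numpy as np

# Two S-matrix eigenchannels with eigenphases delta_alpha; |ab> mixes into them
# through an orthogonal matrix (here a single angle).  "Tree" and "penguin"
# operators feed the eigenchannels with different real amplitudes M_alpha^i,
# so the observable amplitudes M^i(B -> ab) of Eq. (7) acquire different net phases.
delta = np.array([0.3, 1.1])                    # eigenphase shifts (radians), assumed
theta = 0.6                                     # mixing angle, assumed
O_ab = np.array([np.cos(theta), np.sin(theta)])
M_eig = {"tree": np.array([1.0, 0.4]),          # real eigenchannel amplitudes, assumed
         "penguin": np.array([0.2, 0.9])}

for name, M in M_eig.items():
    A = np.sum(O_ab * np.exp(1j * delta) * M)   # Eq. (7)
    print(f"{name:8s} |M| = {abs(A):.3f}  net strong phase = {np.angle(A):.3f} rad")
```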
## III SU(3) analysis
We cast the effective Hamiltonian of the B decay into the form
$$H_{eff}\simeq 2\sqrt{2}G_F\sum _{q=d,s}(V_{ub}V_{uq}^{}\sum _{i=1}^2C_i𝒪_i^q-V_{tb}V_{tq}^{}\sum _{j=3}^{10}C_j𝒪_j^q)+\mathrm{H}.\mathrm{c}.,$$
(8)
where the decay operators are defined by
$`𝒪_1^q`$ $`=`$ $`(\overline{u}\gamma ^\mu b_L)(\overline{q}\gamma _\mu u_L)-(\overline{c}\gamma ^\mu b_L)(\overline{q}\gamma _\mu c_L),`$ (9)
$`𝒪_2^q`$ $`=`$ $`(\overline{q}\gamma ^\mu b_L)(\overline{u}\gamma _\mu u_L)-(\overline{q}\gamma ^\mu b_L)(\overline{c}\gamma _\mu c_L),`$ (10)
$`𝒪_3^q`$ $`=`$ $`{\displaystyle \sum _{q^{\prime }=u,d,s,c}}(\overline{q}\gamma ^\mu b_L)(\overline{q^{\prime }}\gamma _\mu q_L^{\prime })+{\displaystyle \frac{C_2}{C_3}}(\overline{q}\gamma ^\mu b_L)(\overline{c}\gamma _\mu c_L),`$ (11)
$`𝒪_4^q`$ $`=`$ $`{\displaystyle \sum _{q^{\prime }=u,d,s,c}}(\overline{q_\alpha }\gamma ^\mu b_{\beta L})(\overline{q_\beta ^{\prime }}\gamma _\mu q_{\alpha L}^{\prime })+{\displaystyle \frac{C_1}{C_4}}(\overline{q_\alpha }\gamma ^\mu b_{\beta L})(\overline{c_\beta }\gamma _\mu c_{\alpha L}),`$ (12)
$`𝒪_5^q`$ $`=`$ $`{\displaystyle \sum _{q^{\prime }=u,d,s,c}}(\overline{q}\gamma ^\mu b_L)(\overline{q^{\prime }}\gamma _\mu q_R^{\prime }),`$ (13)
$`𝒪_6^q`$ $`=`$ $`{\displaystyle \sum _{q^{\prime }=u,d,s,c}}(\overline{q_\alpha }\gamma ^\mu b_{\beta L})(\overline{q_\beta ^{\prime }}\gamma _\mu q_{\alpha R}^{\prime }),`$ (14)
$`𝒪_7^q`$ $`=`$ $`{\displaystyle \frac{3}{2}}{\displaystyle \sum _{q^{\prime }=u,d,s,c}}(\overline{q}\gamma ^\mu b_L)e_{q^{\prime }}(\overline{q^{\prime }}\gamma _\mu q_R^{\prime }),`$ (15)
$`𝒪_8^q`$ $`=`$ $`{\displaystyle \frac{3}{2}}{\displaystyle \sum _{q^{\prime }=u,d,s,c}}(\overline{q_\alpha }\gamma ^\mu b_{\beta L})e_{q^{\prime }}(\overline{q_\beta ^{\prime }}\gamma _\mu q_{\alpha R}^{\prime }),`$ (16)
$`𝒪_9^q`$ $`=`$ $`{\displaystyle \frac{3}{2}}{\displaystyle \sum _{q^{\prime }=u,d,s,c}}(\overline{q}\gamma ^\mu b_L)e_{q^{\prime }}(\overline{q^{\prime }}\gamma _\mu q_L^{\prime }),`$ (17)
$`𝒪_{10}^q`$ $`=`$ $`{\displaystyle \frac{3}{2}}{\displaystyle \sum _{q^{\prime }=u,d,s,c}}(\overline{q_\alpha }\gamma ^\mu b_{\beta L})e_{q^{\prime }}(\overline{q_\beta ^{\prime }}\gamma _\mu q_{\alpha L}^{\prime }).`$ (18)
In grouping the terms in $`H_{eff}`$, we have expressed the coefficient $`V_{cb}V_{cq}^{}`$ of the tree operators involving $`c`$ and $`\overline{c}`$ in terms of $`V_{ub}V_{uq}^{}`$ and $`V_{tb}V_{tq}^{}`$ by using the unitarity relations of three generations,
$$V_{ub}V_{uq}^{}+V_{cb}V_{cq}^{}+V_{tb}V_{tq}^{}=0,(q=d,s),$$
(19)
and have distributed them into $`𝒪`$<sub>1∼4</sub> in Eqs. (9)-(12). The tree operators of $`c\overline{c}`$ are potentially important if the FSI should allow a substantial conversion of $`c\overline{c}`$ into light quark pairs.
It is important to notice here that all decay operators ($`𝒪`$$`{}_{}{}^{d}{}_{i}{}^{}`$, $`𝒪`$$`{}_{}{}^{s}{}_{i}{}^{}`$) ($`i=1,\dots ,10`$) form doublets under the U-spin rotation ($`d\leftrightarrow s`$) of an SU(3) subgroup. Under U-spin, $`B^\pm `$ are singlets while $`(B^0,B_s^0)`$ forms a doublet. Likewise $`(\pi ^{-},K^{-})`$ is a doublet. This U-spin property immediately leads to six of the relations written in . Here we consider the $`B^\pm `$ decay into $`\pi ^\pm \eta ^{}`$ and $`K^\pm \eta ^{}`$ instead of the $`B^0/\overline{B}^0`$ and $`B_s/\overline{B}_s`$ decays:
$`B^\pm `$ $`\to `$ $`K^\pm \eta ^{},`$ (20)
$`B^\pm `$ $`\to `$ $`\pi ^\pm \eta ^{}.`$ (21)
In the SU(3) symmetry limit leaving out the $`\eta \eta ^{}`$ mixing, the decay amplitudes for $`B^\pm \to \pi ^\pm \eta ^{}`$ and $`K^\pm \eta ^{}`$ are parametrized in the form
$`M(\pi ^+\eta ^{})`$ $`=`$ $`V_{ud}V_{ub}^{}T+V_{td}V_{tb}^{}P,`$ (22)
$`M(K^+\eta ^{})`$ $`=`$ $`V_{us}V_{ub}^{}T+V_{ts}V_{tb}^{}P,`$ (23)
where
$`T`$ $`=`$ $`2\sqrt{2}G_F\pi ^+\eta ^{}|{\displaystyle \sum _{i=1}^2}C_i𝒪_i^{}|B^+`$ (24)
$`P`$ $`=`$ $`-2\sqrt{2}G_F\pi ^+\eta ^{}|{\displaystyle \sum _{j=3}^{10}}C_j𝒪_j^{}|B^+.`$ (25)
The QCD and electroweak penguin contributions have been combined into a single term
$$P=P_{QCD}+P_{EW}.$$
(26)
The decay amplitudes for $`B^{-}\to \pi ^{-}\eta ^{}`$ and $`K^{-}\eta ^{}`$ are obtained from Eqs. (22) and (23) by complex conjugation of the quark mixing matrix elements.
The FSI turns the amplitudes $`T`$ and $`P`$ complex and, according to our argument in Section II, their phases are in general different from each other. Therefore the rate differences
$`\mathrm{\Delta }(\pi ^\pm \eta ^{}(K^\pm \eta ^{}))`$ $`=`$ $`\mathrm{B}(B^+\to \pi ^+\eta ^{}(K^+\eta ^{}))-\mathrm{B}(B^{-}\to \pi ^{-}\eta ^{}(K^{-}\eta ^{}))`$ (27)
$`=`$ $`4|T||P|\mathrm{sin}\delta \theta \mathrm{Im}(V_{uq}V_{ub}^{}V_{tb}V_{tq}^{})(q=d,s),`$ (28)
where $`\delta \theta =\mathrm{arg}(T^{}P)`$, are nonvanishing. Though the final states are isospin eigenstates, $`\mathrm{\Delta }(\pi ^\pm \eta ^{})`$ and $`\mathrm{\Delta }(K^\pm \eta ^{})`$ can be just as large as those of isospin non-eigenstates. The imaginary part of the product of the quark mixing matrix elements is common to $`q=d`$ and $`s`$ up to a sign:
$$\mathrm{Im}(V_{ud}V_{ub}^{}V_{tb}V_{td}^{})=-\mathrm{Im}(V_{us}V_{ub}^{}V_{tb}V_{ts}^{}).$$
(29)
We thus come to the relation,
$$\mathrm{\Delta }(\pi ^\pm \eta ^{})=-\mathrm{\Delta }(K^\pm \eta ^{}).$$
(30)
This relation is not useful in extracting the weak phase $`\gamma `$ unless we know $`|T||P|`$ and $`\delta \theta `$ beforehand from somewhere else. From the viewpoint of testing CP violations in the MSM, however, it is one of the cleaner tests and will serve the same goal as determining $`\gamma `$ through complex procedures.
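As a sanity check of Eqs. (28)-(30), one can insert a Wolfenstein parametrisation of the CKM factors; the CKM numbers below are typical textbook values and the hadronic inputs $`|T|`$, $`|P|`$ and $`\delta \theta `$ are invented, so only the ratio of the two rate differences is meaningful.

```python
import numpy as np

# Wolfenstein parameters (illustrative values, not a fit)
lam, A, rho, eta = 0.22, 0.82, 0.16, 0.35
Vud, Vus = 1.0 - lam**2 / 2.0, lam
Vub = A * lam**3 * (rho - 1j * eta)
Vtd, Vts, Vtb = A * lam**3 * (1.0 - rho - 1j * eta), -A * lam**2, 1.0

T_amp, P_amp, dtheta = 1.0, 0.3, 0.7            # |T|, |P|, arg(T* P): assumed values

def rate_diff(Vuq, Vtq):
    # Eq. (28): Delta = 4 |T||P| sin(delta_theta) Im(V_uq V_ub* V_tb V_tq*)
    return 4.0 * T_amp * P_amp * np.sin(dtheta) * np.imag(Vuq * np.conj(Vub) * Vtb * np.conj(Vtq))

d_pi, d_K = rate_diff(Vud, Vtd), rate_diff(Vus, Vts)
print(d_pi, d_K, "ratio:", d_pi / d_K)          # ratio ~ -1, i.e. Eqs. (29)-(30)
```

With an exactly unitary CKM matrix the ratio is exactly -1; the truncated Wolfenstein expansion used here reproduces it to a few per cent.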
## IV Comments on SU(3) breaking
The $`K^\pm \eta ^{}`$ mode is the largest in branching fraction among all charmless two-body $`B^\pm `$ decay modes so far measured. The $`\pi ^\pm \eta ^{}`$ mode has not been measured. In a theoretical analysis based on SU(3), $`\pi ^\pm \eta ^{}`$ is expected to be competitive with $`\pi ^\pm \pi ^0`$ and to be one of the largest in branching fraction among the flavorless final states. Measurability of a CP asymmetry in $`\pi ^\pm \eta ^{}`$ was actually pointed out by the authors of and . The competitive rates of $`K^\pm \eta ^{}`$ and $`\pi ^\pm \eta ^{}`$ may give an advantage to Eq. (30) over the relation $`\mathrm{\Delta }(\pi ^+K^0/\pi ^{}\overline{K}^0)=\mathrm{\Delta }(K^+\overline{K}^0/K^{}K^0)`$ of .
We have ignored SU(3) breaking of strong interaction in Eq. (30). It is likely that the SU(3) breaking in rescattering dynamics is insignificant at the energy of B mass. In the factorization model, the SU(3) breaking associated with each meson can be incorporated by $`\mathrm{\Delta }f_{\pi (K)}\mathrm{\Delta }`$. We shall learn more about reliability of factorization by comparing the theoretical predictions with experiment.
The $`\eta \eta ^{}`$ mixing is one manifestation of SU(3) breaking. This may be viewed as a disadvantage of our relation. Recently a dynamical model was proposed to compute the decay matrix elements of $`B^\pm \to \pi ^\pm \eta ^{}`$ and $`K^\pm \eta ^{}`$. In this model $`\eta ^{}`$ is generated through two gluons in the penguin diagrams, while $`u\overline{u}`$ forms $`\eta ^{}`$ in the tree diagrams as a color-favored process. If these processes are the dominant ones, the $`\eta \eta ^{}`$ mixing correction appears as a common factor on both sides of Eq. (30) and does not affect the relation.
Since we expect $`\mathrm{B}(B^\pm \to K^\pm \eta ^{})`$ to be much larger than $`\mathrm{B}(B^\pm \to \pi ^\pm \eta ^{})`$, a small difference between two large numbers will be searched for on the right-hand side of Eq. (30), while the left-hand side will hopefully be obtained as a fairly large difference between two smaller numbers. If we take the estimates by the authors of as a ballpark figure, their preferred values for $`B^\pm \to \pi ^\pm \eta ^{}`$ lead to $`\mathrm{\Delta }(\pi ^\pm \eta ^{})\approx 4\times 10^{-6}`$, which corresponds to a 40% asymmetry. Then we shall be looking for a 3% asymmetry in the $`K^\pm \eta ^{}`$ mode, up to a possible 22% upward correction due to $`f_K/f_\pi `$. If this is the case, testing the relation with the MSM will be a rather remote possibility in the B factory experiments.
The same relation as Eq. (30) should hold for $`B^\pm \to \rho ^\pm \eta ^{}`$ and $`K^{*\pm }\eta ^{}`$:
$$\mathrm{\Delta }(\rho ^\pm \eta ^{})=-\mathrm{\Delta }(K^{*\pm }\eta ^{}).$$
(31)
We can replace $`\rho ^\pm `$ and $`K^{*\pm }`$ with the corresponding components of any meson octet, respectively.
Finally, it is tempting to try for $`B^\pm \to \pi ^\pm \psi `$ and $`K^\pm \psi `$
$$\mathrm{\Delta }(\pi ^\pm \psi )=-\mathrm{\Delta }(K^\pm \psi ),$$
(32)
since the relation is free from the $`\eta \eta ^{}`$ mixing contamination. Here again we may replace $`\pi ^\pm `$ and $`K^\pm `$ with the corresponding components of any meson octet. Furthermore, the rates are high and the experimental signature of $`l^+l^{-}\pi ^\pm (K^\pm )`$ is very clean. Unfortunately, the asymmetries will be even smaller.
###### Acknowledgements.
This work was supported in part by the Director, Office of Science, Office of High Energy and Nuclear Physics, Division of High Energy Physics, of the U.S. Department of Energy under Contract DE–AC03–76SF00098 and in part by the National Science Foundation under Grant PHY–95–14797.
# On the formation of hydrogen-deficient post-AGB stars
## 1 Introduction
Stars on the so called Asymptotic Giant Branch (AGB) have strong stellar winds, which gradually reduce the mass of the hydrogen-rich stellar envelope. When this envelope mass falls below a critical value, the stars leave the AGB to become post-AGB stars, central stars of planetary nebulae (CSPNe), and finally white dwarfs. Post-AGB stars show a variety of surface abundances Méndez (1991). About $`80\%`$ of all CSPNe show a solar-like composition while the remaining ones are hydrogen-deficient. Among the latter are Wolf-Rayet type CSPNe (\[WR\]-CSPNe) and the extremely hot PG 1159 stars with typical surface abundances of \[He/C/O\]=\[0.33/0.50/0.17\] (Dreizler and Heber Dreizler and Heber (1998); see also Koesterke and Hamann Koesterke and Hamann (1997) and references in both papers). Hydrogen-deficiency is also found in white dwarfs of spectral type DO Dreizler and Werner (1996).
The origin of the hydrogen-deficiency in post-AGB stars is a longstanding problem. Most post-AGB calculations predict a hydrogen-rich surface composition Schönberner (1979, 1983); Wood and Faulkner (1986); Vassiliadis and Wood (1994); Blöcker (1995a); Blöcker and Schönberner (1997). So far, no post-AGB models have reproduced the observed high carbon and oxygen abundance. The most promising scenario for obtaining a hydrogen-deficient surface composition invokes a very late thermal pulse Fujimoto (1977); Schönberner (1979); Iben et al. (1983) — i.e. a pulse which occurs after the star has already left the AGB — during which the pulse driven convection zone can mix hydrogen-free material out to the stellar surface. Within this born-again scenario, Iben and McDonald Iben and McDonald (1995) obtain surface mass fractions of \[He/C/O\]=\[0.76/0.15/0.01\], i.e. their model indeed became strongly hydrogen-deficient. However, the large oxygen abundance found in most H-deficient post-AGB stars could not be reproduced by these, nor by any other, calculations. These difficulties have posed a strong limitation to the whole scenario.
In this *Letter* we present a post-AGB model sequence starting from an AGB model computed with overshoot Herwig et al. (1997), and using a numerical method of computing nuclear burning and time-dependent convective mixing simultaneously.
## 2 Numerical method
The stellar models are based on the stellar evolution code described by Blöcker Blöcker (1995b). However, the treatment of the chemical evolution was entirely replaced by a numerical scheme which solves the time dependence of the considered nuclear species — i.e., the changes due to thermonuclear reactions and due to mixing — in one single step. This enables us, in contrast to earlier investigations of very late thermal pulses, to reliably predict the chemical abundance profiles and the nuclear energy generation rates in situations where the time scales of nuclear burning and mixing are comparable. The abundance change for each isotope at each mesh point due to diffusive mixing and nuclear processing is given by
$$\left(\frac{\mathrm{d}X_j}{\mathrm{d}t}\right)=\frac{\partial }{\partial m}\left[\left(4\pi r^2\rho \right)^2D\frac{\partial X_j}{\partial m}\right]+\widehat{F}_jX_j,$$
(1)
where $`X_j`$ contains the abundances of all considered isotopes at the $`j^{\mathrm{th}}`$ mesh point, $`\widehat{F}_j`$ is the nuclear rate matrix, $`D`$ is the diffusion coefficient describing the efficiency of convective mixing, $`r`$ is the radius, $`m`$ the mass coordinate and $`\rho `$ the density. This leads to a set of non-linear equations with $`MN`$ unknowns, where M is the number of grid points and $`N`$ is the number of isotopes. In the present calculations, $`M`$ is of the order of 2000, and $`N=15`$, as the main thermonuclear reactions for hydrogen burning through the pp chains and the CNO cycle as well as the main helium burning reactions are included. The solution is obtained fully implicitly with a Newton-Raphson iteration scheme by making use of the band-diagonal structure of the problem. The scheme converges to sufficient precision within about 3 iterations. A coupled solution of one nuclear reaction at a time and time-dependent mixing, including also the structure equations, has already been applied by Eggleton Eggleton (1972).
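Eq. (1) couples diffusive mixing and nuclear burning in one implicit system. The following sketch shows the idea on a deliberately reduced problem: one backward-Euler step for two mass fractions (protons and carbon-12) on a uniform grid, with plain diffusion standing in for the full Lagrangian mixing term and a single bilinear rate standing in for the proton-capture reaction. The Newton-Raphson iteration here uses a dense numerical Jacobian, whereas the calculation described above exploits the band-diagonal structure; all grid sizes, rates and abundances are illustrative, not stellar-model values.

```python
import numpy as np

M, dt, dx = 60, 1.0e3, 1.0          # zones, time step, grid spacing (arbitrary units)
D, k = 5.0e-4, 2.0e-2               # mixing diffusivity and capture-rate constant (assumed)

Xp = np.where(np.arange(M) > 45, 0.7, 0.0)      # hydrogen-rich envelope on top
Xc = np.where(np.arange(M) > 45, 0.0, 0.4)      # carbon-rich intershell below

def lap(v):                                      # zero-flux Laplacian
    out = np.zeros_like(v)
    out[1:-1] = v[2:] - 2.0 * v[1:-1] + v[:-2]
    out[0], out[-1] = v[1] - v[0], v[-2] - v[-1]
    return out / dx**2

def residual(y, xp_old, xc_old):
    xp, xc = y[0::2], y[1::2]
    r = k * xp * xc                              # toy proton-capture rate
    Rp = (xp - xp_old) / dt - D * lap(xp) + r    # dXp/dt = mixing - burning
    Rc = (xc - xc_old) / dt - D * lap(xc) + 12.0 * r   # 12 mass units of C per unit of H
    out = np.empty(2 * M)
    out[0::2], out[1::2] = Rp, Rc
    return out

def newton_step(xp, xc, tol=1e-10):
    y = np.empty(2 * M)
    y[0::2], y[1::2] = xp, xc
    xp_old, xc_old = xp.copy(), xc.copy()
    for _ in range(20):
        R = residual(y, xp_old, xc_old)
        if np.max(np.abs(R)) < tol:
            break
        J = np.zeros((2 * M, 2 * M))             # numerical Jacobian (dense, sketch only)
        eps = 1e-8
        for j in range(2 * M):
            ypert = y.copy()
            ypert[j] += eps
            J[:, j] = (residual(ypert, xp_old, xc_old) - R) / eps
        y -= np.linalg.solve(J, R)
    return y[0::2], y[1::2]

Xp, Xc = newton_step(Xp, Xc)
print("surface proton mass fraction after one step:", round(Xp[-1], 4))
```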
## 3 The AGB starting model
We start with an AGB model with $`M_{\mathrm{ZAMS}}=2\mathrm{M}_{\odot }`$ which has been evolved over 22 thermal pulses, including convective overshoot at all convective boundaries. The treatment and efficiency (f=0.016) of overshoot is the same as in Herwig et al. Herwig et al. (1997). In comparison to models without overshoot, the intershell region is much more strongly enriched in carbon and oxygen (mass fractions \[He/C/O\]=\[0.35/0.43/0.19\]), which causes a stronger third dredge-up Herwig et al. (1999). At the $`16^{\mathrm{th}}`$ thermal pulse (TP) the hydrogen-free core has a mass of $`M_{\mathrm{core}}=0.573\mathrm{M}_{\odot }`$ and dredge-up starts to operate, leading to a carbon star model at the last computed TP. At this stage the model star has a total mass of $`M=1.42\mathrm{M}_{\odot }`$ and $`M_{\mathrm{core}}=0.604\mathrm{M}_{\odot }`$. We then artificially increase the mass loss ($`\dot{M}>10^{-3}\mathrm{M}_{\odot }/\mathrm{yr}`$) in order to force the model to leave the AGB at the right phase to develop a very late thermal pulse. This procedure is justified for this exploratory work because it does not affect the nucleosynthesis and mixing during the very late TP.
## 4 Evolution through the very late thermal pulse
Two cases of the born-again scenario should be distinguished. Depending on the time when the post-AGB thermal pulse occurs, shell hydrogen burning may still be active or may already have ceased.
In the first case, the He-flash driven convection zone cannot extend into the hydrogen-rich envelope due to the entropy barrier generated by the burning shell Iben (1976). In the second case, which is realized in our model sequence, hydrogen shell burning is extinct and the star has entered the white dwarf cooling domain (Fig. 1). We designate a TP in this situation as a very late TP.
As the helium luminosity increases in the course of the He-flash in our model sequence (first mark in Fig. 1), the corresponding region of convective instability enlarges (Fig. 2). When the upper convective boundary reaches the mass coordinate where the hydrogen abundance increases, convective mixing transports protons downwards into the hot interior (Fig. 3). The protons are at some point captured by $`{}_{}{}^{12}\text{C}`$ via the reaction $`{}_{}{}^{12}\text{C}(p,\gamma )^{13}\text{N}`$. The peak of the resulting luminosity due to hydrogen burning (see also Fig. 2) is located at the mass coordinate where the nuclear time scale equals the mixing time scale ($`\sim `$ one hour).
The profile of hydrogen in Fig. 3 and 4 demonstrates that a correct treatment of simultaneous burning and convective mixing is essential for this evolutionary phase. A treatment of convective mixing which does not include the simultaneous computation of the isotopic abundances according to the equations of the nuclear network would fail to predict a correct hydrogen profile. In particular, such a treatment would possibly let the protons travel too deep into the convective region, without considering that they would have been captured already on the way. Then, the energy generation rate due to proton captures may be overestimated and not correctly located.
The energy from proton captures is released in the upper part of the He-flash driven convection zone, which leads to a split (at $`m_\mathrm{r}=0.595\mathrm{M}_{}`$) of the convective region (Fig. 4). The two convective regions are then connected by the overlapping overshoot extensions, but Fig. 2 shows that the second convective zone is only short lived since the amount of hydrogen available in the envelope is quickly consumed.
Figure 4 shows that the hydrogen burning convection zone extends over $`10^{-2}\mathrm{M}_{\odot }`$ and reaches from $`m_\mathrm{r}=0.595\mathrm{M}_{\odot }`$ up to the surface of the stellar model. The surface hydrogen abundance declines rapidly due to mixing and proton captures in the deeper layers. The period of the largest hydrogen burning luminosity (shown in Fig. 4) of $`L_\mathrm{H}\sim 10^8\mathrm{L}_{\odot }`$ lasts for less than a week, and the whole episode of convective hydrogen burning is a matter of about a month. Overall, $`5\times 10^{-5}\mathrm{M}_{\odot }`$ of hydrogen are burnt. At peak hydrogen luminosity the hydrogen mass fraction at the surface is $`3.4\times 10^{-4}`$ and the total amount of hydrogen still present in the star is $`M_\mathrm{H}=7.8\times 10^{-6}\mathrm{M}_{\odot }`$. Thus, in this sequence the star is already hydrogen-deficient before it returns to the AGB domain in the HRD.
Figure 5 shows abundance profiles before and after the mixing and burning event due to the very late TP. While the star still shows the typical hydrogen-rich AGB abundance pattern before the convective region has reached into the envelope (top panel, Fig. 5), the mixing during the convective hydrogen burning leads to a hydrogen-free surface with \[He/C/O\]=\[0.38/0.36/0.22\] and a mass fraction of $`3.5\%`$ of neon. The step in the abundances of $`{}_{}{}^{4}\text{He}`$, $`{}_{}{}^{12}\text{C}`$ and $`{}_{}{}^{16}\text{O}`$ at $`0.596\mathrm{M}_{\odot }`$ (lower panel) corresponds to the split of the convective region due to hydrogen burning. While the hydrogen burning does not lead to significant abundance changes for the major isotopes (only $`5\times 10^{-5}\mathrm{M}_{\odot }`$ of hydrogen are processed), helium burning continues to process helium at the bottom of the He-flash convective zone. The final surface abundances are very similar to the intershell abundances during the thermal pulse.
After most of the hydrogen is burnt, the corresponding upper convection zone disappears when the local luminosity drops. It takes about one year until the He-flash convection zone has recovered to its original extent (Fig. 2). The star then follows the evolution as known from the born-again scenario (for a recent account on this scenario see Blöcker and Schönberner, 1997). Energetically, the return into the AGB domain is almost exclusively driven by the energy release due to helium burning, which exceeds the additional supply of energy from hydrogen burning by orders of magnitude.
## 5 Conclusions
Using a numerical method to treat nuclear burning and mixing simultaneously in stellar evolution calculations, which allows a reliable and robust modelling of very late thermal pulses, we have shown that the general surface abundance pattern observed in hydrogen-deficient post-AGB stars can be explained within the born-again scenario. Our new post-AGB sequence shows that due to the energy generation and convective mixing during a very late thermal pulse a born-again star forms which displays its previous intershell abundance at the surface.
We have based the calculation on an AGB model sequence computed with overshoot, which shows a high carbon and oxygen intershell abundance. Thus, the fact that the abundance pattern of our post-AGB model after the thermal pulse agrees with the observation of hydrogen-deficient post-AGB stars like PG 1159 and \[WC\]-CSPNe strongly supports the assumption of extra mixing beyond the convective boundary of the He-convection zone in AGB stars. We conclude that the very late thermal pulses can indeed be identified as one cause for the hydrogen-deficiency in post-AGB stars.
However, we note that not all H-deficient post-AGB stars are completely free of hydrogen Leuenhagen and Hamann (1998), as predicted by our model. Other possibilities than the born-again scenario to achieve H-deficiency might also exist Tylenda (1996); Waters et al. (1998). Whether post-AGB models which are not entirely hydrogen-free can be obtained within this scenario requires a study of the variation of the late thermal pulse with the inter-pulse phase at which the star leaves the AGB Iben (1984), and possibly the consideration of other mixing processes, e.g. due to rotational effects Langer et al. (1999), which has to be left to future investigations.
###### Acknowledgements.
We are grateful to W.-R. Hamann and L. Koesterke for many useful discussions. This work has been supported by the *Deutsche Forschungsgemeinschaft* through grant La 587/16.
# A Far-Infrared Survey of Molecular Cloud Cores
## 1 Introduction
The denser regions of the interstellar medium are usually known as molecular clouds, and stars form in the very highest density parts of these clouds, which are usually referred to as molecular cloud cores. The process by which material is turned from a molecular cloud core into a star is far from being fully understood, since many complex physical processes are involved (see, for example, Mouschovias 1991, for a review).
The key to understanding the process of star formation appears to lie in the earliest stages of molecular cloud core evolution, since the initial conditions determine much of what subsequently takes place. Many observational studies have been conducted of such dense cores (e.g. Myers & Benson 1983; Myers, Linke & Benson 1983; Clemens & Barvainis 1988; Benson & Myers 1989; Bourke, Hyland & Robinson 1995a), usually starting from an optically selected sample of dark clouds on sky survey plates. In this paper we present a sample of cloud cores selected on the basis of far-infrared optical depth using the IRAS Sky Survey Atlas (ISSA) images (Wheelock et al. 1994), with a view to broadening the range of the physical parameters of cores that have been explored, and hence gaining further insight into the evolution of molecular cloud cores.
The paper is laid out as follows: Section 2 describes the manner in which optical depth maps were constructed from the ISSA data, including technical details such as background subtraction, and goes on to discuss cloud core selection and methodology verification techniques; Section 3 describes the new molecular cloud core catalogue that we have constructed from the data and the associations with previously known molecular clouds and IRAS point sources; Section 4 contrasts the properties of the new catalogue with previous molecular cloud core catalogues and compares the ensemble of mean catalogue properties with the theoretical predictions of models of ionisation-regulated star formation; and Section 5 presents the conclusions of the paper. Readers who are more theoretically inclined may wish to read Sections 3 & 4 before going back to Section 2.
## 2 Observational details
### 2.1 Background subtraction
We chose to select the clouds using the IRAS Sky Survey Atlas (ISSA), which is an all-sky set of images at each of the four IRAS wavebands of 12, 25, 60 & 100 $`\mu `$m (Beichman et al. 1988; Wheelock et al. 1994). However, since we wished to select the coldest, densest clouds, we concentrated on the two longest wavelengths, 60 & 100 $`\mu `$m. We used these to construct optical depth and temperature maps of the regions of interest, in a similar manner to that adopted by Wood, Myers & Daugherty (1994). These latter authors selected previously known molecular cloud regions, whereas we endeavoured to construct a distinct sample of previously little-studied clouds drawn from the all-sky set of plates. But before optical depth and temperature maps can be made, it is necessary to ensure that all background emission has been removed from the images. This is because in making optical depth and temperature maps, one must take the ratio of emission at two different wavelengths (in this case 60 & 100 $`\mu `$m). Therefore, any offset at one or other wavelength due to background emission not associated with the object will affect the ratio measurement. The ISSA images have already been fully processed and corrected for most of the Zodiacal background emission, but they still contain extended emission from the Galactic Plane and some residual Zodiacal emission. These are the backgrounds we now address.
Pixel histograms of all the ISSA fields were constructed by binning the pixels in each field into histograms of surface intensity. On examination it was found that some fields’ pixel histograms contained a single peak, some contained double or multiple peaks and some had much more complicated structures. Each field was searched to identify whether the histogram had a peak with a width of less than 2 MJy sr<sup>-1</sup> (i.e. $`\sim `$ 10 $`\times `$ the calibration noise) and at low enough intensity to be consistent with an area of background. Any such peak was taken as evidence of an area of low level emission in the field which can be described as an area of background containing only low level cirrus. A number of the fields were inspected visually. It was found that the pixels in the low level peak of the histogram in each case came from a contiguous, discrete area of the sky, and were not randomly isolated pixels, confirming that this automatic method was indeed finding genuine areas of sky.
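A minimal sketch of this histogram test, with a synthetic field and an assumed bin width and width estimator purely for illustration, is:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic "field": a faint, flat background plus brighter extended emission
field = np.concatenate([rng.normal(3.0, 0.3, 40000),
                        rng.normal(12.0, 3.0, 10000)])

counts, edges = np.histogram(field, bins=np.arange(field.min(), field.max(), 0.25))
centres = 0.5 * (edges[:-1] + edges[1:])

# lowest-intensity significant local maximum of the pixel histogram
peaks = [i for i in range(1, len(counts) - 1)
         if counts[i] > 0.05 * counts.max()
         and counts[i] >= counts[i - 1] and counts[i] > counts[i + 1]]
i0 = peaks[0]

# crude full width at half maximum of that peak
above = counts >= 0.5 * counts[i0]
lo = i0
while lo > 0 and above[lo - 1]:
    lo -= 1
hi = i0
while hi < len(counts) - 1 and above[hi + 1]:
    hi += 1
width = centres[hi] - centres[lo]

print(f"background level ~ {centres[i0]:.2f} MJy/sr, peak width ~ {width:.2f} MJy/sr, "
      f"usable background region: {width < 2.0}")
```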
A similar method was previously carried out for optical images by Beard, MacGillivray & Thanisch (1990), who showed that histograms of pixels’ intensities contain valuable information about the background regions within the image. In relatively empty areas of the sky the pixel histogram takes the form of a single Gaussian peak whereas in regions densely populated with real sources several further peaks appear in the histogram at higher intensities.
The position of the peak was recorded and used as a measure of the background surface brightness in the field. 272 ISSA fields were identified as having suitable background regions by this technique at 100$`\mu `$m and 368 were found at 60$`\mu `$m from a total of 430 for each wavelength. The widths of the low intensity peaks in the pixel histograms were generally at least a factor of two or three larger than the average calibration errors for the field – as estimated from the standard deviation in pixel values from one Hours Confirmed scan (HCON) to the next – indicating the existence of real cirrus structure within the background regions.
To investigate the overall nature of the background, maps which interpolated from region to region of background were constructed. This was done by first recording the values of the pixel histogram low intensity peaks (in MJysr<sup>-1</sup>) and recording the central position of the field in which it was found. A simple interpolation technique was devised, somewhat akin to box-car averaging. For every square degree on the celestial sphere the distance to the centre of the nearest few ISSA fields containing a region of background was calculated, to allow averaging. This gives a series of angular displacements to the background regions ($`\theta _1,\theta _2,\theta _3,\theta _4,\mathrm{}.\theta _n`$), where:
$$\mathrm{cos}(\theta _\mathrm{i})=\mathrm{sin}(\mathrm{dec}_\mathrm{i})\mathrm{sin}(\mathrm{dec})+\mathrm{cos}(\mathrm{dec}_\mathrm{i})\mathrm{cos}(\mathrm{dec})\mathrm{cos}(\mathrm{ra}-\mathrm{ra}_\mathrm{i}).$$
Here the right ascensions and declinations of the positions to which we wish to interpolate are denoted ra and dec respectively, and the position of each background region is denoted $`\mathrm{ra}_\mathrm{i}`$ and $`\mathrm{dec}_\mathrm{i}`$. We then attributed to each position the value of surface brightness:
$$I=\frac{\sum ^{\theta _\mathrm{i}<\varphi }\mathrm{cos}\left(\frac{\pi \theta _\mathrm{i}}{\varphi }\right)I_\mathrm{i}}{\sum ^{\theta _\mathrm{i}<\varphi }\mathrm{cos}\left(\frac{\pi \theta _\mathrm{i}}{\varphi }\right)}.$$
This was chosen because it interpolates between positions smoothly, does not produce discontinuities and weights more heavily the nearest background regions, even given the non-uniform spatial sampling of background surface brightness. It essentially involves smoothing by a cosine bell of radius $`\varphi `$. The result of this process was then projected onto a 2d surface with equal area projection to create a map of background brightness for the whole sky.
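A short sketch of this interpolation is given below; it uses the standard angular-separation formula and the cosine weighting quoted above, while the field centres, background levels and the default smoothing radius of 12 degrees are invented or assumed for illustration.

```python
import numpy as np

def ang_sep(ra1, dec1, ra2, dec2):
    # standard angular separation on the sphere (all angles in radians)
    return np.arccos(np.clip(np.sin(dec1) * np.sin(dec2)
                             + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2), -1.0, 1.0))

def interp_background(ra, dec, field_ra, field_dec, field_I, phi=np.radians(12.0)):
    # cosine weighting of the background levels I_i of all fields closer than phi
    theta = ang_sep(ra, dec, field_ra, field_dec)
    use = theta < phi
    if not np.any(use):
        return np.nan
    w = np.cos(np.pi * theta[use] / phi)
    return np.sum(w * field_I[use]) / np.sum(w)

# invented background regions (RA, Dec in degrees; levels in MJy/sr)
fra = np.radians(np.array([10.0, 14.0, 40.0]))
fdec = np.radians(np.array([-5.0, 0.0, 8.0]))
fI = np.array([1.2, 0.8, 2.1])
print(interp_background(np.radians(12.0), np.radians(-2.0), fra, fdec, fI))
```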
Fig. 1 shows the map of the background constructed by this technique at 100$`\mu `$m for $`\varphi =`$ 12 degrees. The plot is an ‘equal area’ projection in Galactic coordinates. The intensity varies smoothly, and increases towards the Galactic Plane indicating that the cirrus intensity within background regions increases towards the Plane. The map contains negative values showing that the Zodiacal background subtraction used in producing the ISSA dataset was liable to over-compensate in some regions. The apparent gap through the centre of the map is due to our having to discard regions of high source confusion in the Galactic Plane, where our Galactic background subtraction did not work due to source confusion.
The residual ‘striping’ characteristic of images produced by IRAS was reduced to very low levels in the ISSA images, because of the careful calibration used to produce them. However, the act of ratio-ing two images tends to enhance any striping effects that remain. In some fields this striping was clearly visible, and in some cases dominated the structure of the optical depths maps. These fields were discarded. We selected 60 of the fields used in the interpolation for further study, based on their background regions being clearly identifiable and measurable, and on the quality of the images.
### 2.2 Colour temperatures
We used the STARLINK data reduction package IRAS90 and, in particular the routine COLTEMP, to create colour temperature and optical depth maps from the ISSA images. We briefly describe the technique here (for further details, see Berry 1993a & b).
The far infrared radiation from a cloud of temperature $`\mathrm{T}`$ emitting a black body spectrum, $`\mathrm{B}(\nu ,\mathrm{T})`$, and absorbing $`\mathrm{I}(\nu ,\mathrm{T})\mathrm{d}\tau _\nu `$, at a position in the cloud with an optical depth $`\tau _\nu `$, leads to a flux received by an observer $`\mathrm{f}(\nu ,\mathrm{T})`$ given by:
$$\mathrm{f}(\nu ,\mathrm{T})=\left(1-\mathrm{e}^{-\tau _\nu }\right)\mathrm{B}(\nu ,\mathrm{T}).$$
Generally $`\tau _\nu `$ is dependent on $`\nu `$ in such a way (Hildebrand 1983) that:
$$\tau (\nu )=\left(\frac{\nu }{\nu _\mathrm{c}}\right)^\beta ,$$
where $`\beta `$ is the dust emissivity index and $`\nu _c`$ is the critical frequency at which the optical depth is unity. Throughout this work we use the value suggested by Hildebrand (1983) of $`\beta =1`$ at far-infrared wavelengths. These two equations describe a ‘greybody’ spectrum. We assume that the cloud is optically thin at the wavelengths observed – Wood et al. (1994) show that even towards the Galactic Plane this is true. Because $`\tau _\nu \ll 1`$, $`1-\mathrm{e}^{-\tau _\nu }\approx \tau _\nu `$ and we therefore use the expression:
$$\mathrm{f}(\nu ,\mathrm{T})\approx \left(\frac{\nu }{\nu _\mathrm{c}}\right)\mathrm{B}(\nu ,\mathrm{T}).$$
The IRAS detectors were sensitive over a wide bandpass and there were 4 separate wavebands (i=1, 2, 3, 4 for 12, 25, 60 and 100 $`\mu `$m respectively) so that the measured flux in waveband i is:
$$\mathrm{f}_\mathrm{i}=\int \mathrm{R}_\mathrm{i}(\nu )\left(\frac{\nu }{\nu _\mathrm{c}}\right)\mathrm{B}(\nu ,\mathrm{T})d\nu $$
where $`\mathrm{R}_\mathrm{i}(\nu )`$ is the spectral response curve for the waveband i receiver (see Beichman et al. 1988).
By taking the ratio of intensities at two wavebands i and j, and using the preceding equation, one can derive
$$\frac{\mathrm{f}_\mathrm{i}}{\mathrm{f}_\mathrm{j}}=\frac{\int \mathrm{R}_\mathrm{i}(\nu )\nu \mathrm{B}(\nu ,\mathrm{T})d\nu }{\int \mathrm{R}_\mathrm{j}(\nu )\nu \mathrm{B}(\nu ,\mathrm{T})d\nu }.$$
This value is dependent on T, and the response curves of the receivers. Using the listed response curves in the IRAS Explanatory Supplement (Beichman et al. 1988), one can tabulate $`\mathrm{f}_\mathrm{i}/\mathrm{f}_\mathrm{j}`$ versus T. In the routine COLTEMP a spline giving T as a function of $`\mathrm{f}_\mathrm{i}/\mathrm{f}_\mathrm{j}`$ is created by fitting to the tabulated values. For any observations of a cloud at 2 wavelengths one can then estimate the temperature and calculate the critical frequency at which $`\tau _{\nu _\mathrm{c}}=1`$:
$$\nu _\mathrm{c}=\frac{\int \mathrm{R}_\mathrm{i}(\nu )\nu \mathrm{B}(\nu ,\mathrm{T})d\nu }{\mathrm{f}_\mathrm{i}}.$$
Using this, one can calculate the optical depth of the cloud at another wavelength by using the equation for $`\tau `$ above. We altered COLTEMP to allow temperatures as low as 10K to be used (the publicly available version only accepts temperatures greater than 30K). In this way, optical depth maps were made of our chosen regions at 100 $`\mu `$m.
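A stripped-down version of this procedure is sketched below. It treats the 60- and 100-$`\mu `$m bands as monochromatic instead of integrating over the true response curves $`\mathrm{R}_\mathrm{i}(\nu )`$, uses simple linear interpolation in place of the spline, and assumes the conversion 1 MJy sr<sup>-1</sup> = 10<sup>-20</sup> W m<sup>-2</sup> Hz<sup>-1</sup> sr<sup>-1</sup>; it is an approximation of COLTEMP, not the routine itself.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
nu60, nu100 = c / 60e-6, c / 100e-6

def planck(nu, T):
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (kB * T))

def ratio(T):                                   # f60/f100 for a beta = 1 greybody
    return (nu60 * planck(nu60, T)) / (nu100 * planck(nu100, T))

Tgrid = np.linspace(10.0, 100.0, 500)           # look-up table, inverted by interpolation
Rgrid = ratio(Tgrid)

def colour_temperature(I60, I100):
    return np.interp(I60 / I100, Rgrid, Tgrid)

def tau100(I60, I100):
    T = colour_temperature(I60, I100)
    return I100 * 1.0e-20 / planck(nu100, T)    # tau = f / B at 100 um for beta = 1

# example: surface brightnesses of 4 and 40 MJy/sr at 60 and 100 um
print(colour_temperature(4.0, 40.0), tau100(4.0, 40.0))
```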
### 2.3 Optical depth maps
The optical depth at 100$`\mu `$m due to dust along the line of sight is expressible (Hildebrand 1983) as:
$$\tau _{100}=\pi a^2Q_{100}N_\mathrm{g},$$
where $`N_\mathrm{g}`$ is the column density of grains, $`a`$ is the average radius of the grains and $`Q_{100}`$ is the emission efficiency of the dust grains at 100$`\mu `$m. The mass column density of the grains is then given by:
$$\sigma _\mathrm{d}=\frac{4}{3}\left(\frac{a\rho }{Q_{100}}\right)\tau _{100},$$
where $`\rho `$ is the average grain density. The total mass of the dust in a cloud of projected area A is therefore given by:
$$M_{\mathrm{dust}}=A\sigma _d,$$
where $`\sigma _d`$ is the mean column density of dust in the cloud. The typical dust-to-gas mass ratio in the local ISM is normally assumed to be approximately 1:100. However it is clear that at 100 $`\mu `$m a significant fraction of the cold dust in the ISM is not detected, due to temperature and optical depth effects (see e.g. Wood et al. 1994). Hence a simple application of this ratio to these data will underestimate the total mass of gas. Wood et al. (1994) argued that only 1/50 of the total dust is detected at IRAS wavelengths, and used a dust-to-gas ratio of 1:2000 in their analysis. They arrived at this value by using two relations: Firstly they adopted $`A_\mathrm{v}\approx 2\times 10^4\times \tau _{100}`$ (Langer et al. 1989); and secondly they used $`N_{\mathrm{H}_2}(\mathrm{cm}^{-2})\approx 10^{21}\times A_\mathrm{V}`$ (Bohlin, Savage & Drake 1978). We follow Wood et al. (1994) and use a value of 1/2000 in the current study. With a value of $`(a\rho /Q_{100})\approx 32`$ g cm<sup>-1</sup>, one obtains the expression for the mass of material in a cloud:
$$M_{cloud}/\mathrm{M}_{\odot }\approx 1.25\times 10^{-2}\times D^2(\mathrm{pc})\sum _{\mathrm{cloud}}\tau _{100},$$
where $`D`$ is the distance to the cloud, and the optical depth of each pixel in the cloud image is summed.
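For completeness, applying the mass relation above to an optical-depth map is only a few lines of code; the synthetic map, the isophotal cut defining the cloud and the distance in the sketch below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
tau_map = np.abs(rng.normal(2e-4, 5e-5, size=(100, 100)))   # toy 100-um optical-depth map

D_pc = 160.0                          # assumed distance, as adopted for L1689 below
cloud = tau_map > 1.5e-4              # simple isophotal cut defining the "cloud" pixels
mass = 1.25e-2 * D_pc**2 * tau_map[cloud].sum()
print(f"cloud mass ~ {mass:.0f} Msun over {cloud.sum()} pixels")
```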
### 2.4 Cloud selection
Of the resultant optical depth maps, some contained from one to three cloud complexes, varying in size from a few pixels to half a field, while several more were discarded, either due to having very little structure, being too noisy, or having major contamination from residual Zodiacal emission that none of the processing had been able to remove. Some of the clouds were found at the edges of the fields and hence were truncated. ¿From the remaining maps, 17 clouds were selected for further study. The clouds we selected are listed in Table 1: Column 1 lists the name we assigned to each cloud, derived from the ISSA field number in which it was found; Columns 2 & 3 give the approximate position of the centre of the cloud; and the remainder of the table assigns distances, velocities and associations to the clouds we have selected.
It was found that 3 of the clouds were previously identified by Lynds as dark clouds (Lynds 1962), 2 contained Lynds bright nebulae (Lynds 1965), 7 had been identified by other authors (Taylor et al. 1987; Ramesh 1994), who had subsequently measured their CO velocities, and 3 had nearby open cluster associations. Of the 3 open clusters, two have been dated and were found to be old: NGC 7142 is thought to be 4 billion years old; Melotte 66 is 6 billion years old; in both cases implying that they were probably not linked with the cloud. In addition, comparison with the CO Galactic plane surveys (Dame et al. 1987) revealed that several clouds were associated with known cloud complexes. Only two had no previously published associations. Column 4 of Table 1 lists the known cloud associations.
Unlike molecular maps and surveys, which give the velocity of the clouds and hence give an estimate of the distances, these ISSA selected clouds – like the optically selected clouds of Lynds (1962) – do not have easily derivable distances. Distances were estimated either from velocity and spatial association with the Orion, Cepheus, Chameleon and Ophiuchus complexes, or by estimating an upper limit for distance obtained by assuming the clouds lie in the Galactic Disc – i.e. less than 60 pc away from the Galactic Plane (c.f. Clemens, Sanders & Scoville 1988). Clouds found in Cepheus present a particular problem when one attempts to assign a distance. There are two different complexes along the line of sight: one at approximately 300 pc with a velocity $`\sim `$ 0 kms<sup>-1</sup>; and one in the local spiral arm at approximately 800 pc and with a velocity of $`\sim `$ $`-`$12 kms<sup>-1</sup> (Grenier et al. 1989). Some of the Cepheus clouds we selected had been sampled with CO observations (Taylor et al. 1987), and the measured velocities revealed the clouds belonged to one or other complex. This revealed that the sample presented here contains clouds from both complexes and hence that the three Cepheus clouds in the sample without CO observations could be at either of the two distances. In total, 11 of the 16 clouds had a single distance assigned to them, 2 had upper limits, and 3 could be at either of 2 distances. Column 5 of Table 1 gives the velocity of the cloud (if previously measured), and column 6 gives the estimated distance.
### 2.5 The case of L1689
As a cross-check of both the procedure used and the mass estimates derived, we made a map of a previously studied cloud, L1689. Colour temperature and optical depth maps were constructed and the results are presented in Figure 2(a) as a 100-$`\mu `$m optical depth map of the region. An extended region of high 100-$`\mu `$m optical depth containing some structure is seen. Figure 2(b) shows a <sup>13</sup>CO map of the same region taken from Loren (1989). A similar morphology is seen in both maps. The two star-forming cores R57 (alias L1689N), and R59 (alias L1689S) are clearly visible, as is the isolated core R65 (alias L1689B).
At the centre of the cloud there is an apparent ‘hole’ in the optical depth map at the position of the bright point source, IRAS16288, which is a young protostar at the centre of L1689S. Wood et al. (1994) also noted that a bright, point-like source dominating the IRAS emission can cause an apparent hole in the 100-$`\mu `$m optical depth. They ascribed this to a beam filling factor effect. The right hand side of the above equation for $`\tau _{(100)}`$ also contains a term for the solid angle of the source, $`\mathrm{\Omega }`$, which has been set equal to 1 in our analysis. For an extended molecular cloud source this is a valid assumption, but it fails when there is a bright point source in the beam. This is the effect we see in the case of IRAS16288.
We calculated the mass of L1689, using the above equation, from the 100-$`\mu `$m optical depth map in Figure 2(a), and found a value of 448 M (assuming a distance of 160 pc). Loren (1989) used the <sup>13</sup>CO data shown in Figure 2(b), and obtained a value of 566 M. These two measurements are consistent to within $`\pm `$20 per cent, which is as accurate as either method can claim, and so we conclude that the technique outlined not only gives accurate qualitative information on the morphology of interstellar clouds, but also appears to provide relatively good quantitative estimates of cloud masses. Nonetheless, we realise that masses derived from 100-$`\mu `$m optical depths might be underestimated in some cases, if the 100-$`\mu `$m emission is completely optically thick. In none of the cloud cores that we studied did this appear to be the case.
## 3 The core catalogue
### 3.1 Core properties
Figures 3 & 4 show grey-scale images, with isophotal contours overlaid, of nine of our molecular cloud regions. They can be seen to vary in size from 1 to 5 degrees across, and also to vary in complexity and structure. Some residual striping can be seen in two of the fields. However the typical structures associated with molecular clouds can be seen in all of the images – namely cores, filaments and other structures on all scales within the maps. Some of the structure seen in the maps corresponds to previously known molecular clouds, but some of the clouds, and many of the cores, had not been previously catalogued. For example, Figure 3(d), 420B, and Figure 4(d), 423A, both contain a Lynds dark nebula and an open cluster. 423A also contains an HII region and two further catalogued clouds. Some of the regions contain no previous associations – especially those in the southern hemisphere. For a full list of known associations see Table 1 and section 2.4 above.
A catalogue of the most opaque regions in each cloud was produced and the maps were examined. The definition of what constitutes a core in a molecular cloud is of necessity somewhat subjective. We chose to define a core as the most opaque 1% of a cloud’s area. This level was chosen to ensure that the area defined as a core was relatively small compared to the cloud, and hence would only include the most dense regions where we would expect star formation to take place. We then calculated the mass of each core.
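A minimal sketch of how such a definition could be applied to a gridded optical-depth map is shown below. The function name, the use of NumPy, and the treatment of blank pixels are illustrative assumptions of the sketch, not a description of the actual reduction software:

```python
import numpy as np

def core_mask(tau_map, core_fraction=0.01):
    """Select the most opaque `core_fraction` of a cloud's area.

    `tau_map` is a 2-D array of 100-micron optical depth, with NaN marking
    pixels outside the cloud. The threshold is the (1 - core_fraction)
    quantile of the valid pixels, so the returned boolean mask covers
    roughly the most opaque 1 per cent of the cloud by area.
    """
    valid = np.isfinite(tau_map)
    threshold = np.nanquantile(tau_map[valid], 1.0 - core_fraction)
    return valid & (tau_map >= threshold)

# Illustrative usage on a synthetic 100x100 map:
rng = np.random.default_rng(0)
tau = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=(100, 100))
mask = core_mask(tau)
print(mask.sum(), "pixels selected out of", tau.size)  # ~1 per cent of the pixels
```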
Table 2 lists the properties of our new core catalogue. Column 1 lists the new number we have designated for each core in order of Right Ascension. Column 2 lists the cloud in which it is located, following the numbering convention of Table 1. Columns 3 & 4 list the position of the centre of each core. Column 5 lists the derived colour temperature of each core. Column 6 lists the solid angle of each core in square arcmin. Column 7 lists the mean optical depth, and column 8 lists the peak optical depth of the core. Column 9 gives the ellipticity, column 10 lists the mean radius, column 11 lists the position angle (north through east) of the major axis and column 12 lists the mass of each core.
The median 100-$`\mu `$m optical depth of our 60 cores is 1.9 $`\times `$ 10<sup>-4</sup>, corresponding to a column density of N(H<sub>2</sub>) = 3.8 $`\times `$ 10<sup>21</sup> cm<sup>-2</sup>. The median radius of the cores for which a distance is known is 0.31 pc, with a mean radius of 0.41 pc. The mean is skewed by a small number of large cores, so we prefer to use the median to characterise our sample. The median volume density of the sample is $`\sim `$2 $`\times `$ 10<sup>3</sup> cm<sup>-3</sup> (the mean volume density is very similar). Hence we see that, by selecting our core sample based on a wavelength of 100 $`\mu `$m, we have typically selected somewhat lower density cores than many previous surveys of molecular cloud cores (see below). The difference is probably mainly due to our selecting relatively isolated clouds as a result of our constraints on background emission.
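The quoted figures imply a conversion of roughly N(H<sub>2</sub>) $`\sim `$ 2 $`\times `$ 10<sup>25</sup> $`\tau _{100}`$ cm<sup>-2</sup>, and the median volume density follows if the column is spread over a path length of about one core diameter. A rough sketch of that arithmetic is given below; the 2R path-length assumption is ours, made only to reproduce the order of magnitude of the quoted median density:

```python
PC_CM = 3.086e18  # centimetres per parsec

def column_density(tau_100):
    """N(H2) in cm^-2, using the conversion implied by the quoted medians
    (3.8e21 cm^-2 for tau_100 = 1.9e-4), i.e. N ~ 2e25 * tau_100."""
    return 2.0e25 * tau_100

def volume_density(n_col, radius_pc):
    """Mean n(H2) in cm^-3, assuming the column is spread over a line of
    sight of one core diameter (2R); an order-of-magnitude estimate only."""
    return n_col / (2.0 * radius_pc * PC_CM)

N = column_density(1.9e-4)             # ~3.8e21 cm^-2, the median of the sample
n = volume_density(N, radius_pc=0.31)  # ~2e3 cm^-3 for the median core radius
print(f"N(H2) ~ {N:.1e} cm^-2,  n(H2) ~ {n:.0e} cm^-3")
```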
### 3.2 PSC associations
All of the IRAS point sources within the boundaries of each core were located using the Point Source Catalogue (PSC), and are listed in Table 3. The core in which the source is found is noted along with the source flux densities at each IRAS waveband. A superscript in the last column indicates that the IRAS PSC listed a known association for the source: (1) indicates that the source appears in the Smithsonian Astrophysical Observatory Star Catalogue (SAO); (2) represents the catalogue of Ohio State University Radio Sources; (3) is the IRAS Serendipitous Survey Catalogue; (4) is the Dearborn Observatory Catalog of Faint Red Stars; and (5) is the IRAS Small Scale Structure Catalog (Beichman et al. 1988 and references therein).
There are 21 source associations, of which: 11 are detected only at 100 $`\mu `$m; two are detected only at 60 $`\mu `$m; one is detected at 60 & 100 $`\mu `$m, but at no shorter wavelengths; five are detected only at 12 $`\mu `$m; one source is detected at 12 & 25 $`\mu `$m, but not at longer wavelengths; and one source is detected at all four wavebands. The eleven 100-$`\mu `$m-only sources have upper limits at the shorter wavelengths such that we can say that their spectral energy distributions peak at a wavelength of around 100 $`\mu `$m or longer. The one source detected at all four wavebands and the source detected at 60 & 100 $`\mu `$m only, also have rising spectra to longer wavelengths. The two 60-$`\mu `$m-only sources are also consistent with having rising spectra. This is the form of spectral energy distribution we would expect for deeply embedded young stellar objects (YSOs) or protostars. The six sources detected only at the shortest wavelengths have spectra consistent with field stars or other objects. Hence there appear to be 15 PSC sources associated with our 60 cores.
However, some of the PSC associations may still be chance alignments, and in addition the 100-$`\mu `$m-only sources could be cirrus associated with the clouds rather than embedded YSOs. So two ‘control samples’ were produced by offsetting the core positions by first 2 degrees and then 5 degrees in declination, and selecting the IRAS point sources associated with the new positions. In effect this produces two false populations of cores, one of which is still located in the clouds (the 2-degree offset population), and one of which is located outside of the clouds. This technique was chosen because it was thought that it would introduce the least bias in the estimate of the numbers of associations that were purely chance alignments.
In the control sample produced from a 2-degree displacement we found ten sources. Four were detected only at 100 $`\mu `$m, and five sources were detected at 12 and 25 $`\mu `$m – all of which were brighter at 12 $`\mu `$m than at 25 $`\mu `$m. These are thus thought to be main sequence field stars – one had been positively identified as a B star. The one remaining source, which was detected at all wavelengths, was previously catalogued as a galaxy in the Uppsala General Catalogue of Galaxies. In the sample produced at a displacement of 5 degrees there were also ten sources, four of which were 100-$`\mu `$m-only sources. Six of the sources were detected at both 12 and 25 $`\mu `$m and were brighter at 12 $`\mu `$m than 25 $`\mu `$m, and again are probably field stars.
Hence we find no evidence for a significant difference between the two control samples, either due to the increased displacement, or to the location being inside or outside the clouds. Likewise, we see that the number of foreground stars remains roughly constant both in the control samples and the real sample. Of the remaining 15 PSC associations in the real sample, we conclude that six are probably chance alignments. Of the remainder, up to half may be 100-$`\mu `$m-only cirrus sources – i.e. the point source might be the cloud core itself (see, for example: Benson & Myers 1989; Reach, Heiles & Koo 1993; Bourke et al. 1995b). Hence we estimate that the number of PSC sources, which are embedded YSOs and are associated with our sample of 60 cores, is five.
## 4 Comparison with previous core catalogues
### 4.1 Densities
We can compare our new catalogue of molecular cloud cores with catalogues produced by previous authors using different methods. For example, Myers et al. (1983) surveyed 90 cores in C<sup>18</sup>O and <sup>13</sup>CO. Using these observations they showed that the cores’ C<sup>18</sup>O optical depths, as estimated from the ratio of C<sup>18</sup>O brightness to <sup>13</sup>CO brightness, were reasonably tightly distributed around a mean of 0.35 to 0.45. They found that the C<sup>18</sup>O column density, $`N_{18}`$, had a mean of $`\sim `$ 1.6 $`\times `$ 10<sup>15</sup> cm<sup>-2</sup>. This led them to estimate a typical $`N(H_2)`$ column density of $`\sim `$ 10<sup>22</sup> cm<sup>-2</sup>.
Clemens & Barvainis (1988) also carried out a survey of molecular cloud cores, and the IRAS images of these cores were studied by Clemens, Yun & Heyer (1991). They found a typical 100-$`\mu `$m optical depth for their sample of 2.5 $`\times `$ 10<sup>-4</sup>. From this we can estimate a column density of N(H<sub>2</sub>) $`\sim `$ 5.0 $`\times `$ 10<sup>21</sup> cm<sup>-2</sup>, using the above equations. Wood et al. (1994) carried out a survey of the ISSA data for some known star-forming regions, and produced a catalogue of molecular cloud cores. They found that their cores had a mean column density N(H<sub>2</sub>) $`\sim `$ 4.5 $`\times `$ 10<sup>21</sup> cm<sup>-2</sup>.
The mean volume density of the cores in each of these surveys can also be estimated. Myers et al. (1983) found a typical volume density in their cores of $`n(H_2)\sim 8\times 10^3`$ cm<sup>-3</sup>. Wood et al. (1994) did not quote a typical volume density, but a value can be estimated from their column density if a typical radius for the cores is known. The choice of a representative radius is complicated by the fact that they defined cores to be areas with visual extinction A<sub>v</sub> $`>`$ 4. This leads to the inclusion of several very large ‘cores’ which we would define as ‘clouds’ (their largest ‘core’ is 329 pc<sup>2</sup>). This skews their mean to a value much larger than the median. We therefore take the second quartile boundary of radius, which is $`\sim `$ 0.5 pc in their sample. This leads to an estimate for the typical number density of 3 $`\times `$ 10<sup>3</sup> cm<sup>-3</sup>.
The typical number densities of the Clemens & Barvainis (1988) cores can also be estimated if the typical size of the cores is known. Clemens et al. (1991) claim that the typical radius of the cores is 0.35 pc. This was calculated from the mean solid angle of the cores and an assumed distance of 600 pc. However, this distance estimate is somewhat uncertain. It was derived originally by Clemens & Barvainis (1988) from two considerations: firstly that the cores were generally within 12 degrees of the Galactic Plane; and secondly that the cores had LSR velocities of between 0 and 10 kms<sup>-1</sup>.
However, Bourke et al. (1995a) argued that, because these cores are seen in extinction against the background stars of the Galactic Plane, the survey is biased towards detecting cores near the Plane. Hence they derive an estimate of the typical distance to the cores of 300 pc. Using this value a typical radius of 0.175 pc is derived, and hence a volume density of $`n(H_2)`$ $`\sim `$ 4.8 $`\times `$ 10<sup>3</sup> cm<sup>-3</sup> is calculated for this sample. This is significantly higher than the value quoted by Clemens et al. (1991), mainly due to the different distance assumed, but also partly because we have followed Wood et al. (1994) in calculating column density from 100-$`\mu `$m optical depth. Lemme et al. (1996) studied a subset of the Clemens & Barvainis (1988) cores and reached a similar conclusion: namely that Clemens et al. (1991) may have underestimated the typical density of the cores in the Clemens & Barvainis (1988) sample.
Bourke et al. (1995b) presented data for two samples of molecular cloud cores: one sample consisted of isolated Bok globules, and the other was a set of cores in more extended clouds. These had mean column densities of 5 $`\times `$ 10<sup>21</sup> cm<sup>-2</sup> and 1.6 $`\times `$ 10<sup>22</sup> cm<sup>-2</sup> respectively, and volume densities of 10<sup>4</sup> cm<sup>-3</sup> and 3 $`\times `$ 10<sup>4</sup> cm<sup>-3</sup> respectively. Hence, it can be seen that each of these samples has selected cores with somewhat different properties. We here label the Bourke et al. (1995b) extended sample Bourke(1), and the Bok globule sample Bourke(2). All of the column and volume density estimates have associated errors from a number of causes. These include chiefly the assumed fractional abundance of the different tracers used in each set of observations. We believe that the values quoted are accurate relative to one another to within 20 per cent. Table 4 summarises the mean properties of each of the core samples we have discussed.
### 4.2 Protostellar content and pre-stellar life-time
A useful parameter in the study of dense cores is the fraction of cores in a given sample that contain protostars or YSOs. This parameter can be used to estimate a mean statistical life-time for the sample. This method was first used by Beichman et al. (1986), who studied the embedded YSOs within the core sample of Myers et al. (1983). They found that 35 cores had IRAS sources meeting the colour selection criteria of embedded YSOs and 43 had no embedded IRAS sources.
This method was also used by Wood et al. (1994) with slightly different selection criteria to discard foreground main sequence stars. They found that 59 out of the 255 cores in their sample had at least one embedded source. We carried out the same test for the cores in the Clemens & Barvainis (1988) sample, and found that 65 cores out of 248 have embedded IRAS sources (using the Beichman et al. selection criteria). Bourke et al. (1995b) found that 27 out of their 76 Bok globules had IRAS point sources satisfying the Beichman criteria, while 36 out of 59 of their cores in extended clouds had embedded sources.
Beichman et al. (1986) used the percentage of cores with embedded sources to estimate the lifetime of a core without an embedded YSO by comparing it with the life-time of the embedded YSO phase. They found the life-time of a starless core in this way to be about $`10^6`$ years. This was based on assumptions relating to the time taken for a star to accrete its total mass, and the time for it to become visible. The uncertainty in this figure is probably about a factor of 2, but this does not affect the following statistical arguments; it would simply move the absolute time-scale as a whole by a factor of 2 (i.e. the absolute values of the vertical axes in Figures 5 and 6 can move up or down by a factor of 2, but the relative positions of the data-points do not move – see below).
Following this method, we here infer a statistical estimate of the pre-stellar lifetime, $`\tau `$, of the cores studied in each of the samples discussed above. We also make the assumption that in each survey the cores without embedded sources will go on to form protostars in their centres. The cores with embedded sources are assumed to have an average lifetime which is the same in each sample and equal to the embedded YSO time-scale derived by Beichman et al. (1986). This is simply expressible as:
$$\tau =\frac{\text{No. of cores without embedded sources}}{\text{No. of cores with embedded sources}}\times 10^6\text{years},$$
where we have taken the lifetime of cores with embedded sources to be $`10^6`$ years as discussed above (c.f. Ward-Thompson et al. 1994). $`\tau `$ was calculated for each core sample discussed in section 4.1, and is listed in Table 4.
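As an illustration, the statistical lifetimes implied by this expression follow directly from the counts of cores with and without embedded sources quoted in sections 3.2 and 4.2. The short sketch below simply evaluates the ratio for each sample; the values listed in Table 4 carry the error treatment discussed in the next paragraph and so may differ slightly:

```python
T_EMBEDDED = 1.0e6  # assumed lifetime (yr) of a core with an embedded source

def prestellar_lifetime(n_total, n_with_src):
    """Statistical pre-stellar lifetime: (starless cores / cores with sources) x 1 Myr."""
    return (n_total - n_with_src) / n_with_src * T_EMBEDDED

samples = {
    "Beichman et al. (1986)":      (35 + 43, 35),  # 35 cores with sources, 43 without
    "Wood et al. (1994)":          (255, 59),
    "Clemens & Barvainis (1988)":  (248, 65),
    "Bourke et al. (1), extended": (59, 36),
    "Bourke et al. (2), globules": (76, 27),
    "this paper":                  (60, 5),
}

for name, (n_tot, n_src) in samples.items():
    print(f"{name:30s} tau ~ {prestellar_lifetime(n_tot, n_src) / 1e6:.1f} Myr")
```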
The fraction of cores with embedded sources has random errors, and in the catalogues where the number of cores with embedded sources is large, the 1 $`\sigma `$ error is simply the square root of the number of cores with embedded sources. The sample presented in this paper has a low number of cores with embedded sources, and therefore the uncertainty in its pre-stellar lifetime is dominated by the systematic effects discussed in section 3.2 above. These errors are quoted in Table 4.
It is apparent from Table 4 that the percentage of cores with embedded sources increases with both the mean column density and volume density of the cores. Hence the pre-stellar core lifetime decreases with both column and volume densities. The results are plotted in Figs. 5 & 6.
Figure 5 shows pre-stellar lifetime versus column density. Each of the points represents one of the data sets listed in Table 4 (B1 signifies Bourke et al.(1) etc.). We fitted a relation to the data of the form $`\tau \propto \mathrm{exp}(10^{22}/N)`$. This is shown as a solid line. Figure 6 shows pre-stellar lifetime versus volume density (using the same labelling as in Figure 5). Power law fits to the data points were carried out, and the best-fit was found to be $`\tau \propto n(H_2)^{-0.85\pm 0.15}`$, which is shown as a solid line on Figure 6.
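A minimal sketch of the volume-density fit is shown below, using an unweighted least-squares fit in log-log space. The density values are the approximate sample means discussed in section 4.1 and the lifetimes are those implied by the raw counts above; the published slope of $`-0.85\pm 0.15`$ comes from the full fit with the error treatment described earlier, so this sketch should only be expected to land in the same range:

```python
import numpy as np

def fit_power_law(n_h2, tau):
    """Fit tau = A * n^alpha by unweighted least squares in log-log space."""
    x = np.log10(np.asarray(n_h2, dtype=float))
    y = np.log10(np.asarray(tau, dtype=float))
    alpha, log_a = np.polyfit(x, y, 1)
    return alpha, 10.0 ** log_a

# Approximate mean densities (cm^-3) and lifetimes (yr) for: this paper, Wood,
# Clemens & Barvainis, Myers/Beichman, Bourke globules, Bourke extended cores.
n_vals = [2e3, 3e3, 4.8e3, 8e3, 1e4, 3e4]
tau_vals = [11e6, 3.3e6, 2.8e6, 1.2e6, 1.8e6, 0.6e6]

alpha, _ = fit_power_law(n_vals, tau_vals)
print(f"best-fit slope alpha ~ {alpha:.2f}")  # negative, of order -1
```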
Figure 7 plots column density against volume density for each of the core data sets in Table 4, using the same labelling once again. The solid line is a best fit to the data, which is $`N(H_2)\propto n(H_2)^{0.55\pm 0.13}`$. This is of a similar form to the empirical observations usually referred to as Larson’s relations (Larson 1981), although with a somewhat different slope. If we make the assumption that each catalogue is a representative subset of all star-forming clouds, and the different membership of each subset is simply due to the characteristic density sensitivity of the different tracers used in each study, then we can compare these empirical results with theoretical predictions.
### 4.3 Comparison with theory
We can compare the dependence of prestellar time-scale on volume density seen here, with the ambipolar diffusion time-scale. However, it must be noted that the densities plotted in Figure 6 are the mean densities for each core sample, volume-averaged over the whole core in each case. They are not the central densities of each core. Hence care must be taken when comparing Figure 6 with ambipolar diffusion models, which usually plot central density versus time-scale.
The ambipolar diffusion time-scale is proportional to the ionisation fraction, $`\chi _i`$. The ionisation of the gas can be caused both by ultraviolet radiation and by cosmic rays. The canonical form for the relation between cosmic ray ionisation and volume density is usually taken to be a power-law. For example, Mouschovias (1991) uses:
$$\chi _i\propto n(H_2)^{-0.5},$$
which leads to the derivation of the ambipolar diffusion time-scale relative to density of:
$$\tau _{AD}\propto n(H_2)^{-0.5}.$$
This behavior is plotted as a dotted line in Fig. 6. When additional factors are included (such as chemistry, multiple charge carriers etc.), the volume density has a slightly different influence on recombination rates, and hence ionisation levels. For example, McKee (1989) suggests:
$$\chi _i\propto n(H_2)^{-0.75},$$
leading to:
$$\tau _{AD}\propto n(H_2)^{-0.75}.$$
This is plotted as a dashed line on Figure 6. It can be seen that the steeper relation is a closer match to the data, and is in fact consistent to within 1 $`\sigma `$.
The fact that the lowest density point on Figure 6 lies above the best-fit line suggests that at low densities an additional effect may be starting to become significant. This is somewhat tentative, but does have a possible theoretical explanation. For example, McKee (1989) treats ionization and recombination in some detail, and incorporates the role of metals, and UV penetration. He derives an expression for the star formation time-scale which is dependent on both volume density and column density (see his equation 4.5, and his fig. 1). He also shows that UV ionisation affects the star formation timescale, yielding an exponential dependence on column density (McKee equation 5.2) of the form:
$$\tau _{SF}\propto e^{(N_c/N)},$$
where $`\tau _{SF}`$ is the star formation timescale and $`N_c`$ is the critical column density. $`\tau _{SF}`$ is not the same as the timescale we have plotted in Figure 5, since McKee was referring to the timescale in which an entire molecular cloud is converted to stars and we are considering the timescale for the dense cores within clouds to form stars. Hence his timescales are roughly an order of magnitude greater than ours. However, we may be observing a similar exponential behaviour. The constant in the exponent, $`N_c`$, was derived by McKee to be 1.6 $`\times `$ 10<sup>22</sup> cm<sup>-2</sup> (deduced from his critical extinction estimate of A<sub>V</sub>=16), which is consistent, to within the errors, with the value of 1.0 $`\times `$ 10<sup>22</sup> cm<sup>-2</sup> that we derive in Figure 5.
The three parameters of column density, volume density and lifetime are all inter-dependent. Hence Figures 5 & 6 are not independent. Therefore the empirical fits which we have applied to these plots are not strictly separable in the manner we have adopted. However, we used this approach for the sake of clarity and of demonstrating the potential underlying physical processes. From these data we cannot make definitive statements about ionisation mechanisms, but we have shown that the data, not only from our current survey, but also from those earlier surveys that we have discussed above, are all consistent with a picture in which the ionisation levels in molecular cloud cores regulate the star formation timescale.
## 5 Conclusions
We have presented a catalogue of 60 cores situated in medium opacity molecular clouds, with the aim of broadening the range of physical environments in which star formation has been studied. The catalogued cores typically have lower column densities and volume densities than previously studied samples and a lower fraction of the cores have formed stars. We found a clear trend for cloud cores to form stars more rapidly with increasing volume and column density. We hypothesised that this can be interpreted in the framework of ionisation-regulated star formation.
## Acknowledgments
NEJ acknowledges PPARC for studentship funding whilst at the University of Edinburgh. IRAS was operated by the US National Aeronautics and Space Administration. Data handling facilities for the UK were provided by the Rutherford Appleton Laboratory. The ISSA images were produced by the Infrared Processing and Analysis Centre (IPAC) at the Jet Propulsion Laboratory (JPL), California. The authors would also like to thank STARLINK, and particularly David Berry, for provision of the COLTEMP routine and assistance in modifying this routine to produce the core catalogue.
# A Cat-State Benchmark on a Seven Bit Quantum Computer
We propose and experimentally realize an algorithmic benchmark that demonstrates coherent control with a sequence of quantum operations that first generates and then decodes the cat state $`(|000\dots \rangle +|111\dots \rangle )/\sqrt{2}`$ to the standard initial state $`|000\dots \rangle `$. This is the first high fidelity experimental quantum algorithm on the currently largest physical quantum register, which has seven quantum bits (qubits) provided by the nuclei of crotonic acid. The experiment has the additional benefit of verifying a seven coherence in a generic system of coupled spins. Our implementation combines numerous nuclear magnetic resonance (NMR) techniques in one experiment and introduces practical methods for translating quantum networks to control operations. The experimental procedure can be used as a reliable and efficient method for creating a standard pseudo-pure state, the first step for implementing traditional quantum algorithms in liquid state NMR. The benchmark and the techniques can be adapted for use on other proposed quantum devices.
Quantum information processing (QIP) offers significant advantages over classical information processing, both for efficient algorithms and for secure communication . As a result it is important to establish that sufficient and scalable control of a large number of qubits can be achieved in practice. There are a rapidly growing number of proposed device technologies for QIP, and to compare them it is necessary to establish benchmark experiments that are independent of the underlying physical system. A good benchmark for QIP should demonstrate the ability to reliably and coherently control a reasonable number of qubits. This requires that elementary operations can be implemented with small error regardless of the state of the qubits, as sufficiently small error is one of the most important prerequisites for robust QIP . The cat-state benchmark proposed here is perhaps the simplest demonstration of control which can be implemented for any number of qubits and involves coherence in a non-trivial way.
To explain and realize the cat-state benchmark we use the example of NMR based QIP. At least two proposals for quantum devices are based on using nuclear spins controlled by radio frequency (RF) fields: The first involves the use of molecules forming an ensemble of quantum registers and the second uses nuclei embedded in a semiconductor . Of these proposals, the first is presently accessible to experimental investigation by the use of off-the-shelf equipment for liquid state NMR. By means of the technique of preparing pseudo-pure states, it is possible to benchmark quantum algorithms involving up to about ten qubits to determine how well coherence is preserved and to measure how reliable the available control methods are. There have been numerous experiments implementing various quantum algorithms on up to five qubits using NMR. The experiment reported here coherently implements a quantum algorithm on seven qubits with a verifiable fidelity. It also introduces a reliable method for preparing pseudo-pure states and for verifying maximal coherences in generic spin systems.
NMR QIP uses spin $`\frac{1}{2}`$ nuclei as qubits. Examples are protons and carbon $`13`$ bound in a molecule. QIP requires the ability to couple different qubits. In molecules in a liquid at high magnetic field, scalar couplings can be used for this purpose and controlled with refocusing methods . Thus, each molecule can be considered as a quantum register consisting of (some of) its spin $`\frac{1}{2}`$ nuclei. The initial state is prepared by allowing enough time for thermal relaxation and readout is performed by an ensemble measurement using standard NMR methods. We use deviation density matrices for describing the state of the nuclei. To simplify the discussion, we use a three qubit example of the cat-state benchmark. The thermal equilibrium state of a molecule with one proton and two carbon $`13`$ nuclei at high field in a liquid is given by $`\sigma _z^{(H)}`$ $`+`$ $`.25\sigma _z^{(C_1)}`$ $`+`$ $`.25\sigma _z^{(C_2)}`$ with high accuracy, up to an overall scale factor and a multiple of the identity. The standard Pauli matrices are used as an operator basis, and superscripts on operators refer to the particle the operator acts on. The cat-state benchmark for this system begins by eliminating signal from the carbons to obtain the initial state $`\sigma _z^{(H)}`$. Next a sequence of quantum gates is used to achieve the state $`\sigma _y^{(H)}\sigma _y^{(C_1)}\sigma _x^{(C_2)}`$ (Fig.1). This state is a sum of several coherences. In particular, it contains the three coherence $`|000\rangle \langle 111|+|111\rangle \langle 000|`$, which is the deviation of the operator for the cat state $`(|000\rangle +|111\rangle )/\sqrt{2}`$. If each qubit is rotated by a phase $`\varphi `$ around the $`z`$-axis, the three coherence rotates by $`3\varphi `$, while all other components of this (or any other) state will rotate by $`0`$, $`\varphi `$ or $`2\varphi `$. This feature can be used to label the three coherence and eliminate all other components of the state, for example by using a magnetic field gradient. An efficient alternative using z-pulses or phase cycling is given below. The three coherence can be decoded to the state $`\sigma _x^{(H)}|00\rangle \langle 00|`$ (Fig.1). This is then observed after inverting the labeling gradient at three times the original strength. In a fully resolved reference spectrum obtained by applying a $`90`$deg rotation to the proton in the initial state, the proton shows four peaks, one for each of the states $`|00\rangle \langle 00|`$, $`|01\rangle \langle 01|`$, $`|10\rangle \langle 10|`$ and $`|11\rangle \langle 11|`$ of the carbons. After decoding the cat state, only a single peak should be left in the spectrum. The ratio $`F`$ of the intensity of this peak to the intensity of the corresponding peak in the reference spectrum is unity if everything works perfectly. $`F`$ is reduced by errors in the preparation and decoding steps. Under the assumption that error in the phase labeling method is negligible, it can be shown that $`F`$ is a lower bound on the average of the fidelities with which the decoding procedure maps the states $`|000\rangle \pm |111\rangle `$ to the states $`(|0\rangle \pm |1\rangle )|00\rangle `$.
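The gate networks of Fig.1 are not reproduced here, but the unitary part of the benchmark can be sketched for pure states with the textbook Hadamard-plus-CNOT cascade; the few lines below are only an illustration of the algebra (the actual experiment works with deviation density matrices and pulse-level operations, and its network need not be exactly this one):

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode: Hadamard on qubit 0, then CNOT(0 -> 1), then CNOT(1 -> 2).
encode = kron(I2, CNOT) @ kron(CNOT, I2) @ kron(H, I2, I2)
decode = encode.conj().T  # the decoding cascade is simply the inverse network

psi0 = np.zeros(8)
psi0[0] = 1.0                           # |000>
cat = encode @ psi0                     # (|000> + |111>)/sqrt(2)
print(np.round(cat, 3))                 # amplitude ~0.707 on |000> and |111> only
print(np.allclose(decode @ cat, psi0))  # decoding returns the initial state
```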
The three qubit cat-state benchmark can be generalized to any number $`n`$ of qubits by repeating the steps of the cascade in the networks shown in Fig.1. We implemented the seven qubit version using fully labeled trans-crotonic acid (Fig.2). The qubits are given by the spin $`\frac{1}{2}`$ component of the methyl group, the two protons adjacent to the double bond and the four carbon $`13`$ nuclei. A fidelity of $`.73\pm .02`$ was achieved. The loss of signal is primarily due to spin relaxation, incomplete refocusing of couplings and intrinsic defects in using selective pulses. The success of the experiment derives from the use of the following techniques: 1. An RF imaging method to greatly reduce the effects of RF inhomogeneities. 2. A gradient based selection method for removing signal from the spin $`\frac{3}{2}`$ component of the methyl group in an almost optimal way. 3. The use of abstract reference frames for each nucleus to absorb chemical shift and first order off-resonance effects in selective pulses. 4. Precomputation of coupling effects during pulses. 5. A pulse sequence compiler that optimizes delays between pulses for achieving the desired amount of coupling evolution while minimizing unwanted couplings. All these techniques are scalable in principle. Further details are in the methods section.
The cat-state benchmark has three applications that promise to make it useful for NMR and other quantum technologies. First, the benchmark demonstrates the ability to reach the maximum coherence with little loss of signal. Previous experiments have generated coherences by exploiting symmetry and effective Hamiltonian methods. Very high order coherences can be observed in solid state . A maximal coherence of order seven was detected by Weitekamp et al. in benzene with one carbon labeled by exploiting symmetry. The methods used in these cases do not yield the amount of signal that can be achieved by using methods based on quantum networks.
Second, the output of the benchmark can be used as a very reliable pseudo-pure state for quantum algorithms. We can write the maximum coherence on $`n`$ qubits as a sum of two operators $`\stackrel{~}{X}=|00\dots \rangle \langle 11\dots |+|11\dots \rangle \langle 00\dots |`$ and $`\stackrel{~}{Y}=i|11\dots \rangle \langle 00\dots |-i|00\dots \rangle \langle 11\dots |`$. The decoding operator converts $`\stackrel{~}{X}`$ to $`\sigma _x^{(1)}|0\dots \rangle \langle 0\dots |`$ and $`\stackrel{~}{Y}`$ to $`\sigma _y^{(1)}|0\dots \rangle \langle 0\dots |`$. These states can be used as a pseudo-pure input to a quantum algorithm using one less qubit, provided the following two problems are addressed: The first problem is to ensure that the labeling method can be used together with a subsequent algorithm. The second problem is to eliminate errors accumulated when decoding the $`n`$-coherence.
Clearly, the method used to label the $`n`$-coherence must be reliable. The gradient based method is effective in conjunction with a quantum algorithm, as long as the echo pulse is applied just before the final observation. Unfortunately, diffusion introduces loss of signal at the gradient strengths required when used with long algorithms. Also, gradient methods do not easily generalize to other proposed quantum devices–an important issue in benchmarking. To label the $`n`$-coherence one can instead perform $`2n+1`$ experiments, where in the $`k`$’th experiment, the gradient is replaced by explicit pulses that rotate each qubit by a phase $`\varphi _k=2\pi k/(2n+1)`$. If $`o_k`$ is the expectation of the observable measured at the end of the $`k`$’th experiment, then the value $`o=\sum _ko_ke^{i2\pi kn/(2n+1)}`$ is non-zero only for signal originating at the $`n`$-coherence. This technique can be applied in any system where it is possible to apply $`z`$-rotations reliably. If the phase of applied pulses is highly controllable (as is the case in systems controlled by RF or optical fields), instead of applying explicit pulses to accomplish the $`z`$-rotations, one can change the reference frame for each qubit, which is equivalent to changing the phase of all subsequent pulses and the observation reference phase by $`\varphi _k`$. This is essentially a phase cycling method for selecting the $`n`$-coherence.
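A toy numerical check of this selection rule is shown below. It assumes the convention that a collective $`z`$-rotation by $`\varphi `$ multiplies an $`m`$-quantum coherence by $`e^{-im\varphi }`$ (the opposite sign convention would simply select the $`-n`$-quantum component instead); with that convention the weighted sum over the $`2n+1`$ phase-cycled experiments retains only the $`m=n`$ term:

```python
import numpy as np

def select_n_coherence(coeffs, n):
    """Phase-cycling selection of the n-quantum coherence.

    `coeffs[m]` is the complex amplitude of the m-quantum coherence in the
    prepared state (m = -n..n). Each of the 2n+1 experiments rotates all
    qubits by phi_k = 2*pi*k/(2n+1); the weighted sum keeps only m = n.
    """
    K = 2 * n + 1
    total = 0.0
    for k in range(K):
        phi = 2 * np.pi * k / K
        o_k = sum(c * np.exp(-1j * m * phi) for m, c in coeffs.items())
        total += o_k * np.exp(1j * 2 * np.pi * k * n / K)
    return total / K

rng = np.random.default_rng(0)
coeffs = {m: complex(rng.normal(), rng.normal()) for m in range(-7, 8)}
print(np.allclose(select_n_coherence(coeffs, 7), coeffs[7]))  # True
```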
The problem of decoding error can in principle be solved by using the maximum coherence directly as the input for a (modified) algorithm. However, it is not possible to obtain a reference signal for the $`n`$-coherence without first mapping it to an accessible observable, which can involve a loss of signal. Another problem is that it may be inconvenient to use the $`n`$-coherence instead of the more familiar standard pseudo-pure state. Our experiments show that we can decode the $`n`$-coherence to the pseudo-pure state with no detectable error in the observed spectrum. There can be error signal in unobserved operators which one would like to eliminate from future observation. This can be done efficiently by performing multiple experiments, each with a random phase of $`0`$deg or $`180`$deg applied to qubits $`2,3,\dots `$, a technique which is a special case of the randomized methods of . The number of experiments that need to be performed depends on the desired level of suppression of possible error signals. $`N`$ experiments result in suppression by a factor of $`O(1/\sqrt{N})`$.
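The $`O(1/\sqrt{N})`$ suppression can be illustrated with a toy average: the wanted term, which is diagonal on qubits $`2,3,\dots `$, is unaffected by the random $`180`$deg phases, while an unwanted coherence involving those qubits picks up a random sign in each experiment (a schematic model only, not a simulation of the spin system):

```python
import numpy as np

rng = np.random.default_rng(1)

def averaged_signal(n_exp, error_amp=1.0):
    """Toy model of randomized phase averaging over n_exp experiments."""
    wanted = 0.0
    unwanted = 0.0
    for _ in range(n_exp):
        sign = rng.choice([-1.0, 1.0])  # random 0/180-degree phase on a qubit
        wanted += 1.0                   # the desired term is phase-insensitive
        unwanted += sign * error_amp    # the error term flips sign at random
    return wanted / n_exp, abs(unwanted) / n_exp

for N in (1, 16, 256):
    w, u = averaged_signal(N)
    print(f"N = {N:4d}: wanted = {w:.2f}, residual error ~ {u:.3f}")
```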
The final application of the cat-state benchmark is as an experiment to test the ability to coherently control a quantum system and demonstrate a fully coherent implementation of a non-trivial quantum algorithm. This is a critical issue for scalable quantum information processing, as scalable robustness requires that each operation has a maximum error below some threshold (which may depend on the types of errors) . The known thresholds seem to be dauntingly small. Nevertheless, interesting small scale computations may be performable with much higher error per gate. Thus the ability to implement the cat-state benchmark with high fidelity is a good indication of what types of tasks can be accomplished in the system at hand. In addition, the decoding algorithm of the cat-state benchmark is an instance of the type of process required to perform fault-tolerant error-correction , which is believed to be a necessary subroutine in any large scale quantum computation. Our experiment involved a total of twelve useful two-qubit operations, so the fidelity of $`.73`$ suggests an error of about $`.023=.27/12`$ per coupling gate. If this degree of control were available in the context of quantum communication, it would be close to the known thresholds .
The realization of the cat-state benchmark given here is in an ensemble setting. Most proposals for quantum devices involve individual systems with pure initial states. In these cases the benchmark can be modified by replacing the ensemble measurements by repetition to infer $`o_k`$ with sufficiently high signal to noise. The preparation step is replaced by a network that directly maps the available initial state to the cat state. Note that any evaluation of a quantum device involves substantial repetition, essentially replacing the ensemble measurement by an ensemble in time.
Methods. We used a Bruker DRX-500 NMR spectrometer with a triple resonance probe for our experiments. (The triple resonance probe is normally used for proton, carbon 13 and nitrogen 15; we used only the first two.) All the equipment used was standard with no specialized modifications. The chemical structure of trans-crotonic acid is given in Fig.2. Deuterated chloroform was used as the solvent. The chemical shifts (at 298K and 500Mhz) and coupling constants were experimentally determined to within .1Hz by direct analysis of the proton and carbon spectra (see Fig.2). This data was used to design selective pulse shapes and times. Only $`90`$deg and $`180`$deg rotations were used in the pulse sequence. The hard and selective pulses were analyzed by simulation on single and pairs of nuclei and represented optimally as a composition of phase shifts, $`\sigma _z\sigma _z`$ couplings and an ideal $`90`$deg or $`180`$deg pulse. The simulation is efficiently scalable, requiring $`7(7+1)/2`$ two qubit simulations for the seven qubit register. This permits elimination of most first order errors due to off-resonance and coupling effects without using specialized shapes. The computed phase shifts were absorbed into the rotating frame of each nucleus, while the computed coupling effects contributed to the coupling operations or were refocused. To implement quantum information processing tasks, we began with an ideal quantum network expressed in terms of $`90`$deg rotations and $`1/(2J)`$ coupling evolutions. Refocusing pulses are then inserted, and an optimizing pulse sequence compiler is used to determine the best choice of delays between pulses to achieve the desired evolution. The compiler permitted us to automate many of the tasks of translating a quantum network to a pulse sequence. Often smaller couplings cannot be perfectly refocused without an excessive number of pulses. Error due to imperfect refocusing is explicitly estimated by the compiler. The final pulse sequence used in our experiment required 48 pulses with an estimated signal loss due to coupling errors of $`.15`$.
To have accurate pulses, we have reduced the effect of RF inhomogeneities present in standard configurations by selecting signal based on RF power. The method first applies a $`90`$deg excitation pulse to the methyl group followed by a sequence of pairs of $`180`$deg rotations at phases of $`\pm \varphi _k`$, where $`\varphi _k`$ was determined from one of the selective pulse shapes we used (modified by an initial sequence to compensate for an off-resonance effect) and designed to cause a $`90`$deg phase shift in the signal at an RF power of about $`\pm 2\%`$ of the ideal. At other powers, the effect is such that a phase cycle involving a change of sign of the $`90`$deg phase shift eliminates the signal. A final pulse returns the selected signal along the z-axis in preparation for the next step. By calibrating the power, we were able to retain $`25\%`$ of the signal compared to an unselected spectrum. This sequence also has the property of selecting signal from only the methyl group so that the initial state is $`\sigma _z^{(M)}`$.
The next step in the experiment required selecting the spin $`1/2`$ component of the methyl group’s state space. This can be accomplished by use of a three step sequence involving transfer of polarization to the adjacent carbon and terminated by a gradient “crusher” (Fig.3). The elimination of signal from the spin $`\frac{3}{2}`$ states was verified by three experiments involving observation of the signal on the adjacent carbon after transfer of the methyl polarization with different delays for coupling. One of the resulting spectra is shown in Fig.4 with the standard reference spectrum. We were unable to detect error signal above the noise.
The remaining steps of the experiments consist of the generation of the $`n`$-coherence, labeling the $`n`$-coherence, and the decoding operations to obtain the standard pseudo-pure state, which was observed on the methyl-carbon $`C_1`$. We chose $`C_1`$ for making observations because all the couplings are adequately resolved there. The sequence is as described earlier, with judiciously inserted refocusing pulses and optimized delays. All the pulse phases were computed automatically for the nuclei’s individual reference frames. Knowledge of the intended current state of a nucleus was exploited when that state is $`|00|`$ or $`|11|`$ to absorb the effects of couplings to that nucleus into the reference frame. The compiled pulse sequence, pulse shapes and other required information needed for running on a Bruker spectrometer is available from the authors. Fig.5 shows the pseudo-pure state signal compared to a reference spectrum obtained after selection of the spin $`\frac{1}{2}`$ selection sequence on the methyl group. Errors can show up as peaks in positions different from the leftmost one. We could not detect such errors above the noise. The fidelity is given by the ratio of the intensity of the left most peak in the final signal to the intensity of a peak in the reference spectrum and was computed to be $`.73\pm .02`$.
Acknowledgments. We thank C. Unkefer for help in synthesizing labeled crotonic acid, David Cory and Tim Havel for advice in using NMR spectrometers, S. Lacelle for suggesting the idea of using crotonic acid, D. Lemaster and G. Fernandez for daily help at the spectrometer and W.H. Zurek for encouraging us to exceed our expectations. This research was supported by the Department of Energy under contract W-7405-ENG-36 and the National Security Agency. We thank the Newton Institute, where part of this work was completed.
# Peculiar Velocities from Type Ia Supernovae
## 1 What’s been done
Type Ia supernova (SNe Ia) are a newcomer to the toolbox used for measuring peculiar velocities and flows. Yet, with the rapid increase in the sample size and precision of distance estimates to these luminous disruptions of white dwarf stars, SNe Ia show great promise in this field.
I will begin with a review of the published literature (of only 4 or 5 articles) of the applications of SNe Ia to flow measurements.
The first attempted use of SNe Ia for flow measurements began in the “low-precision era” which lasted until the early 1990s. This era was characterized by the use of photographic photometry and the assumption that SNe Ia were perfect “standard candles” with homogeneous luminosity and colors. Such data and philosophy yielded distance estimates with approximately 25% uncertainty. Additionally, the sample of SNe Ia were concentrated at much closer distances than the present (and future) sample.
Using 28 primarily photographically observed SNe Ia, Miller & Branch (1992) were able to discern the gravitational influence of the Virgo cluster, i.e. Virgocentric infall. Because the average depth of their sample was only about $`cz`$=2000 km s<sup>-1</sup>, their analysis was insensitive to the motion of the Local Group and the influence of the Great Attractor.
Jerjen & Tammann (1993) analysed a similar sample of 14 SNe Ia with an average depth of $`cz`$=3000 km s<sup>-1</sup> (again under the assumption of homogeneity of luminosity) but could not detect the motion of the Local Group.
By 1996, work by Phillips (1993), the Calán/Tololo Search (Hamuy et al. 1996a,b,c,d; Maza et al. 1995) the CfA Group (Riess, Press, & Kirshner 1995a, 1996) and others (see Branch 1998 for a review) demonstrated that with high-quality CCD light curves and application of relations between the peak luminosity, light curve shapes and color curve shapes, individual distance estimates to SNe Ia could reach observed precisions of 5-7%. These improvements and the growing sample of CCD light curves ushered in the “high precision era”.
An analysis in my thesis (Riess, Press, & Kirshner 1995b) of 13 new SNe Ia from the Calan/Tololo Search made the first detection of the motion of the Local Group using SNe Ia. The sample had an effective depth of $`cz`$=7000 km s<sup>-1</sup> and a typical distance precision of 6%. At this time, no corrections were applied for host galaxy extinction, though the members of the sample exhibited little reddening. Interestingly, the SN Ia measurement was strongly inconsistent with the large bulk flow observed from brightest cluster galaxies by Lauer & Postman (1994), a significant result since it was the only other sample at a similar depth. Nearly all of the observed disagreement occurred in the Galactic $`\widehat{z}`$ direction. Despite the likely effects of correlations of small-scale flows (Feldman & Watkins 1995), the measurements remained in conflict. However, the relative imprecision of the SN Ia measurement could not rule out more moderate bulk flows on these scales.
Recently I have updated this measurement using the light and color curves of 44 SNe Ia with effective $`cz`$=5000 km s<sup>-1</sup>. This sample has been corrected for host galaxy extinction using the multicolor light curve shape method (Riess, Press, & Kirshner 1996). The results, shown in Figure 1, are highly consistent with the previous SN Ia measurement, but have greater precision. The best-fit dipole is consistent with the CMB dipole. Relocating the SNe Ia into the CMB frame results in no measurable bulk flow (the debiased flow is negligible) with a 1$`\sigma `$ uncertainty of 150 km s<sup>-1</sup>.
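The simplest version of such a dipole fit models each radial peculiar velocity as the projection of a single bulk-flow vector plus noise. The sketch below is only a schematic of that least-squares step, with fabricated inputs; the actual analysis involves the measured SN Ia distances and redshifts, frame transformations and bias corrections:

```python
import numpy as np

def fit_bulk_flow(nhat, v_pec, sigma):
    """Weighted least-squares fit of a bulk-flow vector V to radial peculiar
    velocities, assuming v_pec_i ~ V . nhat_i + noise.
    nhat: (N, 3) unit vectors toward each SN; v_pec, sigma: (N,) arrays."""
    w = 1.0 / np.asarray(sigma) ** 2
    A = (nhat * w[:, None]).T @ nhat              # 3x3 normal matrix
    b = (nhat * (w * v_pec)[:, None]).sum(axis=0)
    V = np.linalg.solve(A, b)
    return V, np.linalg.inv(A)                    # best fit and its covariance

# Fabricated example with 44 random sky positions and a known input flow:
rng = np.random.default_rng(2)
nhat = rng.normal(size=(44, 3))
nhat /= np.linalg.norm(nhat, axis=1)[:, None]
v_in = np.array([300.0, -200.0, 100.0])           # km/s, arbitrary test vector
v_pec = nhat @ v_in + rng.normal(scale=150.0, size=44)
V, cov = fit_bulk_flow(nhat, v_pec, sigma=np.full(44, 150.0))
print(np.round(V), np.round(np.sqrt(np.diag(cov))))
```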
An analysis by Riess, Davis, Baker, & Kirshner (1997) compared the observed peculiar motions of 25 SNe Ia with $`cz<10,000`$ km s<sup>-1</sup> to those predicted from the IRAS and ORS gravity maps (Nusser & Davis 1994). The predicted peculiar velocities of SNe Ia are a function of the local mass in the Universe ($`\mathrm{\Omega }_M`$) as well as the degree to which the positions of galaxies indicate the location of mass (i.e., the bias parameter). Together these unknowns are quantified by the density parameter, $`\beta =\mathrm{\Omega }_M^{0.6}/b`$. The comparison of the observed and predicted peculiar velocities of SNe Ia yields a statistically adequate match as well as strong constraints on the value of $`\beta `$. The fact that the observed and predicted peculiar velocity estimates concur (for the best-fit $`\beta `$) supports the gravitational instability paradigm as the source of peculiar flows. The results of the analysis are $`\beta =0.40\pm 0.15`$ from the IRAS comparison (and $`\beta =0.30\pm 0.15`$ from the ORS comparison, reflecting the relative biasing of infrared and optically selected galaxies). Bootstrap resamplings of the gravity maps and the SN Ia sample confirms the validity of the uncertainties.
Although discussed previously by others at this conference, for completeness we mention an analysis by Zehavi, Riess, Kirshner, & Dekel (1998) which gives a marginal indication of a so-called “Hubble Bubble”. From 44 SNe Ia, Zehavi et al. (1998) found an indication at the 2-3$`\sigma `$ confidence level of a local excess expansion of 6% within 7000 km s<sup>-1</sup>. This increase relative to the global Hubble expansion appears to be compensated by a small decrease beyond this depth, after which the Hubble expansion appears to settle to its global value. The model proposed by the authors is that we may live within a local void bounded by a wall or density contrast at $`\sim `$100 Mpc. More SNe Ia (and other distance indicators) will be required to test this provocative result.
## 2 The Future
Type Ia supernovae are an attractive tool for contributing to the measurement of peculiar velocities and flows in the future. SNe Ia provide independent means to measure flows at depths unreachable by many other distance indicators. The individual distance precision of SNe Ia results in a reduction of systematic errors like Malmquist and sample selection biases which often plague peculiar velocity studies. Individual SNe Ia can be corrected for line-of-sight extinction, eliminating a reliance on Milky Way extinction maps or inclination corrections. Finally, the pace at which SNe Ia are discovered is growing. By 1999 July 5 SN 1999da had already been discovered, starting the 5th cycle through the alphabet after only half a year! In 1998, 20 new SNe Ia with $`z<0.1`$ were added to the sample useful for peculiar velocity studies.
A new era of nearby supernova searches is underway. Below we list in “bullet-form” searches and collection programs including some of their members, facilities, start dates, and successes.
• Mount Stromlo & Siding Spring Observatory SN Search (Schmidt, Germany, Stubbs, Reiss)
Up since 1996, 2.3 m at MSSSO, $`>20`$ SNe Ia at $`z<0.1`$ so far…
• Beijing Astronomical Observatory SN Search (Li, Qiu, Hu, etc.)
Up since 1996, 0.6 m at BAO, 13 SNe Ia at $`z<0.1`$ found so far…
• Lick SN Search (Filippenko, Li, Treffers, etc.)
Up since 1997, KAIT robotic telescope, 16 SNe Ia at $`z<0.1`$ found so far…
• Supernova Cosmology Project Nearby Search (Perlmutter, Aldering, etc.).
Started in 1999, many telescopes, $`7`$ SNe Ia at $`z<0.1`$ found so far…
• CfA Program (Kirshner, Jha, Garnavich, Schmidt, Riess, etc.)
Collecting since 1993, $`\sim `$ 50 SNe Ia collected so far…
Others: Perth, EROS, Wise, Tenagra, J. Maza, T. Puckett, W. Johnson, etc.
What have the past and present searches produced so far? I have compiled the list of all SNe Ia to date which met the following requirements:
• CCD photometry
• $`z<0.1`$
• enough observations recorded to yield precise distances
This list has 115 SNe Ia. Their positions on the sky and depth can be see in Figure 2. About half of these data have been published already (29 from Hamuy et al. 1996; 29 from Riess et al. 1998, 1999; 15 others in the literature) and the rest are “in the cans” of the various searches listed above. The average depth of this sample is 11,000 km s<sup>-1</sup> and the effective depth for flow measurements is 5,000 km s<sup>-1</sup>. There are 60 objects with $`cz<10,000`$ km s<sup>-1</sup>. By looking at Figure 2 we note a few points. Although the distribution between Galactic North and South is not heavily skewed, there are more objects in the North. The typical depth in the South is somewhat greater. The zone of avoidance has been strongly avoided to date. Some concentrations like the Perseus-Pisces Supercluster and Coma are not probed while others (Virgo and Fornax) are well probed.
Although there has been little coordination in the past between searches, the results are impressive. This sample, and the ever growing future sample, will be a powerful data set with which to measure the peculiar motions of test particles subject to gravity in the Universe.
###### Acknowledgements.
I wish to express my thanks to Lisa Germany, Greg Aldering, Saurabh Jha, and Weidong Li for providing lists of discovered SNe Ia.
# Dualist interpretation of quantum mechanics
## I Introduction
The main conceptual problem in quantum mechanics remains the link between the quantum and the classical worlds. The wavefunction in quantum theory typically spreads out over a range of possible values of an observable property of a system, such as the position of a particle. Furthermore, the Schrödinger equation that describes the evolution of the wavefunctions is linear and deterministic, which implies that even macroscopic objects may not possess a determinate position, momentum, or any other property. And yet, some way must be found to account for the impression we have that macroscopic objects have determinate positions and velocities at all times. Much has been written over the years on possible solutions to this problem, particularly on a solution based on what is known as the orthodox or the Copenhagen interpretation . The orthodox interpretation relies on the projection postulate, also known as the collapse of the wavefunction, to provide the required link between the wavefunction and what is actually measured in experiments. The postulate states that given an initial wavefunction of a quantum system in a superposition of eigenstates of an observable property, it will collapse to one of those eigenstates whenever it undergoes a measurement by a macroscopic and classical instrument. The wavefunction after the measurement is therefore the eigenstate corresponding to the eigenvalue of the observable which is obtained by the measurement. The probability of obtaining a given eigenvalue is equal to the square modulus of the coefficient of the normalized initial wavefunction for that particular eigenvalue-eigenstate (see Ref. for a complete account of the orthodox interpretation).
The questions that this account of quantum mechanics entails are well known. The wavefunction collapse is a non-unitary, irreversible process, which, on the face of it at least, is completely at odds with the unitary, reversible evolution described by the Schrödinger equation. Also, the orthodox account of a quantum measurement made use of a dichotomy between microscopic quantum systems and macroscopic classical instruments. When quantum systems evolve in isolation, they do so according to Schrödinger’s equation, but when they interact with a classical instrument, their wavefunction collapses. And yet, are not the macroscopic instruments made up of microscopic quantum particles? If the Schrödinger equation is truly universal, this would mean that the instrument’s pointer should be in a superposition of macroscopically distinguishable states after the measurement interaction. What would such a result mean? Alternatively, if the Schrödinger equation is not universal and collapses do occur, there still remains the question of under what conditions they occur. It is the resolution of this ‘measurement problem’ which is the focus of the different interpretations of quantum mechanics.
The dualist interpretation attempts to solve the measurement problem, while avoiding the difficulties of previous attempts, by augmenting a pilot-wave interpretation with an action-reaction principle. We will therefore begin with a review of some pilot-wave interpretations, along with a few other interpretations, in Sec. II. The review is by no means exhaustive. We will only examine those relevant to the dualist interpretation, which will be explained in Sec. III. Discussions on the relative merits of the dualist interpretation with respect to the others, along with areas still to be explored, are given in Sec. IV. We conclude in Sec. V.
## II Review of some Interpretations of Quantum Mechanics
As mentioned before, this review does not include all attempts to answer the measurement problem. Specifically, we will not look at what Bub called the ‘new orthodoxy’, which is an interpretation built on notions of consistent histories, quantum logic and environment-induced decoherence and claims to do away with the need for a wavefunction collapse to describe the classical world (see Omnès , Zurek and Paz , Zurek and the corresponding comments and reply ). Also omitted are the statistical and relative state interpretations of quantum theory, among others (see Ref. for a review).
### A Pilot-wave interpretations
#### 1 Deterministic version
Originally proposed by de Broglie and later revived by Bohm , the pilot-wave approach states that in addition to the wavefunction propagating through space, there also exists a particle guided by the wave and moving along a well defined trajectory. For a single nonrelativistic, spinless particle of mass $`m`$, we have
$$i\hbar \frac{\partial \psi }{\partial t}=-\frac{\hbar ^2}{2m}\nabla ^2\psi +V\psi $$
(1)
$$\frac{\mathrm{d}𝐱_p}{\mathrm{d}t}=\frac{\hbar }{2mi}\frac{\psi ^{\ast }\nabla \psi -\psi \nabla \psi ^{\ast }}{|\psi |^2}|_{𝐱=𝐱_p}$$
(2)
where $`\psi =\psi (𝐱,t)`$ is the wavefunction evolving under the influence of a potential $`V`$, and $`𝐱_p=𝐱_p(t)`$ is the trajectory of the particle. Both the wavefunction and the particle are considered real, even though the wavefunction exists in configuration space for a many-body system. We will therefore say that the pilot-wave interpretation possesses a dual ontology. A complete description of an individual quantum system must therefore include an initial wavefunction and an initial particle position. It is convenient to express the wavefunction in polar coordinates, $`\psi =R\mathrm{exp}[iS/\hbar ]`$, and to rewrite Eqs. (1) and (2).
$$\frac{\partial S}{\partial t}+\frac{(\nabla S)^2}{2m}+V-\frac{\hbar ^2}{2m}\frac{\nabla ^2R}{R}=0$$
(3)
$$\frac{\partial P}{\partial t}+\nabla \cdot (P𝐯)=0$$
(4)
$$\frac{\mathrm{d}𝐱_p}{\mathrm{d}t}=\frac{\nabla S}{m}|_{𝐱=𝐱_p}$$
(5)
where $`P=R^2`$ and $`𝐯=\nabla S/m`$. Equation (3) is a classical Hamilton-Jacobi equation with an additional term depending on the form of the $`R`$-field (called the ‘quantum potential’ by Bohm , which imposes a corresponding ‘quantum force’ on the particle). Equation (4) is a conservation equation for $`P`$ with a velocity field that also serves to guide the particle in Eq. (5).
We will call the above the Deterministic Pilot-Wave (DPW) interpretation, although it has also been referred to in the literature as the de-Broglie-Bohm interpretation, the causal interpretation, the ontological interpretation, or even as Bohmian mechanics. Recently, there has been a resurgence of interest in the DPW interpretation, resulting in a series of articles reformulating well known quantum mechanical problems, such as Heisenberg’s uncertainty principle , the double-slit experiment , quantum fields and gravitation , the correspondence principle , spin measurements and the EPR experiment , the harmonic oscillator , relativistic quantum mechanics , and culminating in three major works on the subject (see Ref. for a concise overview). The DPW interpretation also allows us to speak clearly about arrival time distributions , something which the orthodox interpretation has difficulty defining. The main advantage of DPW is its answer to the measurement problem: the particle under study and the instrument’s pointer have determinate positions at all times, the measurement interaction induces a correlation in the combined particle-pointer wavefunction, which, by the guidance condition (Eq. (5)), also induces a correlation between the particle and pointer positions. Contrary to the orthodox interpretation, the wavefunction never undergoes a collapse. Equations (1) and (2) entirely describe the dynamics of the wave-particle system. DPW is therefore a no-collapse interpretation, as well as possessing nonlocal ‘hidden variables’. The hidden variables are the particle positions, and they are nonlocal because a change in one particle position can instantly induce a change in another distant particle entangled with the first.
It is immediately apparent, however, that the probability density of the particle position, $`\rho (𝐱_p)`$, is logically distinct from the square modulus of the wavefunction, $`P`$. The statistics of quantum theory are only reproduced in DPW theory if $`\rho =P`$, a condition Dürr et al. called ‘quantum equilibrium’ (better known as Born’s statistical postulate). The fact that the particle velocity is the same as the velocity in the $`P`$-field conservation law (Eq. (4)), $`\mathrm{d}𝐱_p/\mathrm{d}t=𝐯(𝐱_p)`$, means that if quantum equilibrium holds for a given time, it will hold for all times. Of course, the opposite is also true, as was pointed out by Keller . One answer to this problem is to simply postulate quantum equilibrium. Alternatively, one can invoke the fact that for a complicated system with a large number of degrees of freedom, the motion of the particles will be sufficiently complicated as to produce an effective mixing and diffusion of $`\rho `$, in a coarse-grained sense . An initial probability distribution not in quantum equilibrium will eventually reach it, resulting in what Valentini called a subquantum H-theorem . The use of probability in DPW theory is therefore no better or worse than the use of probability in classical statistical mechanics (see Sklar for a lucid account of the conceptual difficulties in classical statistical mechanics). It is interesting to note that the guidance condition, Eq. (5), does not uniquely conserve quantum equilibrium. As Deotto and Ghirardi showed, it is possible to add an additional velocity field to the guidance condition that still preserves quantum equilibrium. We therefore have many possible guidance conditions in the DPW interpretation which can reproduce the predictions of orthodox quantum theory, although the one described by Eq. (5) is the simplest.
Although the DPW interpretation includes particle trajectories, similar to the trajectories in classical physics, it is by no means guaranteed that very large and massive bodies would exhibit classical trajectories . In other words, even though the quantum potential scales as $`m^1`$, it still may show a singular behaviour in regions where $`R0`$, causing significant deviations from the expected classical behaviour. According to Bohm and Hiley , and Appleby , the interaction with a random environment induces random fluctuations in the quantum potential such that classical trajectories are recovered to a good approximation. Since the evolution of the wavefunction is rigorously deterministic, however, the randomness of the environment cannot be derived from the dynamics, but must be assumed (i.e. as reflecting our ignorance of its many degrees-of-freedom).
Furthermore, should the wavefunction be split into two or more non-overlapping wave packets in a measurement process, the particle will be seen to be in one of the packets. But since the wavefunction is considered objectively real, the empty wave packets will continue to exist after the measurement, a situation deemed awkward by Stapp . In the orthodox interpretation, the measurement only allows the packet corresponding to the eigenvalue which is obtained in the measurement, all other packets are eliminated. In the DPW interpretation, the empty packets are not eliminated and might interfere with the occupied packet at a later time. Bohm and Hiley invoke the interaction of the system with the measurement apparatus, and the interaction of the measurement apparatus with the environment, to argue that the empty packets rapidly become incapable of effectively affecting the occupied packet, thereby reproducing the measurement process in the orthodox interpretation. Other objections to the DPW interpretation refer to its lack of symmetry, i.e. position and momentum are no longer on the same footing, position now being a preferred variable . Also, the wave acts on the particle but not the other way around . As we will see, the last objection does not apply to the dualist interpretation.
#### 2 Stochastic version
A variant on the DPW interpretation is what we will call the Stochastic Pilot-Wave (SPW) interpretation. Here, as in the DPW theory, the particle and the wavefunction are both objectively real, and the evolution of the wavefunction is described solely by the deterministic Schrödinger equation. The motion of the particle, however, is described by a stochastic process which is conditioned by the wavefunction. Inspired partly by the idea of Bohm and Vigier of a subquantum fluid imparting random fluctuations on the particle, Nelson formulated a SPW theory in terms of a continuous Markov process. The stochastic process of the particle is described by the Langevin equation (see Gillespie for a concise account of Markov processes),
$$𝐱_p(t+\mathrm{d}t)=𝐱_p(t)+𝐛(𝐱_p,t)\mathrm{d}t+D^{1/2}𝐰(t)(\mathrm{d}t)^{1/2}$$
(6)
where $`D=\hbar /m`$ is the diffusion constant, and $`𝐛(𝐱_p,t)`$ is the drift function given by
$$𝐛=\frac{\nabla S}{m}+\frac{\hbar }{m}\frac{\nabla R}{R}.$$
(7)
The first term is the particle velocity in Eq. (5), while the second term is an osmotic velocity. The function $`𝐰(t)`$ is an uncorrelated random time series with zero mean and unit variance:
$$<w_i(t)>=0$$
(9)
$$<w_i(t)w_j(t)>=\delta _{ij}$$
(10)
$$<w_i(s)w_j(t)>=0,\;\;s\neq t$$
(11)
$$<x_{p,i}(s)w_j(t)>=0,\;\;s\leq t$$
(12)
where the angle brackets denote an ensemble average. Defining $`\rho (𝐱_p,t|𝐱_{p0},t_0)`$ as the conditional probability density function for the particle position $`𝐱_p`$ at the time $`t`$, given that the position was $`𝐱_{p0}`$ at the earlier time $`t_0<t`$, it can be shown that Eqs. (6) to (7) result in a (forward) Fokker-Planck equation,
$$\frac{\partial }{\partial t}\rho (𝐱_p,t|𝐱_{p0},t_0)+\nabla \cdot [\rho (𝐱_p,t|𝐱_{p0},t_0)𝐛(𝐱_p,t)]=\frac{D}{2}\nabla ^2\rho (𝐱_p,t|𝐱_{p0},t_0)$$
(13)
where the time and space derivatives are with respect to $`t`$ and $`𝐱_p`$. Equation (13) diffuses the conditional probability density function in such a way that $`\rho (𝐱_p,t_0+\tau |𝐱_{p0},t_0)\rightarrow P(𝐱_p,t_0+\tau )`$ as $`\tau \rightarrow \mathrm{\infty }`$ . Also, once quantum equilibrium is achieved, the osmotic velocity and diffusion terms cancel each other out. Equation (13) then becomes identical to Eq. (4), and so quantum equilibrium is conserved. Given an initial probability density, $`\rho (𝐱_{p0},t_0)`$, the probability density at a later time, $`\rho (𝐱_p,t)=\int \rho (𝐱_p,t|𝐱_{p0},t_0)\rho (𝐱_{p0},t_0)\mathrm{d}^3x_{p0}`$, also obeys a Fokker-Planck equation,
$$\frac{\partial }{\partial t}\rho (𝐱_p,t)+\nabla \cdot [\rho (𝐱_p,t)𝐛(𝐱_p,t)]=\frac{D}{2}\nabla ^2\rho (𝐱_p,t).$$
(14)
Quantum equilibrium can therefore be seen as a natural consequence of the SPW interpretation, without the need for mixing or coarse-graining. As with the DPW interpretation, it is possible to modify the stochastic guidance condition, Eq. (6), in such a way as to attain and preserve quantum equilibrium . And although, under certain conditions, the most probable particle path is approximately classical , there are also conditions where the SPW interpretation may exhibit significantly non-classical paths, even for very massive bodies.
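As an illustration of how the stochastic guidance condition relaxes an arbitrary ensemble towards quantum equilibrium, the following minimal sketch (not from the paper) integrates Eq. (6) with the Euler-Maruyama method for the harmonic-oscillator ground state, where $`\nabla S=0`$ and the drift reduces to the osmotic term; $`\mathrm{}=m=\omega =1`$ are assumed for simplicity.

```python
# A minimal sketch (not from the paper): Nelson's stochastic guidance, Eq. (6),
# for the harmonic-oscillator ground state.  The drift is b(x) = -omega*x and the
# diffusion constant is D = hbar/m, so the ensemble should relax to |psi_0|^2,
# whose variance is hbar/(2*m*omega).  hbar = m = omega = 1 are assumed.
import numpy as np

rng = np.random.default_rng(0)
hbar = m = omega = 1.0
D = hbar / m
dt = 1e-3

x = rng.uniform(-3.0, 3.0, size=20000)      # start far from quantum equilibrium
for _ in range(5000):                        # Euler-Maruyama: dx = b dt + sqrt(D) dW
    x += -omega * x * dt + np.sqrt(D * dt) * rng.standard_normal(x.size)

print("empirical variance :", round(x.var(), 3))
print("quantum prediction :", hbar / (2.0 * m * omega))   # = 0.5
```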
Like its deterministic counterpart, the SPW interpretation has been the subject of numerous articles dealing with the physical meaning of the stochastic process , the extension of the theory to include spin , quantum fields , relativistic quantum mechanics and mixed states . Some have taken the stochastic process as fundamental, and the wavefunction as a derived quantity. However, as Wallstrom pointed out, the wavefunction must satisfy certain quantization conditions which cannot be derived from the components of the drift function alone. Also, the statistics of a mixed state can only be reproduced if the stochastic process of each wavefunction in the ensemble is treated separately, thereby conferring a reality to the wavefunction . It is for these reasons that we classify stochastic process interpretations as pilot-wave interpretations.
### B Explicit wavefunction collapse interpretations
An alternative to the introduction of a particle into the standard formulation of quantum mechanics is to add a mechanism, stochastic or deterministic, which causes the wavefunction to collapse. The collapse mechanism is intended as an answer to the ill-defined measurement process of the orthodox interpretation by exactly specifying the conditions under which the collapse occurs. Here, the wavefunction is assumed to completely describe an individual system. There are no particles. The wavefunction collapse mechanism, however, is not derived from the Schrödinger equation, but is added to it. The Explicit Wavefunction Collapse (EWC) interpretation, therefore, has a monist ontology, but dual dynamics. Strictly speaking, however, the EWC is not an interpretation of quantum mechanics, but rather a rival theory (as is the dualist interpretation). Nevertheless, we will still call the EWC an interpretation.
An influential model of EWC was introduced by Ghirardi, Rimini and Weber (GRW) . It is this model which will be of greatest interest to us, mainly because of its simplicity. GRW propose that the wavefunction undergoes spontaneous localization events at random times. Pearle called these events ‘hits’ . If the wavefunction of a single particle prior to a hit is $`\psi (𝐱,t)`$, the wavefunction undergoes the instantaneous collapse,
$$\psi (𝐱,t)\rightarrow (\alpha /\pi )^{3/4}F^{-1/2}\mathrm{exp}[-(\alpha /2)(𝐱-𝐳)^2]\psi (𝐱,t)$$
(15)
where the factor $`F`$ ensures that the altered wavefunction is correctly normalized, $`\alpha ^{-1/2}`$ is the localization width, and $`𝐳`$ is the center of the hit. The centers of the hits are random but not equally likely: $`F=F(𝐳)`$ is also equal to the probability density that the hit is centered at $`𝐳`$.
$$F(𝐳)=(\alpha /\pi )^{3/2}\int \mathrm{exp}[-\alpha (𝐱-𝐳)^2]|\psi (𝐱,t)|^2\mathrm{d}^3x$$
(16)
It is clear from Eq. (16) that $`\int F(𝐳)\mathrm{d}^3z=1`$. The hits are Poisson distributed in time with a frequency parameter $`\lambda `$. GRW set the localization width to $`\alpha ^{-1/2}\approx 10^{-5}\mathrm{cm}`$, and the frequency to $`\lambda \approx 10^{-8}\mathrm{yr}^{-1}`$. If we have a macroscopic object with $`N\approx 10^{23}`$ particles, GRW showed that the center-of-mass coordinate collapses whenever a single particle in the object undergoes a hit. Therefore, the center-of-mass collapses with a frequency $`N\lambda \approx 10^7\mathrm{s}^{-1}`$.
This means that for a single particle, the hits occur so infrequently that one is unlikely to observe it in an experiment. However, for a macroscopic object, the hits occur so frequently that it becomes very difficult to observe a coherent superposition of macroscopic states separated by a distance greater than the localization width. Other models have been developed that use a Continuous Stochastic Localization (CSL) process, rather than the discontinuous hits of the GRW model, for particle position , mass density, c-numbers , and with relativistic features . These models were criticized by Ballentine , who emphasized that the energy production they entail is incompatible with equilibrium and steady states. Furthermore, the GRW model has two new physical constants to evaluate, something which can be problematic . Indeed, since we have a monist ontology, the wavefunction must entirely account for the definitiveness of the macroscopic world. Therefore, the hit frequency must be low enough so as to leave microscopic dynamics essentially unchanged from the usual Schrödinger equation, but high enough so as to ensure that *all* of what we might consider macroscopic variables (the position of a spec of dust, say) collapse sufficiently quickly before anybody can ‘notice’. To the best of the author’s knowledge, the energy production predicted by GRW has not actually been detected so far. Only ambiguous or negative results were obtained, imposing limits or constraints on the possible values for the frequency parameter and localization width. Also, the GRW model (like the pilot-wave interpretation) requires that all quantum measurements are essentially position measurements. It is not clear that this should always be the case , but we will assume it to be true for the purpose of this work.
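To see Eqs. (15) and (16) at work, the sketch below (not from the GRW papers) applies a single hit to a discretized one-dimensional superposition of two well separated packets. The one-dimensional analogue of the localization operator is used, the overall normalization is done numerically, and the grid, packet separation and value of $`\alpha `$ are purely illustrative choices.

```python
# A minimal sketch (not from the GRW papers): one GRW 'hit' applied to a discretized
# 1D wavefunction that superposes two distant packets.  The hit centre z is sampled
# from the density F(z) of Eq. (16) (1D analogue), the Gaussian localization of
# Eq. (15) is applied, and the state is renormalized numerically.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
alpha = 1.0                                   # localization width alpha**-0.5 = 1 (arbitrary units)

psi = np.exp(-(x + 10.0)**2 / 2.0) + np.exp(-(x - 10.0)**2 / 2.0)   # two packets
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Probability density for the hit centre, evaluated on the same grid.
F = np.array([np.sum(np.sqrt(alpha / np.pi) * np.exp(-alpha * (x - z)**2)
                     * np.abs(psi)**2) * dx for z in x])
z = rng.choice(x, p=F / F.sum())              # sample the hit centre

psi_new = np.exp(-(alpha / 2.0) * (x - z)**2) * psi
psi_new /= np.sqrt(np.sum(np.abs(psi_new)**2) * dx)

left = np.sum(np.abs(psi_new[x < 0])**2) * dx
print(f"hit centred at z = {z:.2f}; weight in left packet after hit = {left:.3f}")
```

Typically the weight ends up almost entirely in one of the two packets, which is the localization behaviour described above.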
The monist ontology is also the cause of what is known as the ‘tail’ problem (also, see Ghirardi and Grassi in Ref. ). In orthodox quantum theory, a system can only be said to possess a determinate value, $`a`$, of an observable, $`A`$, if the c-number of the corresponding eigenfunction satisfies *exactly* $`|c_a|^2=1`$, and all other c-numbers are *exactly* zero. If $`|c_a|^2=1-ϵ`$, where $`0<ϵ\ll 1`$, then the system is said to be indeterminate with respect to the observable $`A`$, *no matter how small* $`ϵ`$ *may be*. In the orthodox theory, this requirement for determinateness is satisfied automatically by the reduction postulate. However, it is never satisfied in the GRW theory. To see this, we will suppose that the initial wavefunction in Eq. (15) is spread out over a macroscopically distinguishable distance $`D\gg \alpha ^{-1/2}`$. Clearly, then, the system does not initially possess a determinate position. But the position does not become determinate as a result of the hit described by Eq. (15). This is because, while the Gaussian responsible for the localization becomes very small far away from the center $`𝐳`$, it is never actually zero (i.e. it has infinitely long ‘tails’). It is possible to reformulate GRW in terms of localization functions that are not Gaussian . The tail problem would be partially remedied by choosing a localization function with finite support. But this tactic can only work for the discrete localization model, which does not preserve the symmetry of the wavefunction.
The CSL model does preserve the symmetry of the wavefunction, however. The continuous stochastic process in CSL causes the wavefunction to exponentially approach a determinate state (i.e.: $`|c_a|^2=1-\mathrm{exp}[-\gamma t]`$). But that determinate state is never actually attained in a finite time, and so the system will never possess a definite value of $`A`$. Proponents of the EWC interpretation attempt to overcome this difficulty by defining an ‘objective reality’ and a ‘projective reality’. The value $`a`$ of a system is said to be objectively real if $`|c_a|^2>1-ϵ_0`$, where $`0<ϵ_0\ll 1`$ is an objective reality threshold and constitutes an additional parameter in the theory. If this criterion is not satisfied, then a projection operator is applied, creating a field in space-time, $`|c_a|^2=|c_a|^2(𝐱,t)`$, that is considered projectively real.
### C Intrinsic decoherence
Another interesting modification to the Schrödinger equation is Milburn’s intrinsic decoherence model . Without going into detail, the intrinsic decoherence model postulates that over sufficiently small time scales, a system evolves by a random sequence of unitary phase changes generated by the Hamiltonian. These random phase changes consequently induce decoherence in the energy basis (note, however, that the wavefunction does not actually collapse). See Ref. for some applications of intrinsic decoherence. This approach is ‘softer’ than the GRW model since the constants of the motion remain constant (no energy production). Similarly, Steane proposed that the measurement problem is solved whenever a measurement-like process causes the phase of a quantum system to become formally undecidable. Indeed, many authors believe that the measurement problem is solved in large part, if not completely solved, by decoherence in one basis or another. However, according to Bell , Holland , and Bub among others (see Leggett, Healey and Elby in Ref. ), decoherence, whether intrinsic or induced by the environment, necessarily requires the introduction of additional (and usually tacit) assumptions about the meaning of the wavefunction to fully account for the definitiveness of the macroscopic world. We now have enough of a background to formulate the dualist interpretation.
## III The dualist interpretation
The idea for an action-reaction principle for the DPW interpretation was suggested by Holland , but not pursued. A more serious attempt was subsequently made by Abolhasani and Golshani . However, in their model, the wavefunction is directly coupled to the probability density of the particle position, and only indirectly to the particle position itself. It is unclear, therefore, how the particle affects the wavefunction. Also, Santos and Escobar proposed a combination of a variant of the SPW interpretation (a ‘beable’ interpretation) with a GRW-type collapse mechanism. In their model, however, the particle (or beable) in no way affects the collapse mechanism. There is, therefore, no action-reaction principle between wave and particle. In our model, though, the evolution of the wavefunction is directly affected by the particle position. In what follows we will limit ourselves to nonrelativistic, closed systems subject to a time-independent Hamiltonian. We will also assume, for simplicity, that the system has a discrete energy spectrum. We begin with the single particle case, then extend the formalism to the many-particle case.
### A Basic principles
We begin by stating the basic principles of the dualist interpretation.
1 - *Ontology*: The dualist interpretation has the same ontology as pilot-wave interpretations. There is a wavefunction $`\psi `$, which is considered objectively real. In addition, there is a particle, considered real, with a well defined trajectory, $`𝐱_p=𝐱_p(t)`$.
2 - *The Wave-to-Particle (W-P) Guidance Condition*: The movement of the particle is described by a stochastic process controlled by the wavefunction. This stochastic process is identical to the one in the SPW interpretation, and is described by the Langevin equation (6).
3 - *The Particle-to-Wave (P-W) Guidance Condition*: The P-W guidance condition is modeled by successive, discontinuous changes in the wavefunction, called *Spontaneous Transition Events* (STE). The STE are Poisson distributed in time with a frequency parameter $`\lambda `$. The new wavefunction is chosen from a set of accessible wavefunctions, defined as the set of wavefunctions that conserve certain quantities of the wavefunction prior to the STE. Here, we postulate that every conserved quantity of the wavefunction, with respect to the Schrödinger equation, is strictly conserved with respect to a STE. We call this the *Maximal Strict Conservation* (MSC) principle. We will discuss possible alternatives in Section IV. Also, the probability of obtaining a particular member of the accessible set after a STE is proportional to the squared amplitude of that normalized wavefunction at the particle position. In that way, the particle guides the transition.
### B The single particle
#### 1 Formulation
The wavefunction of a single particle is expanded into its energy eigenstates:
$$\psi (𝐱,t)=\sum _{i=0}^{K}\sum _{j=0}^{M_i-1}c_{ij}\varphi _{ij}(𝐱)\mathrm{exp}[-iE_it/\hbar ]$$
(17)
where the index $`i`$ numbers the $`K+1`$ energy eigenvalues with a non-zero amplitude, the index $`j`$ numbers the $`M_i`$ degenerate eigenstates with energy $`E_i`$, and $`c_{ij}`$ are the complex coefficients for each eigenstate $`\varphi _{ij}(𝐱)`$. It is convenient to rewrite Eq. (17) as
$$\psi (𝐱,t)=\sum _{i=0}^{K}C_i\mathrm{\Phi }_i(𝐱)\mathrm{exp}[-iE_it/\hbar ]$$
(18)
where we have defined
$$\mathrm{\Phi }_i(𝐱)=C_i^{-1}\sum _{j=0}^{M_i-1}c_{ij}\varphi _{ij}(𝐱)$$
(20)
$$C_i=\left(\sum _{j=0}^{M_i-1}|c_{ij}|^2\right)^{\frac{1}{2}}\mathrm{exp}[i\mathrm{\Theta }_i].$$
(21)
Since the phases $`\mathrm{\Theta }_i`$ are arbitrary, we set $`\mathrm{\Theta }_i=\theta _{i0}`$, where $`\theta _{ij}`$ is the phase of $`c_{ij}`$.
#### 2 Spontaneous transitions
By virtue of the MSC principle, only the phases $`\mathrm{\Theta }_i`$ change as a result of a STE. All the other quantities, $`|C_i|`$ and $`\mathrm{\Phi }_i(𝐱)`$, do not change. The constants of the motion, therefore, stay constant with respect to a transition. In what follows, we will express the wavefunction as a function of these phases, $`\psi =\psi (𝐱,\stackrel{}{\mathrm{\Theta }},t)`$, where $`\stackrel{}{\mathrm{\Theta }}=(0,\mathrm{\Theta }_1,\mathrm{\Theta }_2,\mathrm{},\mathrm{\Theta }_K)`$ and where we keep $`\mathrm{\Theta }_0=0`$ to eliminate an arbitrary overall phase factor.
If a STE occurs at time $`t_0`$, where the initial wavefunction and particle position are $`\psi (𝐱,\stackrel{}{\mathrm{\Theta }},t_0)`$ and $`𝐱_p(t_0)`$, respectively, then the wavefunction undergoes the transition $`\psi (𝐱,\stackrel{}{\mathrm{\Theta }},t_0)\rightarrow \psi (𝐱,\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)`$, where $`\stackrel{}{\mathrm{\Theta }}^{\prime }`$ is the new phase vector. The particle position, however, does not change because of the STE. As mentioned previously, the conditional probability density of the new phase vector, $`f(\stackrel{}{\mathrm{\Theta }}^{\prime }|𝐱_p,t_0)`$, is proportional to the squared amplitude of the (normalized) wavefunction at the particle position,
$$f(\stackrel{}{\mathrm{\Theta }}^{\prime }|𝐱_p,t_0)=\mathrm{\Gamma }^{-1}|\psi (𝐱_p,\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)|^2$$
(22)
where $`\mathrm{\Gamma }=\mathrm{\Gamma }(𝐱_p)`$ is a normalizing factor.
$$\mathrm{\Gamma }(𝐱_p)=\int _0^{2\pi }\cdots \int _0^{2\pi }|\psi (𝐱_p,\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)|^2\prod _{i=1}^{K}\mathrm{d}\mathrm{\Theta }_i^{\prime }$$
(24)
$$\mathrm{\Gamma }(𝐱_p)=(2\pi )^K\sum _{i=0}^{K}|C_i|^2|\mathrm{\Phi }_i(𝐱_p)|^2$$
(25)
The integration over the phase vector in Eq. (24) eliminates the interference terms between energy eigenstates, which leads to Eq. (25) and implies that $`\mathrm{\Gamma }`$ depends only on the particle position.
In Eq. (22), the squared amplitude is no longer seen as the conditional probability density of the particle position for a given wavefunction, but rather it is the conditional probability density of the wavefunction for a given particle position. Born’s statistical postulate has been turned on its head, thereby giving the squared amplitude a dual meaning. In the language of statistical inference, we would say that $`f(\stackrel{}{\mathrm{\Theta }}^{\prime }|𝐱_p,t_0)`$ is proportional to the likelihood that the probability density which caused the observation $`𝐱_p`$ at time $`t_0`$ had the parameter values $`\stackrel{}{\mathrm{\Theta }}^{\prime }`$. In this case, however, Eq. (22) represents a real stochastic process of the wavefunction, conditioned by the particle. Unlike the GRW model, there is no collapse as such, only a random change in the phases which favors wavefunctions that are peaked and centered about the particle, although an expansion of the wavefunction as a result of a STE is not impossible. In other words, the STE is a ‘quantum jump’ from one solution of the Schrödinger equation to another which better represents, on average, the current particle position (while staying within the set of accessible states).
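As a concrete illustration of Eq. (22), the sketch below (not part of the paper's formalism) samples the new phase for a particle in a one-dimensional box with two occupied, non-degenerate energy levels ($`K=1`$). The moduli $`|C_i|`$, the box length and the particle position are arbitrary illustrative choices; rejection sampling is used simply because the phase density is bounded above by $`(|C_0\varphi _1|+|C_1\varphi _2|)^2`$.

```python
# A minimal sketch (not from the paper): sampling the new phase in a spontaneous
# transition event, Eq. (22), for a particle in a 1D box with two occupied energy
# levels (K = 1).  The moduli C0, C1 and the box length L are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
L = 1.0
C0, C1 = np.sqrt(0.6), np.sqrt(0.4)            # fixed moduli |C_i| (conserved by the STE)

def phi(n, x):                                  # box eigenfunctions
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def sample_new_phase(x_p):
    """Draw Theta' with density proportional to |C0 phi_1(x_p) + C1 e^{i Theta'} phi_2(x_p)|^2."""
    a, b = C0 * phi(1, x_p), C1 * phi(2, x_p)
    bound = (abs(a) + abs(b))**2                # rejection-sampling envelope
    while True:
        theta = rng.uniform(0.0, 2.0 * np.pi)
        if rng.uniform(0.0, bound) <= abs(a + b * np.exp(1j * theta))**2:
            return theta

x_p = 0.25
thetas = np.array([sample_new_phase(x_p) for _ in range(20000)])
# The phases are biased (mean of cos is nonzero): the STE favours wavefunctions
# that interfere constructively at the particle position.
print("mean of cos(Theta') :", round(np.cos(thetas).mean(), 3))
```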
Given an initial probability density for the particle position, $`\rho (𝐱_p,t_0)`$, the marginal probability density of the new phase vector is
$$f(\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)=\int f(\stackrel{}{\mathrm{\Theta }}^{\prime }|𝐱_p,t_0)\rho (𝐱_p,t_0)\mathrm{d}^3x_p.$$
(26)
If quantum equilibrium holds prior to the transition, Eq. (26) equals
$$f(\stackrel{}{\mathrm{\Theta }}^{\prime }|\stackrel{}{\mathrm{\Theta }},t_0)=\int f(\stackrel{}{\mathrm{\Theta }}^{\prime }|𝐱_p,t_0)|\psi (𝐱_p,\stackrel{}{\mathrm{\Theta }},t_0)|^2\mathrm{d}^3x_p$$
(27)
which is the transition probability density function for a STE. We can also use Bayes rule to determine the particle position probability density after the STE for a given final wavefunction $`\stackrel{}{\mathrm{\Theta }}^{}`$, $`\rho ^{}(𝐱_p|\stackrel{}{\mathrm{\Theta }}^{},t_0)`$,
$$\rho ^{\prime }(𝐱_p|\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)=\frac{\rho (𝐱_p,t_0)f(\stackrel{}{\mathrm{\Theta }}^{\prime }|𝐱_p,t_0)}{f(\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)}.$$
(28)
Again, if quantum equilibrium obtained prior to the STE, we can derive from Eq. (28) the expression,
$$\rho ^{\prime }(𝐱_p|\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)=\left[\frac{f(\stackrel{}{\mathrm{\Theta }}|𝐱_p,t_0)}{f(\stackrel{}{\mathrm{\Theta }}^{\prime }|\stackrel{}{\mathrm{\Theta }},t_0)}\right]|\psi (𝐱_p,\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)|^2$$
(29)
where we have made use of Eqs. (22) and (26). Clearly, Eq. (29) shows that quantum equilibrium is not necessarily conserved after a transition event. Quantum equilibrium may be a good approximation if the initial wavefunction is much wider than the final one, since the initial wavefunction would be essentially constant over the region where the final wavefunction is appreciable. Such a situation may occur if, in between two consecutive STEs, the wavefunction has enough time to spread out in position space, and if the STE yields a reasonably narrow final wavefunction. Whatever deviations from quantum equilibrium may arise as a result of the STE will eventually vanish due to the stochastic evolution of the particle. Quantum equilibrium would be a good approximation for particle statistics if the timescale for reaching it is much smaller than the average lifetime between transitions, $`\lambda ^{-1}`$.
Furthermore, should the system possess a well-defined energy, $`K=0`$, then the transition events would have no effect on it. In the double-slit experiment, for instance, if we have an initial plane wave with a well-defined energy, then the two branches of the wavefunction that are produced by the slits would have the same energy. The coherence between them would be preserved by a STE. Therefore, even if a transition event were to occur while the particle is travelling between the slits and the screen, it would have no effect on the interference pattern.
Equations (26) to (29) only apply to the case where the initial state is pure. But for an initial mixed state, the results can be quite different. If we have an initial ensemble of states that all belong to the same set of accessible states, but that is completely decoherent (i.e. every component of the phase vector is independent and uniformly distributed between 0 and $`2\pi `$), and assuming quantum equilibrium for every member of the ensemble, then the initial particle position probability density is $`\rho (𝐱_p)=(2\pi )^{-K}\mathrm{\Gamma }(𝐱_p)`$. Placing this result in Eq. (26), the marginal probability density of the new phase vector is $`f(\stackrel{}{\mathrm{\Theta }}^{\prime })=(2\pi )^{-K}`$, which is the probability density for that phase vector in the initial ensemble. Furthermore, placing these results in Eq. (29), it is easily shown that the particle position probability density after the STE for the final wavefunction $`\stackrel{}{\mathrm{\Theta }}^{\prime }`$ is given by $`\rho ^{\prime }(𝐱_p|\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)=|\psi (𝐱_p,\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)|^2`$. Therefore, not only is the probability distribution of the wavefunction for this mixed state conserved after a transition event, but so is the quantum equilibrium for every wavefunction in the ensemble. And since neither the Schrödinger equation nor the W-P guidance condition can alter this joint wave-particle probability distribution, this mixed state is therefore completely stationary. Owing to its features of complete decoherence and quantum equilibrium, we shall call this mixed state the *Decoherent Quantum Equilibrium* (DQE) steady state. It must be emphasized that the term ‘decoherence’ in the dualist interpretation simply means that the relative phases are random. It does not mean that the phases don’t exist, or that they are undecidable, or that a wavefunction collapse took place.
The DQE steady state has two desirable properties. First, since it is a steady state, any initial pure state, in quantum equilibrium or not, will irreversibly evolve towards its corresponding DQE mixed state and will stay in that state. This introduces a kind of time’s arrow in the dualist interpretation. Second, the canonical ensemble of a system in thermal equilibrium can always be decomposed into a combination of DQE steady states. Therefore, any microscopic system that is extracted from this ensemble must be in quantum equilibrium, regardless of any STEs that may have occurred prior to the extraction. This guarantees that the results of orthodox quantum theory are reproduced in experiments like the one by Arndt et al. , where C<sub>60</sub> molecules are sublimated in a 900-1000 K oven and sent through a diffraction grating, producing an interference pattern.
#### 3 General stochastic evolution
An initial pure state will evolve into a mixed state as a result of the spontaneous transition events. Therefore, given an initial pure state, $`\psi (𝐱,\stackrel{}{\mathrm{\Theta }},t_0)`$, and particle position, $`𝐱_p(t_0)`$, the position expectation value of the wavefunction will evolve as,
$$\{\stackrel{}{\mu }(t_0+\mathrm{d}t)\}=(1-\lambda \mathrm{d}t)[\stackrel{}{\mu }(t_0)+\frac{\langle 𝐩\rangle }{m}\mathrm{d}t]+\{\stackrel{}{\mu }(t_0)\}_p\lambda \mathrm{d}t$$
(31)
$$\{\stackrel{}{\mu }(t_0+\mathrm{d}t)\}=\stackrel{}{\mu }(t_0)+\frac{\langle 𝐩\rangle }{m}\mathrm{d}t+[\{\stackrel{}{\mu }(t_0)\}_p-\stackrel{}{\mu }(t_0)]\lambda \mathrm{d}t$$
(32)
where $`\stackrel{}{\mu }=\langle \psi |𝐱|\psi \rangle `$ is the position expectation value of the wavefunction, $`\langle 𝐩\rangle =\langle \psi |𝐩|\psi \rangle `$ is the momentum expectation value of the wavefunction, and the curly brackets indicate an ensemble average in a mixed state. It is important to note that, in this context, $`\stackrel{}{\mu }`$ is the centroid of the wavefunction (i.e. a geometrical point representing the wavefunction), and not the average particle position. Therefore,
$$\{\stackrel{}{\mu }(t)\}_p=\int \langle \psi (𝐱,\stackrel{}{\mathrm{\Theta }}^{\prime },t)|𝐱|\psi (𝐱,\stackrel{}{\mathrm{\Theta }}^{\prime },t)\rangle f(\stackrel{}{\mathrm{\Theta }}^{\prime }|𝐱_p,t)\mathrm{d}^K\mathrm{\Theta }^{\prime }$$
(33)
is the average position of the mixed state created by the STE. Also, we have neglected the term proportional to $`(\mathrm{d}t)^2`$ in Eq. (32). If we assume that the ensemble of wavefunctions created by the STE is symmetrically distributed about the particle position, then $`\{\stackrel{}{\mu }\}_p=𝐱_p`$ and Eq. (32) becomes,
$$\{\stackrel{}{\mu }(t_0+\mathrm{d}t)\}=\stackrel{}{\mu }(t_0)+\frac{\langle 𝐩\rangle }{m}\mathrm{d}t-[\stackrel{}{\mu }(t_0)-𝐱_p(t_0)]\lambda \mathrm{d}t$$
(34)
where the square bracket term attracts, in a sense, the average position of the wavefunctions towards the particle. In this way, we may interpret the influence of the particle as a *pilot-particle* guiding the evolution of the wavefunction, which, in turn, guides the evolution of the particle. Similarly, the variance of the wavefunction, $`\sigma ^2=\langle \psi |(𝐱-\langle 𝐱\rangle )^2|\psi \rangle `$, evolves as,
$$\{\sigma ^2(t_0+\mathrm{d}t)\}=\sigma ^2(t_0)+\chi (t_0)\mathrm{d}t-[\sigma ^2(t_0)-\{\sigma ^2(t_0)\}_p]\lambda \mathrm{d}t$$
(35)
where $`\chi =\mathrm{d}\sigma ^2/\mathrm{d}t`$ is the rate of growth of the variance of the wavefunction described by the Schrödinger equation, and $`\{\sigma ^2\}_p`$ is the average variance of the mixed state created by the STE. The particle, therefore, can be seen to impede the growth of the (average) variance of the wavefunction whenever $`\sigma ^2>\{\sigma ^2\}_p`$.
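The ‘attraction’ expressed by Eq. (34) can be visualized with a toy integration; the sketch below (not from the paper) integrates the ensemble-averaged centroid with the particle position prescribed to move at constant velocity, showing that the centroid tracks the particle with a lag of order $`v/\lambda `$. All parameter values are arbitrary.

```python
# A toy Euler integration (not from the paper) of Eq. (34) for the ensemble-averaged
# centroid, d{mu}/dt = <p>/m - lambda*({mu} - x_p), with <p> = 0 and a prescribed
# particle trajectory x_p(t) = v*t.  In steady state the centroid lags the particle
# by roughly v/lambda.  All parameter values are arbitrary.
lam = 5.0          # STE frequency parameter
v = 1.0            # prescribed particle velocity
p_over_m = 0.0     # packet momentum expectation value divided by m
mu, dt, t = 0.0, 1e-3, 0.0

for _ in range(4000):                    # integrate up to t = 4 >> 1/lam
    x_p = v * t
    mu += (p_over_m - lam * (mu - x_p)) * dt
    t += dt

print("particle at   :", round(v * t, 3))
print("centroid at   :", round(mu, 3))
print("lag ~ v/lambda:", round(v / lam, 3))
```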
### C The many-particle system
#### 1 Formulation
We now extend the dualist interpretation to the case with an arbitrary number of particles, $`N`$. It is convenient, therefore, to introduce the notation, $`\stackrel{}{q}_p=(𝐱_{p1},𝐱_{p2},\mathrm{},𝐱_{pN})`$, where $`\stackrel{}{q}_p`$ is the vector of particle positions in the configuration space $`\stackrel{}{q}=(𝐱_1,𝐱_2,\mathrm{},𝐱_N)`$. The wavefunction representation in Eq. (18) has the straightforward extension:
$$\psi (\stackrel{}{q},t)=\sum _{i=0}^{K}C_i\mathrm{\Phi }_i(\stackrel{}{q})\mathrm{exp}[-iE_it/\hbar ]$$
(36)
where the $`\mathrm{\Phi }_i(\stackrel{}{q})`$ are coherent superpositions of degenerate energy eigenstates for the many-particle system. Here, we assume that the wavefunction is non-separable, which means that at least one of the following two conditions must apply. First, it is impossible to express the wavefunction as a product of two wavefunctions: $`\psi (\stackrel{}{q},t)\neq \psi (\stackrel{}{q}_n,t)\psi (\stackrel{}{q}_{(N-n)},t)`$ for all time, where $`\stackrel{}{q}_n`$ is the configuration space vector of a subset of $`n`$ particles and $`\stackrel{}{q}_{(N-n)}`$ is the complement configuration space vector of the remaining $`N-n`$ particles. In other words, the particles must be entangled. Second, the particles must interact with one another. A separable wavefunction, therefore, is one where the wavefunction may be factorized, for all time, into two or more wavefunctions of subsets of particles where the particles in one subset do not interact with the particles in another subset.
From Eq. (36), it follows that the extension of Eq. (22) to a many-particle system is,
$$f(\stackrel{}{\mathrm{\Theta }}^{\prime }|\stackrel{}{q}_p,t_0)=\mathrm{\Gamma }^{-1}|\psi (\stackrel{}{q}_p,\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)|^2$$
(37)
where $`\mathrm{\Gamma }=\mathrm{\Gamma }(\stackrel{}{q}_p)`$ is a normalizing factor similar to the one used in the single-particle case, and is equal to,
$$\mathrm{\Gamma }(\stackrel{}{q}_p)=(2\pi )^K\sum _{i=0}^{K}|C_i|^2|\mathrm{\Phi }_i(\stackrel{}{q}_p)|^2.$$
(38)
We postulate that the STEs described by Eq. (37) are Poisson distributed in time with a frequency parameter $`N\lambda `$. This is because any one particle may ‘trigger’ a STE with a frequency $`\lambda `$. But since all the particles are entangled, they must all condition the transition event. For any given STE, therefore, every particle is equally likely to have triggered it, independently of every other particle. The frequencies must then add to $`N\lambda `$. Consequently, if the wavefunction is separable into the subsets $`\stackrel{}{q}_{pn}=(𝐱_{p1},\dots ,𝐱_{pn})`$ and $`\stackrel{}{q}_{p(N-n)}=(𝐱_{p(N-n)},\dots ,𝐱_{pN})`$, for instance, then the phase vector of the whole wavefunction $`\stackrel{}{\mathrm{\Theta }}`$ can be expressed as two phase vectors: $`\stackrel{}{\mathrm{\Theta }}_n`$ which is guided only by $`\stackrel{}{q}_{pn}`$, and $`\stackrel{}{\mathrm{\Theta }}_{N-n}`$ which is guided only by $`\stackrel{}{q}_{p(N-n)}`$. The wavefunction represented by $`\stackrel{}{\mathrm{\Theta }}_n`$ undergoes a transition with a frequency $`n\lambda `$, and the $`\stackrel{}{\mathrm{\Theta }}_{N-n}`$ wavefunction with a frequency $`(N-n)\lambda `$, independently of the transitions for $`\stackrel{}{\mathrm{\Theta }}_n`$.
#### 2 The macroscopic limit
Now we will examine a simplified account of the motion of a body with a large number of particles, $`N\approx 10^{23}`$, held together by an interparticle potential, $`V_{nm}(|𝐱_n-𝐱_m|)`$. We assume that the wavefunction of this body can be factorized as,
$$\psi (\stackrel{}{q},t)=\mathrm{\Psi }(𝐑,t)\zeta (\mathrm{\Delta }\stackrel{}{q},t)$$
(39)
where $`𝐑=\sum _{n=1}^Nm_n𝐱_n/(\sum _{n=1}^Nm_n)`$ is the center-of-mass coordinate of the body and where $`\mathrm{\Delta }\stackrel{}{q}`$ is the vector of all interparticle distances, $`𝐫_{nm}=𝐱_n-𝐱_m`$, for $`n<m`$. We also assume that the wavefunction of the internal variables of the body is stationary: $`\zeta (\mathrm{\Delta }\stackrel{}{q},t)=\zeta (\mathrm{\Delta }\stackrel{}{q})\mathrm{exp}[-iE_{\mathrm{int}}t/\hbar ]`$, where $`E_{\mathrm{int}}`$ is the internal energy of the body. The center-of-mass wavefunction, however, is not stationary,
$$\mathrm{\Psi }(𝐑,t)=\sum _{i=0}^{K}C_i\mathrm{\Phi }_i(𝐑)\mathrm{exp}[-iE_it/\hbar ].$$
(40)
This allows us to attribute a phase vector, $`\stackrel{}{\mathrm{\Theta }}=(\mathrm{\Theta }_1,\dots ,\mathrm{\Theta }_K)`$, to the center-of-mass wavefunction, $`\mathrm{\Psi }(𝐑,t)\rightarrow \mathrm{\Psi }(𝐑,\stackrel{}{\mathrm{\Theta }},t)`$, but not to the internal wavefunction (since $`K=0`$ in that case). Placing the wavefunction described in Eq. (39) in the STE equation (37), we obtain
$$f(\stackrel{}{\mathrm{\Theta }}^{\prime }|𝐑_p,t_0)=\mathrm{\Gamma }^{-1}|\mathrm{\Psi }(𝐑_p,\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)|^2$$
(41)
where $`𝐑_p=\sum _{n=1}^Nm_n𝐱_{pn}/(\sum _{n=1}^Nm_n)`$ is the center-of-mass of the particle positions, and
$$\mathrm{\Gamma }(𝐑_p)=(2\pi )^K\sum _{i=0}^{K}|C_i|^2|\mathrm{\Phi }_i(𝐑_p)|^2$$
(42)
is the normalizing factor. The internal wavefunction does not appear in Eq. (41) because it is independent of $`\stackrel{}{\mathrm{\Theta }}^{}`$, and so would cancel itself out upon renormalization. The STEs, therefore, are guided solely by the center-of-mass $`𝐑_p`$, and since all $`N`$ particles are entangled, the STEs occur with a frequency parameter $`N\lambda `$. Here, we have managed to reproduce one of the results of the GRW theory, namely, that for a similar system the hits only affect the center-of-mass coordinate and occur with a frequency $`N\lambda `$.
For a macroscopic object, then, the time between transition events can be very much shorter than for a microscopic object. The expectation value equations for a single particle, (34) and (35), can now be extended to the many-particle case,
$$\{\stackrel{}{\mu }(t_0+\mathrm{d}t)\}=\stackrel{}{\mu }(t_0)+\frac{\langle 𝐏\rangle }{M}\mathrm{d}t-[\stackrel{}{\mu }(t_0)-𝐑_p(t_0)](N\lambda )\mathrm{d}t$$
(44)
$$\{\sigma ^2(t_0+\mathrm{d}t)\}=\sigma ^2(t_0)+\chi (t_0)\mathrm{d}t-[\sigma ^2(t_0)-\{\sigma ^2(t_0)\}_{Rp}](N\lambda )\mathrm{d}t$$
(45)
where $`\stackrel{}{\mu }=\langle \mathrm{\Psi }|𝐑|\mathrm{\Psi }\rangle `$, $`\langle 𝐏\rangle =\langle \mathrm{\Psi }|𝐏|\mathrm{\Psi }\rangle `$, $`𝐏`$ being the center-of-mass momentum, $`M=\sum _{n=1}^Nm_n`$ is the total mass, $`\sigma ^2=\langle \mathrm{\Psi }|(𝐑-\langle 𝐑\rangle )^2|\mathrm{\Psi }\rangle `$, and where $`\{\sigma ^2\}_{Rp}`$ is analogous to $`\{\sigma ^2\}_p`$ in Eq. (35). Because the frequency parameter for the center-of-mass STEs of a macroscopic object is much greater than that of a microscopic object, the influence of the center-of-mass on its wavefunction is proportionately greater. Therefore, the influence of the center-of-mass on the position and variance expectation values of its wavefunction is greatly increased.
The Langevin equation for the center-of-mass, $`𝐑_p`$, is similar to the single particle equation (6).
$$𝐑_p(t+\mathrm{d}t)=𝐑_p(t)+𝐛(𝐑_p,t)\mathrm{d}t+D^{1/2}𝐰(t)(\mathrm{d}t)^{1/2}$$
(46)
where $`D=\hbar /M`$. The drift function is given by
$$𝐛=\frac{\nabla _R𝒮}{M}+\frac{\hbar }{M}\frac{\nabla _R\mathcal{R}}{\mathcal{R}}$$
(47)
where we set $`\mathrm{\Psi }=\mathcal{R}\mathrm{exp}[i𝒮/\hbar ]`$. In order to analyze the macroscopic limit, we look at the center-of-mass equations for a body that starts with $`N=1`$, and we progressively add particles to the body until $`N\gg 1`$. We will assume, for simplicity, that all the particles have an equal mass, so that $`M=Nm`$. It is convenient to introduce a quantum equilibrium timescale $`\tau `$, such that $`\rho (𝐑_p,t_0+\tau |𝐑_{p0},t_0)\approx P(𝐑_p,t_0+\tau )`$, which scales as,
$$\tau (N)=\frac{L^2(N)}{D}=\frac{Nm}{\hbar }L^2(N)$$
(48)
where $`L(N)`$ is the characteristic length scale of the squared amplitude of the wavefunction. Its exact value is not important, all that matters is how it changes with increasing $`N`$. Note that the approach towards quantum equilibrium depends not only on the stochastic diffusion, controlled by $`D`$, but also on the mixing by the wavefunction, controlled by the drift function $`𝐛`$. However, we will ignore mixing effects and assume that diffusion controls the rate at which quantum equilibrium is approached. If we assume the particles are statistically independent and that their respective wavefunctions are separable, then the length scale should change like the standard deviation of the average of $`N`$ independent random variables, $`L(N)=N^{-1/2}L(1)`$. This would mean that the timescale scales as $`\tau (N)=mL^2(1)/\hbar =\tau (1)`$, i.e. that $`\tau `$ does not change with $`N`$. If we assume that $`L(1)\approx 10^{-5}\mathrm{cm}`$, then for a proton, $`\tau (1)\approx 10^{-7}\mathrm{s}`$, while for an electron $`\tau (1)\approx 10^{-11}\mathrm{s}`$. The particles are not really independent, of course, since they are bound together to form a body. Nevertheless, as we will see, the conclusions drawn from this section do not depend critically on the scaling for $`L`$.
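A quick back-of-the-envelope check (not in the paper) of these numbers, using $`\tau (1)=mL^2(1)/\hbar `$ with $`L(1)=10^{-5}\mathrm{cm}`$:

```python
# A quick numerical check (not from the paper) of the quantum-equilibrium timescale
# tau(1) = m * L(1)^2 / hbar quoted above, using L(1) = 1e-5 cm = 1e-7 m.
hbar = 1.054571817e-34        # J s
L1 = 1e-7                     # m
for name, m in [("proton", 1.672621924e-27), ("electron", 9.1093837015e-31)]:
    print(f"{name}: tau(1) ~ {m * L1**2 / hbar:.1e} s")
# proton ~ 1.6e-07 s and electron ~ 8.6e-11 s, consistent with the quoted estimates.
```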
For the first time, we will impose a constraint on the value of the frequency parameter, $`\lambda `$. We require that $`\lambda \tau (1)\ll 1`$, meaning that microscopic objects would have ample time to reach quantum equilibrium before the next STE which might break it. If we adopt the same value for the frequency parameter as in the GRW model, then this condition is certainly satisfied, where $`\lambda \tau (1)\approx 10^{-23}`$ for a proton. For a macroscopic object with $`N\approx 10^{23}`$, $`N\lambda \tau (1)\approx 1`$, meaning that the time to reach quantum equilibrium is, on average, the same as the time between successive STEs. However, it is conceivable to make $`N`$ large enough so that $`N\lambda \tau (1)\gg 1`$. In that case, the STEs occur too frequently for quantum equilibrium to be valid in general. In that limit, the conditional particle position variance, $`\sigma ^2(t_0+\mathrm{\Delta }t|𝐑_{p0},t_0)`$ (where $`t_0`$ is the time of the previous STE), has not had enough time to spread appreciably before the following STE,
$$\sigma ^2(t_0+(N\lambda )^{-1}|𝐑_{p0},t_0)\approx \hbar /(N^2m\lambda )\ll L^2(1)/N.$$
(49)
If we further assume that the drift function does not vary appreciably between two STEs, and over the particle displacement, then we can say that the conditional particle position average is approximately,
$$\{𝐑_p\}(t_0+(N\lambda )^{-1}|𝐑_{p0},\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)\approx 𝐑_{p0}(t_0)+𝐛(𝐑_{p0},\stackrel{}{\mathrm{\Theta }}^{\prime },t_0)(N\lambda )^{-1}$$
(50)
where we have placed the wavefunction phase vector $`\stackrel{}{\mathrm{\Theta }}^{}`$ in the arguments of the conditional average and the drift function to explicitly show their dependence on the state of the wavefunction. However, if we only know the initial center-of-mass position $`𝐑_{p0}(t_0)`$, and we do not know *a priori* which phase vector was chosen by the STE, then we must take the average with respect to the conditional probability density $`f(\stackrel{}{\mathrm{\Theta }}^{}|𝐑_{p0},t_0)`$.
Before we can find that average, it is convenient to write the drift function as,
$$𝐛=\frac{\hbar }{\sqrt{2}M|\mathrm{\Psi }|^2}(e^{-i(\pi /4)}\mathrm{\Psi }^{*}\nabla _R\mathrm{\Psi }+e^{i(\pi /4)}\mathrm{\Psi }\nabla _R\mathrm{\Psi }^{*}).$$
(51)
Combining Eqs. (40) and (41) with Eq. (51), and averaging over $`\stackrel{}{\mathrm{\Theta }}^{}`$, we obtain,
$$\{𝐛(𝐑_{p0},t_0)\}_{Rp}=\frac{1}{\mathrm{\Gamma }(𝐑_{p0})}\sum _{i=0}^{K}|C_i|^2|\mathrm{\Phi }_i(𝐑_{p0})|^2𝐛_i(𝐑_{p0})$$
(52)
where $`𝐛_i`$ is the drift function corresponding to the stationary state $`\mathrm{\Phi }_i`$,
$$𝐛_i=\frac{\hbar }{\sqrt{2}M|\mathrm{\Phi }_i|^2}(e^{-i(\pi /4)}\mathrm{\Phi }_i^{*}\nabla _R\mathrm{\Phi }_i+e^{i(\pi /4)}\mathrm{\Phi }_i\nabla _R\mathrm{\Phi }_i^{*}).$$
(53)
We can gain more insight into these equations by considering the simple case of a free body in one dimension. Specifically, we consider a Gaussian wavepacket centered at $`R=0`$, with a standard deviation $`\sigma `$ at $`t=0`$, and with a mean velocity $`U`$,
$$\mathrm{\Psi }(R,t)=\left[\frac{\sigma }{\sqrt{2\pi ^3}\hbar ^2}\right]^{\frac{1}{2}}\int _0^{\mathrm{\infty }}\left[e^{-\sigma ^2(P-MU)^2/\hbar ^2+iPR/\hbar }+e^{-\sigma ^2(P+MU)^2/\hbar ^2-iPR/\hbar }\right]e^{-iP^2t/(2M\hbar )}dP$$
(54)
where the square brackets in the integral include the two degenerate eigenstates for each energy level. Instead of a phase vector, we now have a continuous phase function $`\mathrm{\Theta }(P)`$, corresponding to the continuous energy spectrum. By adapting Eq. (52) for the continuous energy spectrum case, we obtain
$$\{b(R_p)\}_{Rp}=\frac{U-[\hbar R_p/(2M\sigma ^2)]e^{-2(\sigma MU/\hbar )^2-R_p^2/(2\sigma ^2)}}{1+e^{-2(\sigma MU/\hbar )^2-R_p^2/(2\sigma ^2)}}.$$
(55)
The average drift in Eq. (55), like the average drift described in Eq. (52), is time-independent. This is because it has been evaluated from a spectrum of stationary states. Also, as $`|R_p/\sigma |\rightarrow \mathrm{\infty }`$, the average drift approaches the mean velocity, $`\{b(R_p)\}_{Rp}\rightarrow U`$. In the region $`|R_p/\sigma |\approx 1`$, the average drift may deviate from the mean velocity by as much as $`|\{b(R_p)\}_{Rp}-U|\approx \hbar /(2M\sigma )`$. The deviation in the region $`|R_p/\sigma |\approx 1`$ is a consequence of the preservation of the relative phases between energy eigenfunctions in a STE. As we have already assumed, the standard deviation of the wavefunction scales as $`\sigma \propto N^{-1/2}`$, which means that the region over which the average drift deviates from the mean velocity shrinks with increasing $`N`$. Also, since $`M=Nm`$, the deviation scales as $`N^{-1/2}`$. Consequently, for very large bodies, the average drift is equal to $`U`$ to a very good approximation. The average of Eq. (50) with respect to a STE, and in the limit of very large $`N`$, tends to
$$\{R_p\}(t_0+(N\lambda )^{-1}|R_{p0},t_0)\approx R_{p0}(t_0)+U(N\lambda )^{-1}.$$
(56)
Physically, Eq. (56) states that for very large bodies, the particles, represented here by the center-of-mass coordinate, dominate the evolution of the center-of-mass wavefunction in such a way that the osmotic velocity of the drift function averages out over many STEs. Likewise, the particle velocity $`_R𝒮/M`$ averages to the mean velocity $`U`$, which is a constant and does not change as a result of the STEs. The center-of-mass not only ‘drags’ the wavefunction along, as in Eq. (44), but, on average, also assumes the overall velocity of the wavefunction.
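The limiting behaviour quoted above is easy to verify numerically; the sketch below (not from the paper) evaluates the average drift of Eq. (55) on a grid, with $`\mathrm{}=M=\sigma =1`$ and $`U=0.3`$ as purely illustrative values.

```python
# A minimal sketch (not from the paper): evaluating the phase-averaged drift of
# Eq. (55) and checking its limits.  hbar = M = sigma = 1 and U = 0.3 are
# illustrative values only.
import numpy as np

hbar = M = sigma = 1.0
U = 0.3

def avg_drift(Rp):
    E = np.exp(-2.0 * (sigma * M * U / hbar)**2 - Rp**2 / (2.0 * sigma**2))
    return (U - (hbar * Rp / (2.0 * M * sigma**2)) * E) / (1.0 + E)

Rp = np.linspace(-10.0, 10.0, 2001)
b = avg_drift(Rp)
print("drift far from the packet:", round(float(avg_drift(10.0)), 4))   # tends to U
print("max deviation from U     :", round(float(np.max(np.abs(b - U))), 4))
# The maximum deviation occurs for |Rp/sigma| of order 1 and is of order hbar/(2*M*sigma).
```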
There remains the question of the variance of the center-of-mass position at the time $`t_0+(N\lambda )^{-1}`$. This variance is the sum of a diffusion component, $`D/(N\lambda )`$, and a component due to the variance of the drift function, $`\sigma _b^2/(N\lambda )`$. Given that the diffusion constant $`D`$ scales as $`N^{-1}`$, we may neglect this component for very large bodies. We will not attempt to evaluate $`\sigma _b^2`$ here, but it is reasonable to assume that it is a function of the velocity uncertainty of the wavefunction, a constant for this system, and weakly dependent on $`R_p/\sigma `$, in the same way and for the same reason as for the average drift.
Here we have shown, in a limited way, how in the macroscopic limit the particle aspect dominates the dynamics of the wave-particle system, while assuming, on average, certain characteristics of the wavefunction. The principal assumption used in this development is the scaling law of the characteristic length of the squared amplitude of the wavefunction, $`L\propto N^{-1/2}`$. This assumption, while arbitrary, is not crucial. If $`L`$ were constant (which may be the case if the particles were perfectly correlated with each other), or increased with respect to the particle number, then the limit $`N\lambda \tau (N)\gg 1`$ would be attained faster with respect to $`N`$ than in the case presented here. Also, if $`L`$ were constant, for instance, then the region over which the average drift in Eq. (56) deviates from $`U`$ would also be constant, but the magnitude of the deviation would scale as $`N^{-1}`$. If, on the other hand, $`L`$ decreased with increasing $`N`$ such that $`L\propto N^{-\chi }`$ and $`\chi >1/2`$, then the macroscopic limit may never be attained. However, these scaling laws only make sense in the context of a specific preparation procedure for the macroscopic body. It is this procedure that determines the change in $`L`$ as new particles are incorporated into the body. It is important to note that, while limited, this demonstration shows that in the macroscopic limit the dualist interpretation provides an approximately classical trajectory for a closed system, without the need for environmental decoherence. In this sense, macroscopic bodies are self-decohering.
## IV Discussion
### A Comparison of interpretations
Bell, in Ref. , stated that the resolution of the measurement problem of orthodox quantum mechanics would come *either* from a Bohm-type pilot-wave interpretation, *or* a GRW-type wavefunction collapse interpretation. We have attempted, however, to formulate an interpretation where the *or* in that statement is replaced with an *and*, although we don’t have, strictly speaking, a GRW-type collapse mechanism. Rather, we have a stochastic process where the wavefunction undergoes random transitions that are conditioned, or guided, by the particle.
At first sight, the dual ontology coupled with the dual dynamics of the wavefunction may seem unduly complicated. Proponents of pilot-wave interpretations may question the necessity of introducing a stochastic wavefunction process in addition to the deterministic Schrödinger equation. After all, the introduction of the particle already solves the measurement problem without the need for an additional wavefunction process. Likewise, proponents of an explicit collapse interpretation might wonder why the introduction of a particle is needed, since the explicit collapse mechanism already accounts for the determinateness of the macroscopic world.
In response to the first objection, we would like to point out that in the dualist interpretation, the wave and the particle are on a more equal footing than in the pilot-wave interpretations. Here, the particle is an active participant in the evolution of the wave-particle system. Indeed, a dual guidance condition is a natural extension of the pilot-wave interpretation: since both wave and particle are real, and the wave guides the particle, then the particle should also guide the wave. Consequently, the particle cannot be dismissed as an *ad hoc* device introduced into the mathematical formalism of quantum mechanics for the sole purpose of getting rid of the measurement problem. Furthermore, the stochastic action of the particle on the wavefunction leads to an *objective* decoherence process necessary to eliminate empty wave effects and to ensure the emergence of classical trajectories in the macroscopic limit. The term ‘objective’ emphasizes the fact that the particle-guided STEs introduce an irreducible randomization of relative phases in the energy basis. The decoherence is not a consequence of our ignorance of a complicated but deterministic external environment acting on the system. Indeed, the model described here refers to closed systems, which eventually become completely decoherent in the energy basis on their own. Environmental decoherence not only still applies to open systems, but now possesses a stronger theoretical foundation within the dualist interpretation.
The role of the wavefunction can be seen, therefore, to generate a quantum equilibrium ensemble for the particle, while the particle generates an energy decoherent ensemble for the wavefunction, together they tend towards a DQE steady state. Degenerate energy eigenstates, however, retain their coherence in the energy decoherent ensemble generated by the particle in a closed system. On the other hand, it is reasonable to assume that for open systems, the combination of particle-guided STEs and the interaction with the environment would generate a completely decoherent wavefunction ensemble for the subsystem under consideration. The interaction with the environment would break the degeneracy of the closed-system eigenstates, and the STEs would, over time, generate a genuinely random wavefunction ensemble. In that sense, the action of the particles on the wavefunction provides a link between quantum mechanics and thermodynamics by generating a wavefunction ensemble compatible with quantum statistical mechanics. Indeed, Kobayashi describes thermal equilibrium of a closed system as the result of a relative-phase interaction. The relative-phase interaction is simply postulated, however. While Dugić claims that such an interaction can only happen for open systems, in the dualist interpretation the randomization of relative-phases is a consequence of the P-W guidance condition. The thermodynamic behaviour of the dualist interpretation has no analogue in the pilot-wave interpretations, whether deterministic or stochastic.
To the second objection, we would respond first by saying that the dualist interpretation resolves the question of the origin of the non-unitary stochastic process of the wavefunction. In the GRW model, the discrete hits are simply postulated, while in the continuous versions of the EWC interpretation , a random field is assumed to exist which interacts with the wavefunction in such a way as to induce a collapse. In the dualist interpretation, the origin of the non-unitary evolution is attributed to the particle. In that sense, the dual ontology and the dual guidance condition complement one another. Furthermore, the GRW model requires that we determine three new physical constants, which must be carefully chosen so that macroscopic objects collapse fast enough (a requirement of its monist ontology) while avoiding excessive energy production. The CSL model, on the other hand, demands that we accept the existence of two different types of realities as a means of overcoming the tail problem. The dualist interpretation, however, is basically a no-collapse interpretation, since objects, microscopic as well as macroscopic, have definite positions at all times. Consequently, the tail problem simply does not occur, while equilibrium steady states are allowed to exist. Finally, the concepts of wave and particle are more intuitive than those of ‘objective’ and ‘projective’ realities.
### B Experiments and applications
There is every reason to think that the experimental results of orthodox quantum theory are reproduced by the dualist interpretation, particularly with respect to microscopic systems comprising a limited number of particles. This is partly because of the state preparation prior to the experiment, and partly because of the very short duration of the experiment $`T`$ with respect to the inverse of the frequency parameter, $`\lambda ^{-1}\approx 10^{16}\mathrm{s}`$. In an experiment like the one by Arndt et al., large molecules, made up of $`N\approx 10^3`$ particles each, originate from a source in thermal equilibrium, ensuring that they are initially in quantum equilibrium. The time-of-flight from the source to the detector is of the order $`T\approx 10\mathrm{ms}`$. The probability that a STE occurs between the source and the detector is $`N\lambda T\approx 10^{-15}`$. Therefore, about one large molecule in $`10^{15}`$ may not exhibit the proper quantum statistics, a statistically insignificant effect.
This is not to say that the dualist interpretation can have no experimental consequences. The irreversible evolution of systems towards a DQE steady state may provide insight into non-equilibrium thermodynamics, which may also lead to the experimental verification of the dualist interpretation. The program would consist of: (*i*) developing the dualist interpretation in more detail than what was presented here, (*ii*) finding its consequences (if any) for the non-equilibrium thermodynamics of bulk matter, and finally (*iii*) comparing these results to the observed properties of such systems. While it is too early to tell if such an experimental program would be feasible, it at least has the advantage of focusing on table-top experiments on bulk matter rather than on the possible detection of the very minute amounts of energy produced by the GRW collapse mechanism .
### C Speculations and future work
We must remain open to the possibility that $`\lambda `$ is not a constant at all, but may vary with particle type, particle mass, or may even be a function of the constants of the wavefunction, $`\lambda =\lambda (|C_0|,|C_1|,\mathrm{},|C_K|)`$. One interesting possibility would be that the frequency parameter is proportional to the energy spread of the wavefunction,
$$\lambda \propto \mathrm{\Delta }E/\hbar $$
(57)
where $`\mathrm{\Delta }E`$ is the standard deviation of the energy of the wavefunction. Not only would Eq. (57) give new meaning to the energy-time uncertainty relation, but it would also imply that energy eigenstates ($`\mathrm{\Delta }E=0`$) never undergo STEs ($`\lambda ^{-1}=\mathrm{\infty }`$). Such a result would be logical: since STEs have no effect on energy eigenstates, we might as well eliminate them altogether for such states.
The particular version of the dualist interpretation presented here is tentative in many respects. First and foremost, the stochastic process of discrete transition events should be replaced by a continuous stochastic process on the wavefunction guided by the particle. This would allow for a greater flexibility than the separable/non-separable criteria used to determine which subset of particles may evolve independently of another subset of particles for the many-particle case. The phase vector may be subject to a Langevin equation where the drift function would explicitly depend on the particle positions. The drift function for the phase vector of a given subsystem, therefore, would depend more or less strongly on the particle positions of an external environment depending on the degree of interaction or entanglement of the subsystem with its environment. Indeed, extending the dualist interpretation to include open systems or systems with a time-dependent Hamiltonian would be as important as developing a continuous stochastic process for the wavefunction. It also goes without saying that a dualist treatment of quantum fields, relativistic quantum mechanics, spin, to name a few, must be developed. Above all, a fully comprehensive theory of the wave-particle interaction must be formulated. In particular, is another conservation principle possible instead of the MSC principle? Might the constants of the motion be allowed to randomly fluctuate as a result of a STE, but be conserved on average. In other words, can we replace the strict conservation with a statistical one? This would certainly be consistent with the stochastic approach developed here. Alternatively, must all the constants of the motion of the Schrödinger equation be conserved? Perhaps the STEs only conserve those quantities that are constant for the equivalent classical system (total momentum, total energy, and so on). Such a *Classical Strict Conservation* (CSC) principle would more readily ensure the proper classical trajectories in the macroscopic limit than the MSC principle. The CSC principle also means that entanglements between energy eigenstates would decay over time, independently of any environmental decoherence. But due to the very low frequency parameter, that decay should not alter experimental results on microscopic systems.
## V Conclusions
We have presented a version of quantum mechanics in which both the particle and the wavefunction are assumed to be objectively real, and in which each guides the evolution of the other. The wavefunction guides the particle according to the stochastic pilot-wave theory. The particle guides the wavefunction by means of discrete spontaneous transitions that are Poisson distributed in time. The transitions are assumed to respect the constants of the motion of the Schrödinger equation. Consequently, only the relative phases between the stationary states that make up the non-stationary wavefunction may change as a result of the transitions. In this way, an action-reaction principle is established between wave and particle. We have shown that for microscopic objects the transitions occur so infrequently as to make the dualist interpretation indistinguishable from the orthodox interpretation. For macroscopic objects, however, the transitions occur so frequently as to cause a rapid decoherence of the wavefunction in the energy basis. For a free macroscopic body, we have shown that this decoherence causes the emergence of an average classical motion for the center-of-mass of the body.
On a conceptual level, we have argued that the dual ontology and the dual guidance condition complement one another. The wavefunction and the particle position are now equally important, as they are both necessary to completely specify the stochastic evolution of the wave-particle system. The dual ontology provides a clearer account of the determinateness of the macroscopic world and quantum measurements than that given by explicit wavefunction collapse models. The dual guidance condition creates the genuine decoherence necessary to eliminate empty wave effects and to reproduce classical trajectories in the macroscopic limit. This kind of decoherence is problematic in pilot-wave theories since decoherence can only be an expression of ignorance of the initial conditions of the wavefunction and is therefore not real. The dualist interpretation, therefore, imposes a kind of symmetry between the wavefunction and the particle. Furthermore, since the P-W guidance condition causes a non-stationary pure state to become a steady mixed one, it is argued that the dualist interpretation can exhibit thermodynamic behaviour. While much work still remains, we nevertheless conclude that the dualist interpretation not only avoids the problems of other interpretations, but can lead to the rigorous unification of classical mechanics, quantum mechanics and thermodynamics.
# Sandpile Models of Self-Organized Criticality
## Abstract
Self-Organized Criticality is the emergence of long-ranged spatio-temporal correlations in non-equilibrium steady states of slowly driven systems without fine tuning of any control parameter. Sandpiles were proposed as prototypical examples of self-organized criticality. However, only some of the laboratory experiments looking for the evidence of criticality in sandpiles have reported a positive outcome. On the other hand a large number of theoretical models have been constructed that do show the existence of such a critical state. We discuss here some of the theoretical models as well as some experiments.
The concept of Self-Organized Criticality (SOC) was introduced by Bak, Tang and Wiesenfeld (BTW) in 1987 . It says that there is a certain class of systems in nature whose members become critical under their own dynamical evolutions. An external agency drives the system by injecting some mass (in other examples, it could be the slope, energy or even local voids) into it. This starts a transport process within the system: Whenever the mass at some local region becomes too large, it is distributed to the neighbourhood by using some local relaxation rules. Globally, mass is transported by many such successive local relaxation events. In the language of sandpiles, these together constitute a burst of activity called an avalanche. If we start with an uncritical state, most of the avalanches are initially small, but the range of sizes of avalanches grows with time. After a long time, the system arrives at a critical state, in which the avalanches extend over all length and time scales. Customarily, critical states have measure zero in the phase space. However, with self-organizing dynamics, the system finds these states in polynomial time, irrespective of the initial state .
BTW used the example of a sandpile to illustrate their ideas about SOC. If a sandpile is formed on a horizontal circular base with any arbitrary initial distribution of sand grains, a sandpile of fixed conical shape (steady state) is formed by slowly adding sand grains one after another (external drive). The surface of the sandpile in the steady state on the average makes a constant angle known as the angle of repose, with the horizontal plane. Addition of each sand grain results in some activity on the surface of the pile: an avalanche of sand mass follows, which propagates on the surface of the sandpile. Avalanches are of many different sizes and BTW argued that they would have a power law distribution in the steady state.
There are also some other naturally occurring phenomena which are considered to be examples of SOC. Slow creeping of tectonic plates against each other results in intermittent bursts of stress release during earthquakes. The energy released is known to follow power law distributions as described by the well known Gutenberg-Richter Law . The phenomenon of earthquakes is being studied using SOC models . River networks have been found to have fractal properties. Water flow causes erosion in river beds, which in turn changes the flow distribution in the network. It has been argued that the evolution of river patterns is a self-organized dynamical process . Propagation of forest fires and biological evolution processes have also been suggested to be examples of SOC.
Laboratory experiments, however, have not always found evidence of criticality in sandpiles. In the first experiment, the granular material was kept in a semicircular drum which was slowly rotated about the horizontal axis, thus slowly tilting the free surface of the pile. Grains fell vertically downward and were allowed to pass through the plates of a capacitor. Power spectrum analysis of the time series of the fluctuating capacitance, however, showed a broad peak, contrary to the power law decay expected from SOC theory .
In a second experiment, sand was slowly dropped on to a horizontal circular disc, to form a conical pile in the steady state. On further addition of sand, avalanches were created on the surface of the pile, and the outflow statistics was recorded. The size of the avalanche was measured by the amount of sand mass that dropped out of the system. It was observed that the avalanche size distribution obeys a scaling behaviour for small piles. For large piles, however, scaling did not work very well. It was suggested that SOC behavior is seen only for small sizes, and very large systems would not show SOC .
Another experiment used a pile of rice between two vertical glass plates separated by a small gap. Rice grains were slowly dropped on to the pile. Due to the anisotropy of grains, various packing configurations were observed. In the steady state, avalanches of moving rice grains refreshed the surface repeatedly. SOC behaviour was observed for grains of large aspect ratio, but not for the less elongated grains .
Theoretically, however, a large number of models have been proposed and studied. Most of these models study the system using cellular automata where discrete, as well as continuous, variables are used for the heights of sand columns. Among them, the Abelian Sandpile model is most popular . Other models of self organized criticality have been studied but will not be discussed here. These include the Zheng model which has modified rules for sandpile evolution , a model for Abelian distributed processors and other stochastic rule models , the Eulerian Walkers model and the Takayasu aggregation model .
In the Abelian sandpile model, we associate a non-negative integer variable $`h`$ representing the height of the ‘sand column’ with every lattice site on a $`d`$-dimensional lattice (in general on any connected graph). One often starts with an arbitrary initial distribution of heights. Grains are added one at a time at randomly selected sites $`𝒪`$: $`h_𝒪\to h_𝒪+1`$. The sand column at any arbitrary site $`i`$ becomes unstable when $`h_i`$ exceeds a previously selected threshold value $`h_c`$ for the stability. Without loss of generality, one usually chooses $`h_c=2d-1`$. An unstable sand column always topples. In a toppling, the height is reduced as $`h_i\to h_i-2d`$ and all the $`2d`$ neighbouring sites $`\{j\}`$ gain a unit sand grain each: $`h_j\to h_j+1`$. This toppling may make some of the neighbouring sites unstable. Consequently, these sites will then topple, possibly making further neighbours unstable. In this way a cascade of topplings propagates, which finally terminates when all sites in the system become stable (Fig. 1). One waits until this avalanche stops before adding the next grain. This is equivalent to assuming that the rate of adding sand is much slower than the natural rate of relaxation of the system. The wide separation of the ‘time scale of drive’ and ‘time scale of relaxation’ is common in many models of SOC. For instance, in earthquakes, the drive is the slow tectonic movement of continental plates, which occurs over a timescale of centuries, while the actual stress relaxation occurs in quakes, whose duration is only a few seconds. This separation of time scales is usually considered to be a defining characteristic of SOC. However, Dhar has argued that the wide separation of time scales should not be considered as a necessary condition for SOC in general . Finally, the system must have an outlet, through which the grains go out of the system, which is absolutely necessary to attain a steady state. Most popularly, the outlet is chosen as the $`(d-1)`$ dimensional surface of a $`d`$-dimensional hypercubic system.
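The rules above are simple enough to simulate directly. The following minimal sketch (Python; the function names, lattice size and number of grains are illustrative, and the open boundary plays the role of the outlet) drives a two-dimensional pile one grain at a time and records the avalanche size $`s`$ of each addition.

```python
import numpy as np

def relax(h, hc=3):
    """Topple every unstable site (h > hc) until the pile is stable.

    Grains pushed beyond the edge of the lattice are lost, which is the
    outlet needed to reach a steady state.  Returns the avalanche size
    s, i.e. the total number of topplings."""
    L = h.shape[0]
    s = 0
    unstable = np.argwhere(h > hc)
    while unstable.size:
        for i, j in unstable:
            if h[i, j] > hc:
                h[i, j] -= 4                      # h_i -> h_i - 2d, with d = 2
                s += 1
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= ni < L and 0 <= nj < L:
                        h[ni, nj] += 1            # each neighbour gains one grain
        unstable = np.argwhere(h > hc)
    return s

def drive(h, n_grains, rng):
    """Add n_grains one at a time at random sites, relaxing after each."""
    sizes = []
    for _ in range(n_grains):
        i, j = rng.integers(0, h.shape[0], size=2)
        h[i, j] += 1                              # h_O -> h_O + 1
        sizes.append(relax(h))
    return sizes

rng = np.random.default_rng(0)
h = np.zeros((64, 64), dtype=int)
avalanche_sizes = drive(h, 20000, rng)
print("largest avalanche size:", max(avalanche_sizes))
```

The other avalanche measures discussed below (area, life-time and radius) can be recorded inside the same relaxation loop in exactly the same way.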
The beauty of the Abelian model is that the final stable height configuration of the system is independent of the sequence in which sand grains are added to the system to reach this stable configuration . On a stable configuration $`𝒞`$, if two grains are added, first at $`i`$ and then at $`j`$, the resulting stable configuration $`𝒞^{}`$ is exactly same in case the grains were added first at $`j`$ and then at $`i`$. In other sandpile models, where the stability of a sand column depends on the local slope or the local Laplacian, the dynamics is not Abelian, since toppling of one unstable site may convert another unstable site to a stable site (Fig. 2). Many such rules have been studied in the literature .
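The Abelian property itself is easy to check numerically. The short, self-contained sketch below (illustrative names; it repeats a minimal relaxation routine so that it runs on its own) adds two grains to the same stable configuration in the two possible orders and verifies that the final stable configurations coincide.

```python
import numpy as np

def relax(h, hc=3):
    """Topple until stable; grains falling off the edge are lost."""
    L = h.shape[0]
    while True:
        unstable = np.argwhere(h > hc)
        if unstable.size == 0:
            return h
        for i, j in unstable:
            h[i, j] -= 4
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < L and 0 <= nj < L:
                    h[ni, nj] += 1

rng = np.random.default_rng(1)
C = relax(rng.integers(0, 4, size=(32, 32)))      # a stable starting configuration
a, b = (5, 7), (20, 11)                           # two sites chosen for illustration

C1 = C.copy(); C1[a] += 1; relax(C1); C1[b] += 1; relax(C1)   # add at a, then at b
C2 = C.copy(); C2[b] += 1; relax(C2); C2[a] += 1; relax(C2)   # add at b, then at a
print("same final configuration:", np.array_equal(C1, C2))    # True
```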
An avalanche is a cascade of topplings of a number of sites created on the addition of a sand grain. The strength of an avalanche, in general, is a measure of the effect of the external perturbation created due to the addition of the sand grain. Quantitatively, the strength of an avalanche is estimated in four different ways: (i) size $`(s)`$: the total number of topplings in the avalanche, (ii) area $`(a)`$: the number of distinct sites which toppled, (iii) life-time $`(t)`$: the duration of the avalanche and (iv) radius $`(r)`$: the maximum distance of a toppled site from the origin. These four different quantities are not independent and are related to each other by scaling laws. Between any two measures $`x,y\in \{s,a,t,r\}`$ one can define a mutual dependence as $`\langle y\rangle \sim x^{\gamma _{xy}}`$. These exponents are related to one another, e.g., $`\gamma _{ts}=\gamma _{tr}\gamma _{rs}`$. For the ASM, it can be proved that the avalanche clusters cannot have any holes. It has been shown that $`\gamma _{rs}=2`$ in two dimensions. It has also been proved that $`\gamma _{rt}`$ = 5/4 . A better way to estimate the $`\gamma _{tx}`$ exponents is to average over the intermediate values of the size, area and radius at every intermediate time step during the growth of the avalanche.
Quite generally, the finite size scaling form for the probability distribution function of any measure $`x\in \{s,a,t,r\}`$ is taken to be:
$`P(x)\sim x^{-\tau _x}f_x\left({\displaystyle \frac{x}{L^{\sigma _x}}}\right).`$
The exponent $`\sigma _x`$ determines the variation of the cut-off of the quantity $`x`$ with the system size $`L`$. Alternatively, sometimes it is helpful to consider the cumulative probability distribution $`F(x)=\int _x^{L^{\sigma _x}}P(x)dx`$, which varies as $`x^{1-\tau _x}`$. However, in the case of $`\tau _x=1`$, the variation should be of the form $`F(x)=C-\mathrm{log}(x)`$. Between any two measures, scaling relations like $`\gamma _{xy}=(\tau _x-1)/(\tau _y-1)`$ exist. Recently, the scaling assumptions for the avalanche sizes have been questioned. It has been argued that there actually exists a multifractal distribution instead .
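In practice, $`\tau _x`$ and $`\sigma _x`$ are usually estimated by collapsing the measured distributions for several system sizes onto a single scaling function. A minimal data-collapse sketch is given below (Python with matplotlib; it assumes avalanche-size samples for each $`L`$ have already been generated, e.g. with the simulation sketched above, and the trial exponents in the commented call are illustrative only).

```python
import numpy as np
import matplotlib.pyplot as plt

def collapse(sizes_by_L, tau, sigma, bins=40):
    """Plot s^tau P(s) against s / L^sigma for each system size L.

    With well-chosen exponents the curves for different L should fall
    onto a single scaling function f_s."""
    for L, sizes in sorted(sizes_by_L.items()):
        s = np.asarray(sizes, dtype=float)
        s = s[s > 0]
        edges = np.logspace(0.0, np.log10(s.max()), bins)
        hist, edges = np.histogram(s, bins=edges, density=True)
        centres = np.sqrt(edges[1:] * edges[:-1])        # geometric bin centres
        plt.loglog(centres / L**sigma, centres**tau * hist, label=f"L = {L}")
    plt.xlabel(r"$s / L^{\sigma_s}$")
    plt.ylabel(r"$s^{\tau_s} P(s)$")
    plt.legend()
    plt.show()

# Example (trial exponents only, to be tuned by eye or by a fit):
# collapse({32: sizes32, 64: sizes64, 128: sizes128}, tau=1.25, sigma=2.7)
```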
Numerical estimates of the exponents have yielded scattered values. For example, estimates of the exponent $`\tau _s`$ range from 1.20 to 1.27 and 1.29 .
We will now look into the structure of avalanches in more detail. A site $`i`$ can topple more than once in the same avalanche. The set of its neighbouring sites $`\{j\}`$ can be divided into two subsets. Except at the origin $`𝒪`$, where a grain is added from the outside, for a toppling, the site $`i`$ must receive some grains from some of the neighbouring sites $`\{j_1\}`$ to exceed the threshold $`h_c`$. These sites must have toppled before the site $`i`$. When the site $`i`$ topples, it loses $`2d`$ grains to the neighbours, by giving back the grains it has received from $`\{j_1\}`$, and also donating grains to the other neighbours $`\{j_2\}`$. Some of these neighbours may topple later, which returns grains to the site $`i`$ and its height $`h_i`$ is raised. The following possibilities may arise: (i) some sites of $`\{j_2\}`$ may not topple at all; then the site $`i`$ will never re-topple and is a singly toppled site on the surface of the avalanche. (ii) all sites in $`\{j_2\}`$ topple, but no site in $`\{j_1\}`$ topples again; then $`i`$ will be a singly toppled site, surrounded by singly toppled sites. (iii) all sites in $`\{j_2\}`$ topple, and some sites of $`\{j_1\}`$ re-topple; then $`i`$ will remain a singly toppled site, adjacent to the doubly toppled sites. (iv) all sites in $`\{j_2\}`$ topple, and all sites of $`\{j_1\}`$ re-topple; then the site $`i`$ must be a doubly toppled site. This implies that the set of at least doubly toppled sites must be surrounded by the set of singly toppled sites. Arguing in a similar way will reveal that sites which toppled at least $`n`$ times must form a subset of, and be surrounded by, the set of sites which toppled at least $`(n-1)`$ times. Finally, there will be a central region in the avalanche, where all sites have toppled a maximum of $`m`$ times. The origin of the avalanche $`𝒪`$, where the sand grain was dropped, must be a site in this maximum toppled zone. Also the origin must be at the boundary of this $`m^{\mathrm{th}}`$ zone, since otherwise it should have toppled $`(m+1)`$ times .
Using this idea, we see that the boundary sites on any arbitrary system can topple at most once in any arbitrary number of avalanches. Similar restrictions are true for inner sites also. A $`(2n+1)\times (2n+1)`$ square lattice can be divided into $`(n+1)`$ subsets which are concentric squares. Sites on the $`m`$-th such square from the boundary can topple at most $`m`$ times, whereas the central site cannot topple more than $`n`$ times in any avalanche.
Avalanches can also be decomposed in a different way, using Waves of Toppling. Suppose, on a stable configuration $`𝒞`$, a sand grain is added at the site $`𝒪`$. The site is toppled once, but is not allowed to topple for the second time, till all other sites become stable. This is called the first wave. It may happen that after the first wave, the site $`𝒪`$ is stable; in that case the avalanche has terminated. If the site $`𝒪`$ is still unstable it is toppled for the second time, and all other sites are allowed to become stable again; this is called the second wave, and so on. It was shown that, in a sample where all waves occur with equal weights, the probability of occurrence of a wave of area $`a`$ is $`D(a)\sim 1/a`$ .
It is known that the stable height configurations in ASM are of two types: Recurrent configurations appear only in the steady state with uniform probabilities, whereas Transient configurations occur in the steady state with zero probability. Since long range correlations appear only in the steady states, it implies that the recurrent configurations are correlated. This correlation is manifested by the fact that certain clusters of connected sites with some specific distributions of heights never appear in any recurrent configuration. Such clusters are called the forbidden sub-configurations. It is easy to show that two zero heights at neighbouring sites, (0–0), or a unit height with zero heights at its two sides, (0–1–0), never occur in the steady state. There are also many more forbidden sub-configurations of bigger sizes.
An $`L\times L`$ lattice is a graph, which has all the sites and all the nearest neighbour edges (bonds). A Spanning Tree is a sub-graph of such a graph, having all sites and some bonds. It has no loop and therefore, between any pair of sites there exists a unique path through a sequence of bonds. There can be many possible Spanning trees on a lattice. These trees have interesting statistics in a sample where they are equally likely. Suppose we randomly select such a tree, then randomly select one of the unoccupied bonds and occupy it; this forms a loop of length $`\ell `$. It has been shown that these loops have the length distribution $`D(\ell )\sim \ell ^{-8/5}`$. Similarly, if a bond of a Spanning tree is randomly selected and deleted, then the tree divides into two fragments. The sizes of the two fragments generated follow a probability distribution $`D(a)\sim a^{-11/8}`$ . It was also shown that every recurrent configuration of the Abelian model on an arbitrary lattice has a one-to-one correspondence to a random Spanning tree graph on the same lattice. Therefore, there are exactly the same number of distinct Spanning trees as the number of recurrent Abelian sandpile model configurations on any arbitrary lattice . Given a stable height configuration, there exists a unique prescription to obtain the equivalent Spanning tree. This is called the Burning method . A fire front, initially at every site outside the boundary, gradually penetrates (burns) into the system using a deterministic rule. The paths of the fire front constitute the Spanning tree. A fully burnt system is recurrent, otherwise it is transient.
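A sketch of the burning test is given below (Python; the burning criterion in the comments is our restatement of the rule for the height convention used in this article, $`0h_i2d-1`$ on the square lattice with $`d=2`$, and the names are illustrative): the fire starts outside the lattice, a site burns once its height is at least the number of its still-unburnt neighbours within the lattice, and the configuration is recurrent exactly when every site burns.

```python
import numpy as np

def is_recurrent(h):
    """Burning test for a stable configuration h with heights 0..3.

    The fire starts outside the lattice.  A site burns when its height
    is at least the number of its still-unburnt neighbours inside the
    lattice; the configuration is recurrent iff every site burns."""
    L = h.shape[0]
    burnt = np.zeros_like(h, dtype=bool)
    progress = True
    while progress:
        progress = False
        for i in range(L):
            for j in range(L):
                if burnt[i, j]:
                    continue
                unburnt = sum(1 for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
                              if 0 <= ni < L and 0 <= nj < L and not burnt[ni, nj])
                if h[i, j] >= unburnt:
                    burnt[i, j] = True
                    progress = True
    return bool(burnt.all())

# The forbidden sub-configuration of two adjacent zeros never burns completely:
bad = np.full((2, 2), 3)
bad[0, 0] = bad[0, 1] = 0
print(is_recurrent(bad))                   # False
print(is_recurrent(np.full((2, 2), 3)))    # the maximal configuration burns: True
```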
Suppose addition of a grain at the site $`𝒪`$ of a stable recurrent configuration $`𝒞`$ leads to another stable configuration $`𝒞^{}`$. Is it possible to get back the configuration $`𝒞`$ knowing $`𝒞^{}`$ and the position of $`𝒪`$? This is done by Inverse toppling . Since $`𝒞^{}`$ is recurrent, a corresponding Spanning tree ST($`𝒞^{}`$) exists. Now, one grain at $`𝒪`$ is taken out from $`𝒞^{}`$ and the configuration $`𝒞^{\prime \prime }=𝒞^{}-\delta _{𝒪j}`$ is obtained. This means that on ST($`𝒞^{}`$) one bond is deleted at $`𝒪`$, dividing it into two fragments. Therefore one cannot burn the configuration $`𝒞^{\prime \prime }`$ completely, since the resulting tree has a hole consisting of at least the sites of the smaller fragment (Fig. 3). This implies that $`𝒞^{\prime \prime }`$ has a forbidden sub-configuration $`(F_1)`$ of equal size and $`𝒞^{\prime \prime }`$ is not recurrent. On $`(F_1)`$, one runs the inverse toppling process: 4 grains are added to each site $`i`$, and one grain each is taken out from all its neighbours $`\{j\}`$. The cluster of $`f_1`$ sites in $`F_1`$ is called the first inverse avalanche. The lattice is burnt again. If it still has a forbidden sub-configuration ($`F_2`$), another inverse toppling process is executed, and is called the second inverse avalanche. The size of the avalanche is $`s=f_1+f_2+f_3+\cdots `$, and $`f_1`$ is related to the maximum toppled zone of the avalanche. From the statistics of random spanning trees it is clear that $`f_1`$ should have the same statistics as the two fragments of the tree generated on deleting one bond. Therefore the maximum toppled zone also has a power law distribution of the size, $`D(a)\sim a^{-11/8}`$.
Sandpile models with stochastic evolution rules have also been studied. The simplest of these is a Two-state sandpile model. A stable configuration of this system consists of sites that are either vacant or occupied by at most one grain. If there are two or more grains at a site at the same time, we say there is a collision. In this case, all grains at that site are moved: each grain chooses a randomly selected site from the neighbours and is moved to that site. The avalanche size is the total number of collisions in an avalanche. From numerical simulations, the distribution of avalanche sizes is found to follow a power law, characterized by an exponent $`\tau _s\approx 1.27`$ . This two-state model has a nontrivial dynamics even in one dimension . Recently, it has been shown that instead of moving all grains, if only two grains are moved randomly leaving others at the site, the dynamics is Abelian .
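A sketch of this two-state dynamics on a square lattice is shown below (Python; names are illustrative and the open boundary again acts as the outlet): at a collision every grain at the unstable site is sent to an independently chosen random neighbour, and the avalanche size is the number of collisions.

```python
import numpy as np

def two_state_avalanche(h, i, j, rng):
    """Drop one grain at (i, j) and relax; return the number of collisions."""
    L = h.shape[0]
    h[i, j] += 1
    collisions = 0
    active = [(i, j)] if h[i, j] >= 2 else []
    while active:
        next_active = []
        for x, y in active:
            n = h[x, y]
            if n < 2:                              # may already have been emptied this sweep
                continue
            collisions += 1
            h[x, y] = 0
            for _ in range(n):                     # every grain picks its own random neighbour
                dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
                nx, ny = x + dx, y + dy
                if 0 <= nx < L and 0 <= ny < L:    # grains leaving the lattice are lost
                    h[nx, ny] += 1
                    if h[nx, ny] >= 2:
                        next_active.append((nx, ny))
        active = next_active
    return collisions

rng = np.random.default_rng(2)
h = np.zeros((64, 64), dtype=int)
sizes = [two_state_avalanche(h, *rng.integers(0, 64, size=2), rng) for _ in range(20000)]
```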
Some other stochastic models also have nontrivial critical behaviour in one dimension. To model the dynamics of rice piles, Christensen et al. studied the following slope model . On a one-dimensional lattice of length $`L`$, a non-negative integer variable $`h_i`$ represents the height of the sand column at the site $`i`$. The local slope $`z_i=h_i-h_{i+1}`$ is defined, maintaining zero height on the right boundary. Grains are added only at the left boundary $`i=1`$. Addition of one grain, $`h_i\to h_i+1`$, implies an increase in the slope, $`z_i\to z_i+1`$. If at any site the local slope exceeds a pre-assigned threshold value $`z_i^c`$, one grain is transferred from the column at $`i`$ to the column at $`(i+1)`$. This implies a change in the local slope as $`z_i\to z_i-2`$ and $`z_{i\pm 1}\to z_{i\pm 1}+1`$. The thresholds of the instability $`z_i^c`$ are dynamical variables and are randomly chosen between 1 and 2 in each toppling. Numerically, the avalanche sizes are found to follow a power law distribution with an exponent $`\tau _s\approx 1.55`$ and the cutoff exponent was found to be $`\sigma _s\approx 2.25`$. This model is referred to as the Oslo model.
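The Oslo rules translate into a few lines of code. A minimal sketch follows (Python; names are illustrative, and the boundary handling — no site to the left of $`i=1`$, grains toppling off the last site leaving the pile — is our reading of the 'zero height on the right boundary' condition stated above).

```python
import numpy as np

def oslo_avalanche(z, zc, rng):
    """Add one grain at the left wall and relax; return the avalanche size."""
    L = z.size
    z[0] += 1                                      # grain added at i = 1: z_1 -> z_1 + 1
    s = 0
    while True:
        unstable = np.flatnonzero(z > zc)
        if unstable.size == 0:
            return s
        for i in unstable:
            if z[i] <= zc[i]:
                continue
            s += 1
            if i == 0:                             # no site to the left of the wall
                z[0] -= 2
                z[1] += 1
            elif i == L - 1:                       # grain topples off the open right edge
                z[L - 1] -= 1
                z[L - 2] += 1
            else:                                  # bulk rule: z_i -> z_i - 2, z_(i+-1) -> z_(i+-1) + 1
                z[i] -= 2
                z[i - 1] += 1
                z[i + 1] += 1
            zc[i] = rng.integers(1, 3)             # redraw the threshold from {1, 2}

rng = np.random.default_rng(3)
L = 64
z = np.zeros(L, dtype=int)
zc = rng.integers(1, 3, size=L)
sizes = [oslo_avalanche(z, zc, rng) for _ in range(20000)]
```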
Addition of one grain at a time, while allowing the system to relax to its stable state, implies a zero rate of driving of the system. What happens when the driving rate is finite? Corral and Paczuski studied the Oslo model in the situation of nonzero flow rate. Grains were added at a rate $`r`$, i.e., one grain is dropped at the left boundary $`i=1`$ every (1/$`r`$) time steps. They observed a dynamical transition separating intermittent and continuous flows .
Many different versions of the sandpile model have been studied. However the precise classification of various models in different universality classes in terms of their critical exponents is not yet available and still attracts much attention . Exact values of the critical exponents of the most widely studied Abelian model are still not known in two dimensions. Some effort has also been made towards the analytical calculation of avalanche size exponents . Numerical studies for these exponents are found to give scattered values. On the other hand the two-state sandpile model is believed to be better behaved and there is good agreement of numerical values of its exponents by different investigators. However, whether the Abelian model and the two-state model belong to the same universality class or not is still an unsettled question .
If a real sandpile is to be modeled in terms of any of these sandpile models or their modifications, it must be a slope model, rather than a height model. However, not much work has been done to study the slope models of sandpiles . Another old question is whether the conservation of the grain number in the toppling rules is a necessary condition to obtain a critical state. It has been shown already that too much non-conservation leads to avalanches of characteristic sizes . However, if grains are taken out of the system slowly, the system is found to be critical in some situations. A non-conservative version of the Abelian sandpile model with directional bias shows a mean field type critical behaviour . Therefore, the detailed role of the conservation of the grain numbers during the topplings is still an open question.
We acknowledge D. Dhar with thanks for a critical reading of the manuscript and for useful comments.
Electronic address: [email protected]
# Fuelling quasars with hot gas
## 1 Introduction
It is generally accepted that quasars are the result of accretion onto massive black holes residing in the nuclei of normal galaxies. Rees (1984) has argued that a black hole is likely to form at the centre of almost any galaxy, so that the main issue for quasar formation is how the black hole is fuelled. Models for quasar evolution must account for the time dependence of the quasar luminosity function, particularly its peak at $`z1.5`$ and subsequent decline (e.g. Boyle, Shanks & Peterson, 1988), and also for the formation of the massive black holes and their remnants that reside in the nuclei of many nearby galaxies (e.g. Ford et al 1997; Magorrian et al 1998).
A variety of models has been proposed for the fuelling of quasars, most of which rely on making interstellar gas fall close to the nucleus where it joins an accretion disc (e.g. Shlosman, Begelman & Frank 1990). In this paper we consider the possibility that the main source of fuel is the hot interstellar medium formed during the collapse of larger galaxies. Provided that the angular momentum of the gas is not too large, a nuclear black hole can grow by Bondi accretion and, if the gas temperature is close to the virial temperature, its growth time is controlled largely by the density of the hot gas. In section 3 we show that, soon after the collapse of a protogalaxy, this can be large enough to make the accretion rate of a nuclear black hole comparable to the Eddington rate.
It is the protogalaxies with the largest spheroids, i.e. large elliptical protogalaxies, that contain most hot gas (Nulsen & Fabian 1997), making them the best hosts for quasars in this model. Thus, the observation that quasars reside in elliptical hosts (McLure et al 1998) is consistent with them being fuelled by hot gas. Based on the presence of companions close to many quasars, it has been argued that gravitational interaction plays a significant role in the quasar phenomenon (e.g. Bahcall et al 1997). However, we note that this may also be interpreted as indicating recent collapse. As outlined below, the supply of hot gas is expected to be greatest immediately after a collapse and to decrease with time, so this would also be consistent with fuelling quasars by hot gas. Of course, active galactic nuclei, including quasars, may obtain their fuel from a variety of sources. We consider that hot gas provides a minimum fuelling rate, which can be supplemented by cold gas, if available.
Cooling depletes the hot gas throughout the galaxy, so that the nuclear accretion rate decreases with time after a protogalaxy collapses. The depletion of the hot gas does not however simply explain the lack of luminous quasars at the current epoch. The central black holes in nearby elliptical galaxies are still immersed in accretable hot gas, yet generally have low accretion luminosities (Fabian & Canizares 1988; Di Matteo & Fabian 1997). As an example, one of the best candidates for a massive nuclear black hole is M87 (Harms et al 1994; Marconi et al 1997). A solution to this problem has been argued by Di Matteo & Fabian (1997) and, specifically for M87, by Reynolds et al (1996) in which the accretion flow becomes advection-dominated (i.e. an ADAF, Narayan & Yi 1995), so having a low accretion efficiency. We adopt that hypothesis here and assume that when the nuclear accretion rate falls below the threshold for ADAF formation most quasars fade rapidly (Fabian & Rees 1995; Yi 1996).
In section 4 we describe the incorporation of a simple version of this model for quasar formation into a semi-analytical model for galaxy formation (Nulsen & Fabian 1997; see also Haehnelt & Rees 1993 for models relating quasar evolution to galaxy formation). In section 5 this is used to show that the model can account for the broad features of the history of quasars. Section 6 has a brief discussion of feedback on the model, in particular that due to Compton cooling. In section 7 we discuss the limitations of the semi-analytical model for quasar formation and some predictions of the model. Our conclusions are summarized in section 8.
## 2 Angular momentum and the feeding of a nuclear black hole
Shlosman et al (1990) discuss the issues of getting a large mass of gas to accrete into the nucleus of a galaxy. Almost all galaxies have appreciable net rotation, so that the main difficulty is the dissipation of angular momentum, essentially all of which must be dissipated in order for gas to accrete into a nuclear black hole. The total accreted matter should also account for (most of) the mass of the nuclear black hole, which exceeds $`10^9\mathrm{M}_{\odot }`$ in many cases (Ford et al 1997; Magorrian et al 1998), so that the gas needs to be drained from a large region around the nucleus. The larger this region, the greater the difficulty of dissipating angular momentum.
According to the standard argument, the effective viscosity in an accretion disc can be expressed as (Shakura & Sunyaev 1973)
$$\mu _\mathrm{d}=\frac{\alpha _\mathrm{d}\rho s^2}{\mathrm{\Omega }},$$
(1)
where $`\rho `$ is the density and $`s=\sqrt{kT/(\mu m_\mathrm{H})}`$ is the isothermal sound speed of gas in the disc, and
$$\mathrm{\Omega }(r)=\sqrt{\frac{GM(r)}{r^3}}$$
is the angular frequency of a circular orbit. The dimensionless parameter $`\alpha _\mathrm{d}`$ cannot normally exceed 1 and is generally thought to be $`0.1`$ – 0.3 (e.g. Cannizzo 1993). The same parametrization can be applied to the hot gas, i.e. gas at about the virial temperature, but then it is more appropriate to express the viscosity in terms of the scale height, $`w`$,
$$\mu _\mathrm{h}=\alpha _\mathrm{h}\rho sw.$$
(2)
The cold gas moves on almost circular orbits, so that the time required for gas from radius $`r`$ to drain to the centre of the disc is roughly (e.g. Shlosman et al 1990)
$$t_\mathrm{d}=\frac{r}{v_r}=\frac{rv_{\mathrm{rot}}}{\alpha _\mathrm{d}\eta ^2s^2},$$
(3)
where $`v_r`$ is the radial speed, $`v_{\mathrm{rot}}=r\mathrm{\Omega }`$ is the rotation speed of the gas disc and $`\eta =-d\mathrm{ln}\mathrm{\Omega }/d\mathrm{ln}r`$ ($`3/2`$ for a Keplerian disc).
Since the hot gas is pressure supported, it can drain much faster than a cold disc. However, angular momentum may still prevent it from accreting directly onto a nuclear black hole. For gas at about the virial temperature, the scale height is $`w\sim r`$ and the speed of sound is close to the Kepler speed, $`v_{\mathrm{rot}}`$, at the same radius. A rough estimate of the time required to dissipate the angular momentum of the hot gas is then
$$t_{\mathrm{am}}\sim \frac{r}{\alpha _\mathrm{h}v_{\mathrm{rot}}}.$$
For $`\alpha _\mathrm{h}`$ of order unity, this is comparable to the dynamical time, while
$$\frac{t_{\mathrm{am}}}{t_\mathrm{d}}\sim \frac{\alpha _\mathrm{d}s^2}{\alpha _\mathrm{h}v_{\mathrm{rot}}^2}\ll 1,$$
if the disc gas is cold.
Thus, while the drainage of a cold accretion disc is governed by the dissipation of angular momentum, it is only when hot gas flows inward at speeds approaching the free-fall velocity that we need to be concerned with the effects of rotation on the flow. Dissipation of angular momentum is less of a problem for the accretion of hot gas than it is for cold gas.
For flow speeds comparable to the speed of sound or faster, the angular momentum of the hot gas will be largely conserved, so that its residual angular momentum will cause it to eventually join a disc. We assume that this occurs at a sufficiently small radius that the drainage time of the disc, from the point where the gas joins it, is short.
## 3 Feeding by hot gas
In the collapse of small protogalaxies, radiative cooling is faster than shock heating, so that gas ends up cold immediately after the collapse (Rees & Ostriker 1977; White & Frenk 1991). In larger systems, which are more tenuous and have higher virial temperatures, some of the gas can form a hot atmosphere after the collapse. The condition for the gas at radius $`r`$ to be part of a hot atmosphere is that its radiative cooling time be longer than the free-fall time from $`r`$ to the centre of the protogalaxy. This cooling time is still significantly smaller than the time at which the system collapses, so that the hot gas will start to cool, forming a cooling flow (Fabian 1994), almost immediately after collapse. A central black hole can accrete hot gas from the central region of the cooling flow.
Based on observations of clusters of galaxies, we expect gas taking part in the cooling flow to be sufficiently inhomogeneous to lead to widespread thermal instability (Nulsen 1986; 1988). The general solution for an inhomogeneous cooling flow is complex, since the flow of each phase must be tracked separately. However, Nulsen (1986) has argued that gas blobs moving relative to the mean flow tend to be rapidly disrupted until they are small enough to be pinned to the mean flow. As a result, the phases tend to flow inward at approximately the same speed, i.e. to comove. In the central part of the cooling flow, where conditions change slowly relative to the flow time, we can also expect the flow to be nearly steady.
For the purpose of simulating quasar formation, we take the potentials of galaxies to be exactly isothermal. In that case, the mean temperature of the inhomogeneous gas mixture in a steady, comoving cooling flow will be close to the constant virial temperature, the exact relationship depending on details of the inhomogeneous density distribution in the gas. There is a class of self-similar, inhomogeneous cooling flow solutions in which the mean gas temperature is a constant multiple of the virial temperature, the isothermal cooling flows (Nulsen 1998). The mean gas density and temperature in an isothermal cooling flow are related to the flow time, $`r/v`$, by
$$\frac{r}{v}=K\frac{kT}{\mu m_\mathrm{H}}\frac{\rho }{n_\mathrm{e}n_\mathrm{H}\mathrm{\Lambda }(T)},$$
(4)
where $`r`$ is the radius, $`v`$ the flow velocity, $`\rho `$ the mean gas density, $`n_\mathrm{e}`$ the mean electron number density and $`n_\mathrm{H}`$ the mean hydrogen number density. $`T`$ is the effective gas temperature (defined so that the pressure is $`\rho kT/(\mu m_\mathrm{H})`$) and $`\mu m_\mathrm{H}`$ is the mean mass per gas particle. The dimensionless constant, $`K`$, depends on flow details, including the cooling function and the radial dependence of the mass deposition rate. In clusters the mass flow rate, $`\dot{M}`$, is found to be approximately proportional to $`r`$ (Fabian 1994; Peres et al 1998), which we take to be exact for the purpose of our model. In that case, if the cooling function is approximated by a power law, $`\mathrm{\Lambda }(T)\propto T^a`$, then $`K`$ depends weakly on the exponent $`a`$, ranging from 2.32 to 2.92 for $`a`$ in the range $`-0.5`$ to 0.5. We adopt the representative value $`K=2.5`$ for our calculations.
In an isothermal cooling flow with mass flow rate $`\dot{M}\propto r`$, the flow velocity is constant, regardless of the details of the cooling function, so that the Mach number is also constant. The arguments below rely on the Mach number of the cooling flow being close to unity initially, since this maximizes the density of the hot gas in the vicinity of a nuclear black hole. This is the most critical aspect of the cooling flow model for quasar formation, since it determines the Bondi accretion rate of the nuclear black hole.
When the Mach number of a cooling flow is low, the linear growth of thermal instability is weak (Balbus & Soker 1989). The inhomogeneous, isothermal cooling flow model relies on non-linear thermal instability (contrary to a common misconception, very small amplitude density fluctuations become non-linear in a cooling flow; Nulsen 1997). However, for Mach numbers of order unity, linear thermal instability is much stronger. If, contrary to our assumptions, the gas is very nearly homogeneous and the thermal instability is weak (or if $`\dot{M}\propto r^\eta `$ with $`\eta <1`$), the Mach number of the cooling flow increases inward and, as it approaches unity, thermal instability becomes strong, causing widespread deposition of cold gas. This tends to make the Mach number saturate close to one, so that our assumption that the Mach number of the cooling flow in the vicinity of the nucleus is close to one is not sensitive to our assumptions. A cooling flow with a Mach number of order unity is simply a maximal cooling flow. Conveniently, the nuclear accretion rate can be determined exactly for the isothermal cooling flow model (see below), but our quasar formation model is not critically dependent on the assumption that the hot gas forms an isothermal cooling flow. Also note that the gas density in an isothermal cooling flow with Mach number close to one is similar to that obtained by other arguments for the maximum gas density in a protogalaxy (Fall & Rees 1985).
We assume that a nuclear black hole accretes any hot gas coming within its influence by Bondi accretion. At very small $`r`$ the residual angular momentum causes the accreting gas to pass through a shock (assumed to be radiative) and join an accretion disc. Thus, the final stages of the accretion are still assumed to be through a disc, but we take the nuclear accretion rate to equal the Bondi rate. Thus, the nuclear accretion rate is determined by the density and temperature of the hot gas at the point that the influence of the black hole becomes dominant. There are a number of reasons why this assumption may not be valid, but we have adopted it as the simplest possibility.
The result (4) shows that the steady cooling flow is governed by the requirement that the cooling time equals the flow time, to within factors of order unity. As cooling gas comes under the influence of the black hole, its flow velocity will increase, reducing $`r/v`$ to the point that cooling is no longer effective. Thus, the transition between the cooling flow and the Bondi solution occurs at about the radius where the initial Mach numbers of the two flows are equal.
The Bondi accretion rate for a monatomic gas (with $`\gamma =5/3`$) is (Shu 1991)
$$\dot{M}_\mathrm{h}=\pi \rho _\mathrm{i}\frac{G^2M_\mathrm{h}^2}{s_\mathrm{i}^3},$$
(5)
where $`\rho _\mathrm{i}`$ is the gas density and $`s_\mathrm{i}`$ the adiabatic sound speed at large $`r`$, and $`M_\mathrm{h}`$ is the mass of the black hole. Well outside the accretion radius, the density in the Bondi solution is almost constant so that the velocity, $`v_\mathrm{B}`$, can be determined from the accretion rate, $`\dot{M}_\mathrm{h}=4\pi \rho _\mathrm{i}v_\mathrm{B}r^2`$. The gas temperature is also nearly constant, so that Mach number is given approximately by
$$\frac{v_\mathrm{B}}{s_\mathrm{i}}\approx \frac{\dot{M}_\mathrm{h}}{4\pi \rho _\mathrm{i}s_\mathrm{i}r^2}=\left(\frac{GM_\mathrm{h}}{2s_\mathrm{i}^2r}\right)^2.$$
Equating this to the Mach number, $`\mathcal{M}_\mathrm{i}`$, of the cooling flow, we find that the Bondi solution takes over at about the radius
$$r_\mathrm{x}=\frac{GM_\mathrm{h}}{2s_\mathrm{i}^2\mathcal{M}_\mathrm{i}^{1/2}}.$$
(6)
Using (4) for the gas density and (6) to replace $`r`$ in the result, we can evaluate the accretion rate (5) as
$$\dot{M}_\mathrm{h}=2\pi K\mathcal{M}_\mathrm{i}^{3/2}\frac{kT_\mathrm{i}GM_\mathrm{h}}{\mu m_\mathrm{H}\mathrm{\Lambda }(T_\mathrm{i})}\frac{\rho ^2}{n_\mathrm{e}n_\mathrm{H}},$$
(7)
where $`T_\mathrm{i}`$ is the gas temperature at large $`r`$ (note that the last factor is a constant).
This result is shown to be exact in the Appendix, where $`T_\mathrm{i}`$ is now to be interpreted as the “mean” gas temperature of a steady isothermal cooling flow and $`\mathcal{M}_\mathrm{i}`$ as its Mach number. The argument in the Appendix also shows that, for the conditions of our model, i.e. gas conditions that would give an isothermal cooling flow with $`\dot{M}\propto r`$, the nuclear accretion rate is independent of the gravitational potential in between the edge of the steady cooling flow and the nucleus. Thus, the accretion rate is largely unaffected by changes to the potential of the galaxy due to deposition of cooled gas. This feature of the model reflects an accidental balance between the competing effects of deepening the potential: a temperature rise, reducing the Bondi accretion rate, and a density increase, tending to increase it.
The result (7) shows that a black hole at the centre of a protogalaxy grows exponentially by Bondi accretion. The timescale for growth is
$$t_{\mathrm{BH}}=\frac{M_\mathrm{h}}{\dot{M}_\mathrm{h}}=\frac{1}{2\pi K}\frac{n_\mathrm{e}n_\mathrm{H}}{\rho ^2}\frac{\mu m_\mathrm{H}\mathrm{\Lambda }(T_\mathrm{i})}{kT_\mathrm{i}G\mathcal{M}_\mathrm{i}^{3/2}},$$
(8)
which depends only on the gas temperature and the Mach number in the outer parts of the of the steady cooling flow. Numerically,
$$t_{\mathrm{BH}}\approx 5.1\times 10^8\mathrm{\Lambda }_{23}T_6^{-1}\mathcal{M}_\mathrm{i}^{-3/2}\mathrm{y},$$
(9)
where $`T_\mathrm{i}=10^6T_6`$ K and $`\mathrm{\Lambda }(T_\mathrm{i})=10^{-23}\mathrm{\Lambda }_{23}\mathrm{erg}\mathrm{cm}^3\mathrm{s}^{-1}`$.
This can be compared directly with the growth timescale of a black hole accreting at the Eddington rate, $`t_{\mathrm{Edd}}`$, as
$$t_{\mathrm{BH}}\approx 11\mathrm{\Lambda }_{23}T_6^{-1}\mathcal{M}_\mathrm{i}^{-3/2}t_{\mathrm{Edd}}.$$
Thus, for the relevant temperatures, the accretion rate from a maximal cooling flow is about one tenth or more of the Eddington rate.
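Equation (9) and the comparison with the Eddington growth time can be evaluated directly. A minimal sketch (Python; the temperature and cooling-function values in the example call are illustrative only):

```python
def t_bh(Lambda23, T6, mach):
    """Black hole growth time from equation (9), in years."""
    return 5.1e8 * Lambda23 / (T6 * mach**1.5)

def t_bh_over_t_edd(Lambda23, T6, mach):
    """Ratio of the Bondi growth time to the Eddington growth time."""
    return 11.0 * Lambda23 / (T6 * mach**1.5)

# A hot collapse with T ~ 10^7 K, Lambda ~ 10^-23 erg cm^3 s^-1 and a Mach-one flow:
print(t_bh(Lambda23=1.0, T6=10.0, mach=1.0))             # ~5e7 yr
print(t_bh_over_t_edd(Lambda23=1.0, T6=10.0, mach=1.0))  # ~1.1, i.e. near-Eddington growth
```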
The Bondi radius for a stellar mass object is very small,
$$r_\mathrm{B}=\frac{GM_\mathrm{h}}{s_\mathrm{i}^2}\approx 6\times 10^{11}M_\mathrm{h}T_6^{-1}\mathrm{cm},$$
for $`M_\mathrm{h}`$ in solar masses, so we should be wary of applying our result to accretion onto very small black holes. However, we assume that more massive seed black holes are formed in the cores of all galaxies, as described by Rees (1984), and that these objects then grow to quasars by the accretion of hot gas.
## 4 Quasar birth and death
In order to test the outcome of this quasar formation model, we have incorporated it into a semi-analytical model for galaxy formation, the details of which are are described in Nulsen & Fabian (1997) and Nulsen, Barcons & Fabian (1998). In this section we outline modifications we have made to that model in order to track the growth and accretion luminosities of nuclear black holes.
As outlined in the previous section, the major factors that determine the growth rate of a nuclear black hole are the gas temperature and the Mach number of the cooling flow. Expressed in terms of the growth time (8), the time dependence of the mass of a nuclear black hole is given by
$$\mathrm{ln}\frac{M_\mathrm{h}(t_2)}{M_\mathrm{h}(t_1)}=\int _{t_1}^{t_2}\frac{dt}{t_{\mathrm{BH}}}.$$
(10)
The growth time depends on the temperature of the gas, $`T_\mathrm{i}`$, which is determined at the time of collapse of a protogalaxy, and the Mach number, $`\mathcal{M}_\mathrm{i}`$, of the isothermal cooling flow. The latter factor is the only time dependent part of $`t_{\mathrm{BH}}`$.
In our galaxy formation model, each collapse produces a dark halo which is taken to be a perfect isothermal sphere (density $`\propto r^{-2}`$) that is truncated at $`r_{200}`$, the radius within which the mean density is 200 times the background density of an Einstein-de Sitter Universe at the time of collapse. The gas temperature produced in the collapse is expressed as
$$T_\mathrm{i}=\frac{\mu m_\mathrm{H}\sigma ^2}{\beta k},$$
(11)
where $`\sigma `$ is the line-of-sight velocity dispersion of the halo and $`\beta `$ is a dimensionless parameter, generally lying in the range 0.5 to 1, that allows for excess energy in the gas (mostly excess binding energy resulting from supernova driven ejection; $`\beta =1`$ corresponds to zero excess energy; $`\beta `$ here is determined for each collapse as in the existing semi-analytical model).
The outcome of a collapse is determined by considering a notional, non-radiative collapse. In this collapse, the gas would form a hydrostatic atmosphere, with density proportional to $`r^{-2\beta }`$ (also truncated at $`r_{200}`$). The ratio, $`\tau `$, of the cooling time to the free-fall time in the notional atmosphere is used to separate the gas into two parts, one with $`\tau <\tau _0`$ that is cold (due to efficient radiative cooling) immediately after the collapse and one with $`\tau >\tau _0`$ that forms a hot atmosphere after the collapse. The model parameter $`\tau _0`$ is of order unity and determines the radius, $`r_{\mathrm{CF}}`$, in the notional atmosphere that separates these two regions.
Radiative cooling eventually causes hot gas produced in the collapse to cool to low temperatures. As in clusters, the hot gas is little affected by cooling until the age of the system is comparable to its cooling time, at which stage it joins a steady cooling flow before being deposited as cold gas. Thus the hot atmosphere consists of an outer region that is largely unperturbed since the collapse, a transition region, comparable in size to the steady cooling flow, and a central steady cooling flow. The total rate of mass deposition and the extent of the steady cooling flow are determined by the initial state of the hot atmosphere (e.g. Fabian & Nulsen 1979). In particular, this means that the Mach number of the steady cooling flow is determined by the initial structure of the hot gas.
Formerly, the hot gas was assumed to cool to low temperature at a time $`t=t_{\mathrm{coll}}+t_{\mathrm{cool}}`$, where $`t_{\mathrm{coll}}`$ is time of the collapse and
$$t_{\mathrm{cool}}(r)=\frac{3\rho (r)kT_\mathrm{i}}{2\mu m_\mathrm{H}n_\mathrm{e}(r)n_\mathrm{H}(r)\mathrm{\Lambda }(T_\mathrm{i})}\propto r^{2\beta }$$
is the cooling time of the hot gas in the notional collapse. This made deposition of the cooled gas start discontinuously a short time after collapse. In reality, the onset of the cooling flow will be continuous, commencing immediately after the collapse. To model this, here we assume that the gas cools when
$$t=t_{\mathrm{coll}}+t_{\mathrm{cool}}(r)-t_{\mathrm{cool}}(r_{\mathrm{CF}}),$$
which gives the radius (in the notional atmosphere) of gas that is cooling at time $`t`$ as
$$r_{\mathrm{cool}}=r_{\mathrm{CF}}\left[1+\frac{t-t_{\mathrm{coll}}}{t_{\mathrm{cool}}(r_{\mathrm{CF}})}\right]^{1/2\beta }.$$
Since the cooling time of the first hot gas to cool is comparable to the free-fall time ($`\tau _0`$ about 1), we should expect the initial Mach number of the cooling flow, $`\mathcal{M}_{\mathrm{i},0}`$, to be about 1. We treat this as a parameter of the model. The time dependence of the Mach number is then determined from the expectation that the flow velocity scales with time as $`r_{\mathrm{cool}}/t_{\mathrm{cool}}(r_{\mathrm{cool}})\propto r_{\mathrm{cool}}^{1-2\beta }`$, giving
$$\mathcal{M}_\mathrm{i}=\mathcal{M}_{\mathrm{i},0}\left[1+\frac{t-t_{\mathrm{coll}}}{t_{\mathrm{cool}}(r_{\mathrm{CF}})}\right]^{(1-2\beta )/2\beta }.$$
(12)
Using this and (8) in (10) enables us to determine the factor by which the mass of a nuclear black hole grows as a function of the time. Differentiating the result with respect to the time gives the accretion rate and hence the luminosity of the black hole. Of course, the black hole stops growing when the hot gas is exhausted ($`r_{\mathrm{cool}}>r_{200}`$).
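The growth factor follows from integrating equation (10) with the time-dependent Mach number of equation (12). A minimal numerical sketch (Python; the function and all parameter values in the example call are illustrative, not taken from the simulations described below):

```python
import numpy as np

def black_hole_growth(M_seed, T6, Lambda23, mach0, beta, t_cool_cf, t_end, n=10000):
    """Integrate eq. (10) with the Mach number evolution of eq. (12).

    Times are in years, measured from the collapse; t_cool_cf is the
    cooling time at r_CF and t_end the time for which hot gas remains."""
    t = np.linspace(0.0, t_end, n)
    mach = mach0 * (1.0 + t / t_cool_cf) ** ((1.0 - 2.0 * beta) / (2.0 * beta))
    t_bh = 5.1e8 * Lambda23 / (T6 * mach**1.5)     # eq. (9)
    ln_growth = np.trapz(1.0 / t_bh, t)            # eq. (10)
    return M_seed * np.exp(ln_growth), mach[-1]

# Illustrative values only: a hot, early collapse followed for 10^9 yr.
M_final, mach_final = black_hole_growth(M_seed=1.0, T6=10.0, Lambda23=1.0,
                                        mach0=0.9, beta=0.8, t_cool_cf=1e8, t_end=1e9)
print(f"growth factor {M_final:.3g}, final Mach number {mach_final:.2f}")
```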
The low level of emission from nuclear black holes in nearby galaxies suggests that the radiative efficiency of accretion is much lower now than it was in quasars (Fabian & Rees 1995; di Matteo et al 1999). Several models, including advection-dominated accretion flows (Narayan & Yi 1995), advection-dominated inflow-outflow solutions (Blandford & Begelman 1999) and other gas processes (Stone, Pringle & Begelman 1999), suggest that this is due to a reduced gas supply. Despite indications that advection-dominated accretion discs do not account for the behaviour of some nearby massive black holes (Di Matteo et al 1999), for the sake of definiteness, we base our model on them. Thus the radiative efficiency is assumed to be high as long as the accretion rate exceeds about $`1.3\alpha _\mathrm{d}^2\dot{M}_{\mathrm{Edd}}`$ (Esin, McClintock & Narayan 1997). When the accretion rate falls below this value, the radiative efficiency of the nuclear accretion disc is assumed to plummet, in effect turning off emission from the active nucleus. This gives a critical growth time,
$$t_{\mathrm{BH},\mathrm{c}}=\frac{M_\mathrm{h}}{1.3\alpha _\mathrm{d}^2\dot{M}_{\mathrm{Edd}}}\approx 3.5\times 10^7\alpha _\mathrm{d}^{-2}\mathrm{y},$$
(13)
and when the growth time (8) exceeds $`t_{\mathrm{BH},\mathrm{c}}`$, a quasar turns off, although the nuclear black hole can continue to grow. $`\alpha _\mathrm{d}`$ is treated as a parameter of the model.
Our galaxy formation model uses the block model (Cole & Kaiser 1988) to simulate merger trees. The smallest blocks have a mass of $`1.5\times 10^{10}\mathrm{M}_{\odot }`$. In order to simulate the presence of seed black holes, a black hole of (arbitrary) unit mass is associated with each of the smallest blocks. When a block collapses, the black holes associated with all merging sub-blocks are assumed to merge into a single black hole. This makes the mass of a seed black hole proportional to the mass of its halo up to the stage that it starts to grow by Bondi accretion.
A nuclear black hole can only grow by Bondi accretion when it forms in a collapse that produces some hot gas. Such systems are identified with normal galaxies in our model. The model does not allow mergers between normal galaxies, so that normal galaxies only grow in collapses where they accrete dwarf galaxies and gas. Any collapse involving more than one normal galaxy is taken to form a group or cluster of galaxies. Since nuclear black holes are associated with galaxies rather than a group or cluster, we do not track the growth of black holes for galaxies in these systems. In short, black holes can only grow by Bondi accretion to form quasars in the “normal” galaxies of our simulation. McLure et al (1998) find from optical imaging that most quasars do occur in elliptical, or spheroidal, galaxies, so our model should apply well to such objects. It may not be directly relevant to present-day low luminosity Seyfert galaxies, which tend to be in spiral galaxies, but these can have undergone a hot-phase era at an earlier stage.
Note that, since the black hole masses are in arbitrary units, the quasar luminosities are too. Adopting a luminosity scale and radiative efficiency for the quasars will fix the scale of the black hole masses. On the bolometric magnitude scale used in the plots in this paper, $`-15`$ corresponds to an accretion rate of $`10^{-4}`$ black hole mass units per year. If the radiative efficiency is 0.1 and the mass unit is taken as $`10^4\mathrm{M}_{\odot }`$, this would correspond to a bolometric luminosity of about $`6\times 10^{45}\mathrm{erg}\mathrm{s}^{-1}`$.
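The luminosity scale quoted above is a one-line conversion. A quick check (Python; the constants are standard cgs values and the calibration is the one given in the text):

```python
M_SUN = 1.989e33          # g
YEAR = 3.156e7            # s
C_LIGHT = 2.998e10        # cm / s

mdot_units_per_yr = 1e-4  # accretion rate corresponding to bolometric magnitude -15
mass_unit = 1e4 * M_SUN   # adopted mass of one black hole mass unit
efficiency = 0.1          # assumed radiative efficiency

mdot = mdot_units_per_yr * mass_unit / YEAR          # g / s
L_bol = efficiency * mdot * C_LIGHT**2
print(f"{L_bol:.1e} erg/s")                          # ~6e45 erg/s, as quoted in the text
```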
## 5 Model results
For the purpose of the simulations we have taken an open CDM cosmology, with $`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, density parameter $`\mathrm{\Omega }=0.3`$, baryon density parameter $`\mathrm{\Omega }_\mathrm{b}=0.075`$ and $`\sigma _8=1`$.
Because nuclear black holes grow exponentially in our model, the results are quite sensitive to the key parameters, $`\tau _0`$, $`\mathcal{M}_{\mathrm{i},0}`$ and $`\alpha _\mathrm{d}`$. Using parameter values that favour high growth produces such massive nuclear black holes by the present day that the most recent quasars are inevitably the most luminous. At the other extreme, parameter values can easily be found that result in essentially no growth of the seed black holes. The range of parameters giving substantial, but not excessive, black hole growth is relatively narrow (although it covers a substantial part of the physically reasonable parameter range due to the correlated effects of the parameters). Models presented here are chosen to lie in that range.
Fig. 1 shows distributions of total accretion luminosity for several redshifts for the case $`\tau _0=1`$, $`\mathcal{M}_\mathrm{i}=0.9`$ and $`\alpha _\mathrm{d}=0.1`$, while Fig. 2 shows the same thing for $`\alpha _\mathrm{d}=0.15`$. Since $`\alpha _\mathrm{d}`$ only affects the critical growth time (equation 13), black hole masses and accretion rates are identical in the two models. Differences between them are entirely due to the earlier onset of the ADAF phase for the model of Fig. 2. This effect is greatest at low redshifts, since a greater proportion of the active nuclei is then old and cooling flows in the older collapsed systems have lower Mach numbers (equation 12). A black hole accreting from a cooling flow with a low Mach number has a longer growth time (equation 8) and so is more likely to be an ADAF when $`\alpha _\mathrm{d}`$ is increased. This accounts for the substantial reduction in the numbers of luminous active nuclei at low redshifts between the models of Fig. 1 and Fig. 2.
The model in Fig. 3 is the same as that of Fig. 1, except that the initial Mach number of the cooling flow is $`\mathcal{M}_\mathrm{i}=1`$. Increasing the Mach numbers of the cooling flows increases the nuclear accretion rate, resulting in greater black hole growth and higher nuclear luminosities. This effect can be seen in Fig. 3, where, for the same seed mass, the most luminous quasars are more numerous at all redshifts.
Finally, Fig. 4 shows the bolometric luminosity distributions for a model with $`\tau _0=0.9`$, $`\mathcal{M}_\mathrm{i}=1`$ and $`\alpha _\mathrm{d}=0.1`$. This is most readily compared to the model of Fig. 3. The effect of changing $`\tau _0`$ is more complicated than that of the other two parameters, but its main influence on black hole growth is through the cooling time at the inner edge of the notional hot atmosphere. $`\tau _0`$ sets the ratio of the cooling time to the free-fall time at $`r_{\mathrm{CF}}`$, the inner edge of the notional hot atmosphere, so that reducing it reduces the cooling time there, $`t_{\mathrm{cool}}(r_{\mathrm{CF}})`$. This cooling time sets the timescale for the evolution of the Mach number of the cooling flow (equation 12). Reducing $`t_{\mathrm{cool}}(r_{\mathrm{CF}})`$ causes the Mach number of the cooling flow to decrease more quickly, reducing the overall growth of the nuclear black holes and hence their luminosities.
The bolometric luminosity is, essentially, just the total accretion rate of the black holes and not likely to be a good measure of the visible luminosity of the disc. Despite its shortcomings, for the sake of definiteness, we use the thermal disc model (Shakura & Sunyaev 1973) to estimate the visible luminosity. This gives the emitted spectrum
$$P_\nu \propto \nu ^{1/3}M_\mathrm{h}^{2/3}\dot{M}_\mathrm{h}^{2/3},$$
for frequency $`\nu `$. The resulting “visible” luminosity functions are plotted in Fig. 5, for the quasar formation model of Fig. 1 ($`\tau _0=1`$, $`\mathcal{M}_\mathrm{i}=0.9`$, $`\alpha _\mathrm{d}=0.1`$).
To compare these to the observed quasar luminosity functions (Boyle et al 1988), we need to convert our arbitrary magnitude scale to absolute blue magnitude. Based on the intensity spectrum of the X-ray background, Fabian & Iwasawa (1999) argue that 85 per cent of the accretion power of quasars is absorbed; only ten per cent is seen without some obscuration at 1 keV. If so, then the number densities in Fig. 5 (and preceding Figs) should be reduced by about a factor 10. In that case, adding 2 – 3 to our arbitrary magnitudes to convert to $`M_\mathrm{B}`$ gives rough agreement between our number densities and those of Boyle et al (1988). With this conversion, our luminosity functions are too flat below about $`M_\mathrm{B}=-27`$ and probably too steep above.
The sharp cut offs in the model luminosity functions are largely due to our assumption that the mass of a seed black hole is proportional to the mass of the dark halo in which it forms. A more realistic model for the seed black holes would give them a distribution of masses. In effect, this distribution would be convolved with the luminosity functions, making them fit the observed luminosity functions better.
The redshift dependence of our model is also only in rough agreement with the observed quasar luminosity functions. The greatest discrepancy is the absence in our model of luminous active nuclei at $`z=2`$ and earlier. While this is affected to some extent by the collapse model (i.e. cosmology), in large part it is due to the time required to grow massive black holes. From equation (8), the growth rate is maximized by minimizing $`\mathrm{\Lambda }(T_\mathrm{i})/T_\mathrm{i}`$, which generally means in the hottest collapses. The virial temperature of a halo of mass $`M`$ collapsing at time $`t_{\mathrm{coll}}`$ scales as $`(M/t_{\mathrm{coll}})^{2/3}`$, so that, for a given mass, the earliest collapses give the most growth. However, few massive galaxies collapse early, and, since it requires several growth times (equation 9) to produce a massive black hole, very few of these form early in our model.
Fig. 6 is a contour diagram of the distribution of the masses of the nuclear black holes vs spheroid mass at $`z=0`$ in the model of Fig. 1. Magorrian et al (1998) and Richstone et al (1998) find that black hole mass is proportional to blue luminosity of the spheroid (bulge). However, there is a substantial spread in this relationship and the data would be consistent with a substantially steeper relationship for the more massive bulges. This is roughly consistent with the ridge and the lower spur at high spheroid mass of the distribution shown here. However, there is no evidence in the data for the extension to high black hole mass for spheroids of about $`10^{11}\mathrm{M}_{\odot }`$ in the model.
Taking $`10^4`$ black hole mass units to correspond to a spheroid mass of $`10^{11}\mathrm{M}_{\odot }`$ and using $`M_\mathrm{h}=0.005M_{\mathrm{spheroid}}`$ (Richstone et al 1998) gives the conversion factor of $`5\times 10^4\mathrm{M}_{\odot }`$ per black hole mass unit. In that case, a bolometric magnitude of $`-15`$ in the Figs would correspond to a bolometric luminosity of about $`3\times 10^{46}\mathrm{erg}\mathrm{s}^{-1}`$.
We can also compare the results of Fig. 1 to the quasar X-ray luminosity function (Miyaji, Hasinger & Schmidt 1998). Using the black hole mass calibration from above and a fixed bolometric correction of about 50 for the 0.5–2 keV X-ray luminosity (Elvis et al 1994; Fabian & Iwasawa 1999) means that a magnitude of $`-15`$ in Fig. 1 corresponds to a 0.5–2 keV X-ray luminosity of about $`6\times 10^{44}\mathrm{erg}\mathrm{s}^{-1}`$. Assuming that absorption reduces the X-ray luminosity function by a factor of about 10, as above, there is rough agreement between the results in Fig. 1 and the observed X-ray luminosity function of Miyaji et al (1998). However, the fit suffers from essentially the same problems that we found for the visible luminosity function. As in that case, the most serious problem is the lack of high redshift quasars.
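The calibrations used in the two preceding paragraphs can be collected in one place. The following sketch (Python) simply applies those numbers; the variable names are ours and nothing beyond the quoted conversions is assumed.

```python
# Unit conversions quoted above: black hole mass units -> solar masses, and the
# model's arbitrary magnitude scale -> bolometric and 0.5-2 keV luminosities.
mass_unit_in_Msun = 5e4            # 1 mass unit = 5e4 Msun (from M_h = 0.005 M_spheroid)
L_bol_at_mag_minus15 = 3e46        # erg/s, bolometric luminosity at magnitude -15
bol_correction_xray = 50.0         # bolometric correction for the 0.5-2 keV band
absorption_factor = 10.0           # ~90 per cent of quasars assumed obscured

M_bh_at_1e4_units = 1e4 * mass_unit_in_Msun            # 5e8 Msun
L_xray = L_bol_at_mag_minus15 / bol_correction_xray    # 6e44 erg/s
print(f"10^4 mass units       -> {M_bh_at_1e4_units:.1e} Msun")
print(f"magnitude -15 (bol.)  -> {L_bol_at_mag_minus15:.1e} erg/s")
print(f"0.5-2 keV luminosity  -> {L_xray:.1e} erg/s")
print(f"number densities reduced by a factor ~{absorption_factor:.0f} for obscuration")
```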
## 6 Feedback effects
Ciotti & Ostriker (1997) argue that feedback from a quasar will stifle a cooling flow by heating the cooling gas. By raising the gas temperature and reducing its density, this could also dramatically reduce the Bondi accretion rate. However, using Ferland’s (1996) CLOUDY and the quasar spectra of Laor et al (1997), we find that the Compton temperature of radio-loud and radio-quiet quasars is approximately one and two million K, respectively. This is cooler than the virial temperature of most of the systems that form quasars. Furthermore, the gas temperature rises inward in the Bondi accretion flow, so that Compton feedback from the quasar is likely, at best, to cool the accreting gas. If Compton cooling is significant, then it will almost certainly increase the nuclear accretion rate, possibly causing it to approach the Eddington limit (Fabian & Crawford 1990).
If Compton cooling reduces the temperature of gas near to the nucleus, the reduction in pressure will result in inflow at speeds comparable to the sound speed in the uncooled gas. Since the flow within the Bondi radius is already roughly sonic, the accretion rate will not be altered dramatically unless Compton cooling is effective beyond the Bondi radius. Taking the Compton cooling time as $`t_\mathrm{C}=3\pi m_\mathrm{e}c^2r^2/(\sigma _\mathrm{T}L_\mathrm{h})`$, where $`m_\mathrm{e}`$ is the electron mass, $`\sigma _\mathrm{T}`$ is the Thomson cross section and $`L_\mathrm{h}`$ is the nuclear luminosity, and taking the flow time from the Bondi radius, $`r_\mathrm{B}=GM_\mathrm{h}/s_\mathrm{i}^2`$, as $`t_{\mathrm{BF}}=r_\mathrm{B}/s_\mathrm{i}`$, we have
$$\frac{t_\mathrm{C}}{t_{\mathrm{BF}}}=\frac{3\pi m_\mathrm{e}c^2GM_\mathrm{h}}{\sigma _\mathrm{T}L_\mathrm{h}s_\mathrm{i}}=\frac{3m_\mathrm{e}cL_{\mathrm{Edd}}}{4m_\mathrm{H}s_\mathrm{i}L_\mathrm{h}}\approx 0.8T_6^{-1/2}\left(\frac{L_\mathrm{h}}{L_{\mathrm{Edd}}}\right)^{-1},$$
where the nuclear luminosity has been put in terms of the Eddington luminosity, $`L_{\mathrm{Edd}}`$. This shows that Compton cooling will be significant outside the Bondi radius for most Eddington-limited active nuclei. In terms of our model, for the accretion rate (7), if the radiative efficiency of the nuclear accretion disc is $`\eta =0.1\eta _1`$ (and $`K=2.5`$), then
$$\frac{t_\mathrm{C}}{t_{\mathrm{BF}}}\approx \frac{m_\mathrm{e}\mathrm{\Lambda }}{\eta \mathcal{M}_\mathrm{i}^{3/2}\sigma _\mathrm{T}s_\mathrm{i}^3}\frac{n_\mathrm{e}n_\mathrm{H}}{\rho ^2}\approx 9\eta _1^{-1}\mathcal{M}_\mathrm{i}^{-3/2}T_6^{-3/2}\mathrm{\Lambda }_{23}.$$
We find that the ionization parameter of the gas at the accretion radius, $`\xi =L/nr_\mathrm{B}^2\approx 4\pi \eta m_\mathrm{p}c^2s_\mathrm{i}\approx 3\times 10^4.`$ Under these circumstances, nuclear radiation keeps the gas highly photoionized, so that the effective cooling function is close to pure bremsstrahlung. The Compton cooling time is then less than the infall time for gas temperatures exceeding about $`3\times 10^6`$ K when the Mach number $`\mathcal{M}_\mathrm{i}\sim 1`$. This means that the accretion rate may well exceed the Bondi rate in the cases when it would be highest. No allowance has been made for this in our model.
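The Compton-cooling estimate above is easy to check numerically. The following sketch evaluates $`t_\mathrm{C}/t_{\mathrm{BF}}`$ from the expression given; the mean molecular weight $`\mu =0.6`$ and $`\gamma =5/3`$ are assumptions used only to fix the sound speed.

```python
# Rough numerical check of t_C/t_BF ~ 0.8 T_6^{-1/2} (L_h/L_Edd)^{-1}.
import math

m_e, m_H, c = 9.109e-28, 1.673e-24, 2.998e10     # cgs constants
k_B = 1.381e-16
mu, gamma = 0.6, 5.0 / 3.0                       # assumed gas parameters

def tC_over_tBF(T6, L_over_LEdd):
    """t_C / t_BF at the Bondi radius for gas of temperature T6 (in 10^6 K)."""
    s_i = math.sqrt(gamma * k_B * T6 * 1e6 / (mu * m_H))   # sound speed
    # t_C/t_BF = 3 m_e c L_Edd / (4 m_H s_i L_h)
    return 3.0 * m_e * c / (4.0 * m_H * s_i) / L_over_LEdd

for T6 in (1.0, 3.0, 10.0):
    print(f"T = {T6:4.1f}e6 K, L_h = L_Edd : t_C/t_BF = {tC_over_tBF(T6, 1.0):.2f}")
```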
A further effect which may influence some objects is feedback due to radio jets. If the central engine produces jets or outflows which deposit significant energy near the Bondi radius, then the accretion rate can be much reduced. It is not clear how such an effect should be included in our model at this stage. If, as suggested by McLure et al (1998), the radio loud quasars are those with the largest black holes, then feedback from radio jets might be responsible for limiting the growth of the black holes in these systems.
Finally, it has also been suggested that a wind might expel the surrounding gas when a quasar becomes sufficiently luminous (Silk & Rees 1998; Fabian 1999). This would lead to a much closer correlation between bulge and remnant mass.
## 7 Discussion
The implementation of the quasar formation model used here has a number of shortcomings. First, we only follow the growth of black holes in isolated galaxies. This discounts growth in groups and clusters. Because of the high gas temperature, a central galaxy in a group could potentially accrete very rapidly. However, the block model gives no information about the spatial arrangement of collapsing objects, so we are unable to identify central galaxies in groups and clusters. We may therefore be ignoring the most luminous quasars and the most massive black holes.
The truncated isothermal potentials used in the model lead to gas density distributions that are more peaked than in more realistic collapse models (Navarro, Frenk & White 1997). This affects the time development of the cooling flows, changing the evolution of the Mach number, and so would affect the time dependence of the nuclear accretion rate (equation 7). However, the cooling time of the hot gas is comparable to the free-fall time in normal galaxy collapses, so we should still expect immediate onset of a cooling flow with initial Mach number close to 1 in most cases. The initial cooling time controls the rate of change of the Mach number while it is close to 1 (when the growth rate is largest) and this is comparable to the collapse time. Thus, we do not expect such a change to have a dramatic effect on the results of the simulation. Beyond this, it is not clear how a more realistic collapse model would alter our results.
In our simple cooling flow model, the nuclear accretion rate is insensitive to details of the galactic potential. However, this may change in a more realistic model, such as one in which a central star cluster is formed. In that case, matter deposited by the cooling flow beyond the Bondi radius could significantly alter the central potential and so affect the nuclear accretion rate.
The handling of abundances in our galaxy formation model is very crude, only allowing for the effects of Type II supernovae and treating the gas as homogeneous. Increasing the abundance increases the cooling function, hence the growth time (equation 8), and so would reduce black hole masses and quasar luminosities. This may be significant, since the abundances in some quasars appear to be very high (e.g. Hamann & Ferland 1993; Ferland et al 1996). On the other hand, the cooling function is considerably less sensitive to abundance for temperatures exceeding about $`3\times 10^6`$ K and the gas temperature exceeds this value in most of the systems that would be quasars, so we should not expect this to have a major effect on the outcome of the model.
As discussed in section 5, our assumption that the mass of a seed black hole is proportional to the mass of the halo in which it resides is too simplistic. Given that Seyfert nuclei can occur in disc galaxies, it seems likely that some active nuclei are not fuelled by hot gas (or, at least, not by gas from a hot halo resulting from the collapse of the protogalaxy). A wide variety of other mechanisms for fuelling active nuclei have been proposed, including starburst activity, interactions between galaxies and the effects of a bar. There is also some cold gas within the region that is effectively drained through a cold accretion disc. Some or all of these gas sources may fuel seed black holes, in which case, they could have a wide range of masses, depending on details of the history of each galaxy. Such effects would be compounded with those due to the processes described in Rees (1984), that are also likely to lead to a range of seed masses.
If a large proportion of active nuclei are heavily absorbed, then mergers between galaxies may affect their luminosity by disturbing the absorbing material, which could alter the luminosity in either direction. In other words, it is possible that much of the evolution of observed optical quasars is due to changes in the obscuring material rather than changes in accretion rate.
The worst shortcoming of our model is its failure to produce quasars at high redshifts. For the cosmological parameters used, a $`10^{12}\mathrm{M}_{\odot }`$ halo would have collapsed from a $`3\sigma `$ peak at about $`10^9`$ y ($`z\approx 7`$). The gas temperature in a halo of mass $`10^{12}M_{12}\mathrm{M}_{\odot }`$, collapsing at $`10^9t_9`$ y, is
$$T\approx 3.3\times 10^6M_{12}^{2/3}t_9^{-2/3}\beta ^{-1}\mathrm{K},$$
for $`\beta `$ as defined in equation (11), so that the growth time for a nuclear black hole in such a system would have been
$$t_{\mathrm{BH}}\approx 1.5\times 10^8\beta \mathcal{M}_\mathrm{i}^{-3/2}M_{12}^{-2/3}t_9^{2/3}\mathrm{\Lambda }_{23}\mathrm{y}.$$
The cooling function depends on abundances and ionization (Böhringer & Hensler 1989), but $`\mathrm{\Lambda }_{23}`$ lies in the range 0.5 – 2 for the relevant temperatures. Thus, an early collapsing protogalaxy would have had roughly 10 $`e`$-folding times to form a massive nuclear black hole before $`z\approx 3`$. This shows that the lack of high redshift quasars is not a fundamental shortcoming of our model, although details of the model would clearly need to be modified in order to account for them.
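A rough numerical restatement of this argument is given in the sketch below: with the e-folding time just quoted, an early-collapsing halo has of order ten growth times available before $`z\approx 3`$. The elapsed time of $`1.5\times 10^9`$ y is our own illustrative assumption.

```python
# Black hole growth budget for an early-collapsing protogalaxy.
def t_bh_efold(beta=1.0, mach=1.0, M12=1.0, t9=1.0, lam23=1.0):
    """e-folding time in years, t_BH ~ 1.5e8 beta Mach^-3/2 M_12^-2/3 t_9^2/3 Lambda_23."""
    return 1.5e8 * beta * mach**-1.5 * M12**(-2.0 / 3.0) * t9**(2.0 / 3.0) * lam23

t_efold = t_bh_efold()
available = 1.5e9            # yr between collapse at ~1e9 yr and z ~ 3 (assumed)
print(f"e-folding time      : {t_efold:.2e} yr")
print(f"e-foldings to z ~ 3 : {available / t_efold:.1f}")   # roughly 10
```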
The failure of the model to account for the observed relationship between black hole and spheroid mass (Richstone et al 1998) is due primarily to the exponential black hole growth, which tends to make the larger black holes grow very large. As discussed in the previous section, different feedback mechanisms may enhance or limit the growth rate. If feedback from a radio source limits the growth of the most massive black holes, then this, rather than the fuel supply, might be the cause of the relationship between spheroid and black hole mass. Note that we have ignored many effects in the nuclear accretion disc that could break the simple connection we have assumed between the Bondi accretion rate and the nuclear accretion rate.
Finally, we have attempted to account for the masses of the remnant nuclear black holes and the evolution of the quasars with a single mechanism. If the processes that form the seed black holes account for a substantial part of their mass, or, if other gas sources also play a significant role in the fuelling of active nuclei, then this will not be possible. In that case, accretion of hot gas may simply be one of several fuel sources for quasars. Nevertheless, the results in section 3 show that hot gas is potentially a significant fuel source for AGNs.
Conditions are most favourable for quasar formation in our model when the hot gas supply is greatest, i.e. soon after the collapse of a large protogalaxy. In hierarchical collapse models, a collapse will generally include the infall of gas and other galaxies, so that the presence of a close companion of comparable luminosity may be interpreted as an indication of recent collapse. Thus, the results that have been interpreted as showing that gravitational interactions can drive quasars (e.g. Bahcall et al 1997) may also be interpreted as indicating that quasars form in systems that have collapsed recently, as expected in the present model. For this purpose, the main difference between the predictions of the models is that no close companion is required in the case that quasars are fuelled by hot gas.
If quasars are fuelled by hot gas, then there should be substantial halos of hot gas around them. Since the gas temperature typically exceeds $`3\times 10^6`$ K, these should be detectable by their X-ray emission. The limited extent of the hot gas (comparable to the size of the dark halo) makes it hard to separate from the powerful nuclear X-ray emission of a quasar. Nevertheless, the angular resolution of Chandra should be sufficient to detect diffuse emission around some quasars out to redshifts of about 1.
## 8 Conclusions
Bondi accretion of the hot gas produced in the collapse of protogalaxies onto a seed population of nuclear black holes is sufficient to form and fuel quasars. A simple simulation shows that this model can account for the optical and X-ray luminosity functions of quasars for $`z<1.5`$, provided that about 90 percent of quasars are obscured. The simulation produces insufficient quasars at high redshifts and predicts a wider range of black hole masses in massive spheroids than has been found.
Hot gas formed in the collapse of large protogalaxies is likely to be the minimum source available for fuelling quasars and so can form the baseline above which other sources contribute. Our model directly confronts and includes problems related to the current fuel supply of massive black holes in elliptical galaxies. The details of the amplitude and evolution of hot gas as a fuel supply are sensitive to the presence of plausible feedback mechanisms, such as heating due to radio jets.
The Bondi accretion rate from the hot gas formed in the collapse of a protogalaxy can exceed the Eddington accretion rate in systems with virial temperatures exceeding about $`3\times 10^6`$ K, and can be enhanced by feedback due to Compton cooling in such systems. Thus, hot gas is an excellent fuel source for quasars.
## ACKNOWLEDGEMENTS
PEJN gratefully acknowledges the hospitality of the Institute of Astronomy, Cambridge, during part of this work. ACF thanks the Royal Society for support.
## Appendix A Bondi accretion from a cooling flow
We consider Bondi accretion by a massive black hole at the centre of a steady inhomogeneous cooling flow. We will show that, for the cooling flow model used here, the accretion rate onto the black hole depends on conditions at the edge of the steady cooling flow, but is insensitive to details of the gravitational potential between there and the central black hole. In particular, when the ratio of specific heats, $`\gamma =5/3`$, the accretion rate is completely independent of the intervening potential.
The cooling time of the gas is
$$t_{\mathrm{cool}}=\frac{p}{(\gamma -1)n_\mathrm{e}n_\mathrm{H}\mathrm{\Lambda }(T)},$$
where $`p`$ is the pressure, $`n_\mathrm{e}`$ the electron density, $`n_\mathrm{H}`$ the hydrogen (proton) number density, $`T`$ the temperature and $`\mathrm{\Lambda }(T)`$ the cooling function. The flow time of the gas is
$$t_{\mathrm{flow}}=\frac{r}{v},$$
where $`r`$ is the radius and $`v`$ the flow velocity (positive inward). For the Bondi solution (e.g. Shu 1991) at small $`r`$, $`v\propto r^{-1/2}`$, the density varies as $`r^{-3/2}`$ and the temperature as $`r^{-3(\gamma -1)/2}`$, so that the ratio of the cooling time to the flow time varies as
$$\frac{t_{\mathrm{cool}}}{t_{\mathrm{flow}}}\propto \frac{T}{\mathrm{\Lambda }(T)},$$
Since the temperature increases as $`r`$ decreases and, for the relevant temperatures and abundances, $`T/\mathrm{\Lambda }`$ is almost always an increasing function of $`T`$, this ratio increases with decreasing $`r`$. In a steady cooling flow $`t_{\mathrm{cool}}\approx t_{\mathrm{flow}}`$ (Fabian 1994), but as the flow comes under the influence of a central black hole $`t_{\mathrm{cool}}/t_{\mathrm{flow}}`$ will increase, eventually making cooling negligible. Thus, at sufficiently small $`r`$ cooling can be ignored and the flow asymptotes to the Bondi solution.
We begin by outlining the Bondi solution. For a steady, spherical flow, the mass flow rate (inward) is
$$\dot{M}=4\pi \rho vr^2=\mathrm{constant},$$
where $`\rho `$ is the gas density. The flow is adiabatic, so that
$$\frac{T}{\rho ^{\gamma -1}}=\frac{T_0}{\rho _0^{\gamma -1}},$$
where $`T_0`$ and $`\rho _0`$ are evaluated a long way from the black hole. Using these, we can express the flow speed as
$$v=\frac{\dot{M}}{4\pi \rho _0r^2}\left(\frac{T_0}{T}\right)^{1/(\gamma -1)}.$$
(14)
Bernoulli’s theorem gives
$$H+\frac{1}{2}v^2-\frac{GM}{r}=H_0,$$
(15)
where $`r`$ is the radius, $`H=\gamma p/[(\gamma -1)\rho ]`$ is the specific enthalpy of the gas, $`H_0`$ is the specific enthalpy at temperature $`T_0`$ and $`M`$ is the mass of the black hole. Differentiating (15) with respect to $`r`$ and using (14) to find $`dv/dr`$ gives
$$\left(H-\frac{v^2}{\gamma -1}\right)\frac{1}{T}\frac{dT}{dr}=\frac{2v^2}{r}-\frac{GM}{r^2},$$
so that at the sonic point, $`r_\mathrm{s}`$, where $`v_\mathrm{s}^2=(\gamma -1)H_\mathrm{s}`$, we must have
$$v_\mathrm{s}^2=\frac{GM}{2r_\mathrm{s}}.$$
Using these results in (15) gives
$$\frac{GM}{r_\mathrm{s}}=\frac{4(\gamma -1)}{5-3\gamma }H_0,$$
enabling us to evaluate all quantities at the sonic point in terms of $`T_0`$, $`\rho _0`$ and $`M`$. Evaluating the mass flow rate at the sonic point then gives the Bondi accretion rate as
$$\dot{M}_\mathrm{h}=\pi \rho _0\frac{(GM)^2}{s_0^3}\left(\frac{5-3\gamma }{2}\right)^{\frac{3\gamma -5}{2(\gamma -1)}},$$
(16)
where $`s_0=\sqrt{\gamma p_0/\rho _0}`$ is the speed of sound in gas a long way from the black hole. The last factor in (16),
$$q(\gamma )=\left(\frac{5-3\gamma }{2}\right)^{\frac{3\gamma -5}{2(\gamma -1)}},$$
is finite for both $`\gamma \rightarrow 1`$, when $`q\rightarrow e^{1.5}`$, and $`\gamma \rightarrow 5/3`$, when $`q\rightarrow 1`$.
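The quoted limits of $`q(\gamma )`$ are easily verified numerically, as in the following short sketch.

```python
# Numerical check of q(gamma) and its limits q -> e^1.5 (gamma -> 1), q -> 1 (gamma -> 5/3).
import math

def q(gamma):
    return ((5.0 - 3.0 * gamma) / 2.0) ** ((3.0 * gamma - 5.0) / (2.0 * (gamma - 1.0)))

for g in (1.001, 1.2, 1.4, 5.0 / 3.0 - 1e-6):
    print(f"gamma = {g:.4f} : q = {q(g):.4f}")
print(f"e^1.5 = {math.exp(1.5):.4f}")
```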
Now consider a cooling flow. Thermal instability causes an inhomogeneous cooling flow to deposit gas throughout the flow at a rate that is conveniently expressed as (Nulsen 1986, 1988; values there assume $`\gamma =5/3`$)
$$\xi \frac{(\gamma -1)\rho R}{\gamma p},$$
where $`R`$ is the power radiated per unit volume by the gas, $`\rho `$ is now the mean density of the gas and $`\xi `$ is dimensionless. The value of $`\xi `$ depends on details of the density distribution and physical behaviour of the inhomogeneous gas, and is typically of order unity. We treat it as a constant parameter in what follows, since this leads to the simplest physically useful flow models (this approximation is exact for the isothermal cooling flow models; Nulsen 1998). The mass conservation equation for a steady flow is then
$$\frac{1}{r^2}\frac{d}{dr}\rho vr^2=\xi \frac{(\gamma -1)\rho R}{\gamma p}.$$
(17)
The corresponding energy equation is (Nulsen 1988)
$$\frac{1}{\gamma -1}pv\frac{d}{dr}\mathrm{ln}\mathrm{\Sigma }=(1-\xi )R,$$
(18)
where $`\mathrm{\Sigma }=T/\rho ^{\gamma -1}`$, with $`T`$ being the temperature corresponding to the mean density ($`\rho `$) at pressure $`p`$. $`\mathrm{\Sigma }`$ determines the effective entropy of the inhomogeneous gas mixture. Eliminating $`R`$ between these two equations and integrating gives
$$\frac{\mathrm{\Sigma }}{\mathrm{\Sigma }_\mathrm{i}}=\left(\frac{\dot{M}}{\dot{M}_\mathrm{i}}\right)^{\gamma (1-\xi )/\xi },$$
(19)
where $`\dot{M}_\mathrm{i}`$ and $`\mathrm{\Sigma }_\mathrm{i}`$ are evaluated at $`r_\mathrm{i}`$, a fixed point in the cooling flow, which we will take to be at the outer edge of the steady flow, well outside the region that is perturbed by the black hole. Thus, the cooling flow model enforces a fixed relationship between the entropy and $`\dot{M}`$ throughout the steady flow.
The derivation of equation (19) makes no reference to the potential (or the momentum equation) and it applies at small $`r`$, in the Bondi flow where cooling is negligible. The quantities $`\rho _0`$ and $`T_0`$ (i.e. $`s_0`$) in the Bondi accretion rate (16) are no longer well defined, but, since it is constant, the entropy, $`\mathrm{\Sigma }_0=T_0/\rho _0^{\gamma 1}`$, is. Furthermore, (19) requires
$$\frac{\mathrm{\Sigma }_0}{\mathrm{\Sigma }_\mathrm{i}}=\left(\frac{\dot{M}_\mathrm{h}}{\dot{M}_\mathrm{i}}\right)^{\gamma (1-\xi )/\xi },$$
giving
$$\rho _0=\left(\frac{T_0}{\mathrm{\Sigma }_\mathrm{i}}\right)^{\frac{1}{\gamma -1}}\left(\frac{\dot{M}_\mathrm{h}}{\dot{M}_\mathrm{i}}\right)^{-\frac{\gamma (1-\xi )}{\xi (\gamma -1)}}.$$
We use this to eliminate $`\rho _0`$ in (16), then solve the resulting equation for $`\dot{M}_\mathrm{h}`$ and put $`\mathrm{\Sigma }_\mathrm{i}=T_\mathrm{i}/\rho _\mathrm{i}^{\gamma -1}`$, where $`T_\mathrm{i}`$ and $`\rho _\mathrm{i}`$ are the temperature and density at $`r_\mathrm{i}`$. After some algebra this gives
$$\dot{M}_\mathrm{h}=\left[\pi \rho _\mathrm{i}\frac{(GM)^2}{s_\mathrm{i}^3}q(\gamma )\right]^\kappa \dot{M}_\mathrm{i}^{1-\kappa }\left(\frac{T_0}{T_\mathrm{i}}\right)^{\frac{\kappa (5-3\gamma )}{2(\gamma -1)}},$$
(20)
where $`\kappa =\xi (\gamma -1)/(\gamma -\xi )`$.
This argument makes no reference to the gravitational potential in which the cooling flow takes place, but $`\dot{M}_\mathrm{h}`$ depends on details of the potential in two ways. First, gas properties at the edge of the cooling flow are affected to some extent by the potential ($`\dot{M}_\mathrm{i}`$ is governed largely by initial conditions in a collapse). Generally, we can assume that any disturbance to the potential is at $`r\ll r_\mathrm{i}`$ and has little effect on the cooling flow at $`r_\mathrm{i}`$. The second means of influence is through $`T_0`$, which is still not well defined. Despite this, we can expect $`T_0`$ to be comparable to the gas temperature at the point where the black hole starts to have an appreciable effect on the cooling flow. Since the influence of the black hole is felt outside the sonic point, this temperature will be comparable to the “virial” temperature at that place and, unless the galaxy potential is strongly non-isothermal, we can expect it to be comparable to $`T_\mathrm{i}`$. In general, the influence of the potential on $`\dot{M}_\mathrm{h}`$ is weak.
For the case of interest, $`\gamma =5/3`$ and $`\dot{M}_\mathrm{h}`$ does not depend on $`T_0`$, but is determined completely by the mass of the black hole and the gas properties near the outer edge of the cooling flow. This simple result comes about because, for $`\gamma =5/3`$, the Bondi accretion rate (16) depends on the gas properties through the entropy alone ($`\rho _0s_0^{-3}\propto \mathrm{\Sigma }_0^{-3/2}`$), so that it may be regarded as specifying a relationship between $`\mathrm{\Sigma }`$ and $`\dot{M}`$. The requirement (19) of the cooling flow specifies a second relationship between $`\mathrm{\Sigma }`$ and $`\dot{M}`$ which is only satisfied simultaneously by a unique $`\dot{M}`$.
For an inhomogeneous isothermal cooling flow, if $`\dot{M}\propto r^\eta `$, then equations (17) and (18) require that
$$\xi =\frac{2\gamma \eta }{3(\gamma -1)+\eta (\gamma +1)},$$
making
$$\kappa =\frac{2\eta }{3+\eta }.$$
We take $`\gamma =5/3`$ for the remainder of this section.
For an isothermal cooling flow, Nulsen (1998) has shown that
$$\rho _\mathrm{i}=Q\frac{3-\eta }{2}\left(\frac{\rho ^2}{n_\mathrm{e}n_\mathrm{H}}\right)\frac{v_\mathrm{i}kT_\mathrm{i}}{\mu m_\mathrm{H}r_\mathrm{i}\mathrm{\Lambda }(T_\mathrm{i})},$$
where $`v_\mathrm{i}`$ is the flow speed at $`r_\mathrm{i}`$ and $`\rho ^2/(n_\mathrm{e}n_\mathrm{H})`$ is constant for the temperatures of interest. The factor $`Q`$ is a constant that depends on details of the cooling function. For power laws, $`\mathrm{\Lambda }(T)\propto T^a`$, with $`a`$ in the range $`[-0.5,0.5]`$, $`Q`$ ranges from $`2.32`$ to $`2.93`$. Since $`d\mathrm{ln}\mathrm{\Lambda }/d\mathrm{ln}T`$ lies in about this range for the temperatures of interest, we use the representative value $`Q=2.5`$ in all models. Using the expression for $`\rho _\mathrm{i}`$ to eliminate the density in $`\dot{M}_\mathrm{i}=4\pi \rho _\mathrm{i}v_\mathrm{i}r_\mathrm{i}^2`$ and using the result to replace $`\dot{M}_\mathrm{i}`$ in (20) gives, after some further algebra,
$$\dot{M}_\mathrm{h}=\frac{(3-\eta )\pi QkT_\mathrm{i}GM\mathcal{M}_\mathrm{i}^{\frac{3}{2}}}{\mu m_\mathrm{H}\mathrm{\Lambda }(T_\mathrm{i})}\frac{\rho ^2}{n_\mathrm{e}n_\mathrm{H}}\left(\frac{2r_\mathrm{i}s_\mathrm{i}^2\mathcal{M}_\mathrm{i}^{\frac{1}{2}}}{GM}\right)^{\frac{3(1-\eta )}{3+\eta }},$$
(21)
where $`\mathcal{M}_\mathrm{i}=v_\mathrm{i}/s_\mathrm{i}`$ is the Mach number at $`r_\mathrm{i}`$.
Observations show that $`\eta \approx 1`$ in the well studied cluster cooling flows (Fabian 1994; Peres et al 1998) and this is the value that we use for our models, giving
$$\dot{M}_\mathrm{h}=\frac{2\pi QkT_\mathrm{i}GM\mathcal{M}_\mathrm{i}^{3/2}}{\mu m_\mathrm{H}\mathrm{\Lambda }(T_\mathrm{i})}\left(\frac{\rho ^2}{n_\mathrm{e}n_\mathrm{H}}\right).$$
(22)
We note that $`\eta `$ is not well determined and, since the factor in parentheses in (21) is usually large, a small change in $`\eta `$ can have a large effect on $`\dot{M}_\mathrm{h}`$. For the isothermal cooling flow model, the radial dependence of the Mach number is $`r^{(\eta -1)/2}`$, so that $`\mathcal{M}`$ is constant in our models. If $`\eta <1`$ ($`\eta >1`$), then $`\mathcal{M}`$ increases (decreases) inward. In general, thermal instability in a cooling flow is greater when the Mach number is larger (Balbus & Soker 1989), which will tend to limit the rise in the Mach number for $`\eta <1`$. This limits the error in $`\dot{M}_\mathrm{h}`$ in that case. For $`\eta >1`$ there is no limiting effect, so that, if $`\eta >1`$, (22) could substantially overestimate the nuclear accretion rate. If so, the rate of accretion of hot gas would be insufficient to account for quasars.
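To illustrate this sensitivity, the sketch below evaluates the factor in parentheses in (21) for a nominal galaxy; the fiducial values of $`r_\mathrm{i}`$, $`T_\mathrm{i}`$, the black hole mass and the Mach number are assumptions chosen only for illustration, not parameters fixed by the model.

```python
# Sensitivity of Mdot_h to eta: the last factor of equation (21),
# (2 r_i s_i^2 Mach^{1/2} / GM)^{3(1-eta)/(3+eta)}, for eta near 1.
import math

G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24     # cgs
mu, gamma = 0.6, 5.0 / 3.0
kpc, Msun = 3.086e21, 1.989e33

r_i, T_i, M_bh, mach = 10.0 * kpc, 3e6, 1e8 * Msun, 1.0   # assumed fiducial values

s_i = math.sqrt(gamma * k_B * T_i / (mu * m_H))
bracket = 2.0 * r_i * s_i**2 * math.sqrt(mach) / (G * M_bh)
print(f"bracket = {bracket:.3g}")

for eta in (0.9, 1.0, 1.1):
    factor = bracket ** (3.0 * (1.0 - eta) / (3.0 + eta))
    print(f"eta = {eta:.1f} : correction factor = {factor:.3g}")
```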
# 1 Unstable Type II D-Branes
## 1 Unstable Type II D-Branes
During the past two years Ashoke Sen has pioneered the study of non-BPS D-brane systems. (For reviews see .) In particular, he has focused on systems of coincident D-branes and anti-D-branes. The basic idea is that whereas a system of coincident D-branes (or anti-D-branes) would be a stable supersymmetric (BPS) configuration, a system with both branes and anti-branes is not. Each separately preserves half of the supersymmetries of the ambient background, but different halves are preserved in each case, so that when both are present, there is no unbroken supersymmetry. One manifestation of this fact is that the excitation spectrum of open strings connecting $`\mathrm{D}p`$-branes to $`\overline{\mathrm{D}p}`$-branes has the reversed GSO projection compared to ones connecting $`\mathrm{D}p`$-branes to $`\mathrm{D}p`$-branes (or $`\overline{\mathrm{D}p}`$-branes to $`\overline{\mathrm{D}p}`$-branes). This results in tachyon fields on the world volume, which signal an instability. When the tachyon fields roll to a minimum — in a Higgs-like manner — this represents annihilation of branes and anti-branes.
To be specific, consider a system of $`N`$ $`\mathrm{D}p`$-branes and $`N^{}`$ $`\overline{\mathrm{D}p}`$-branes all of which are coincident $`(p+1)`$-dimensional hyperplanes embedded in $`𝐑^{10}`$. The ground state of the $`\mathrm{D}p`$ \- $`\mathrm{D}p`$ open strings gives $`U(N)`$ gauge fields $`A`$ and the $`\overline{\mathrm{D}p}`$ \- $`\overline{\mathrm{D}p}`$ open strings give $`U(N^{})`$ gauge fields $`A^{}`$. The $`\mathrm{D}p`$ \- $`\overline{\mathrm{D}p}`$ open strings, on the other hand, give a bifundamental $`(N,N^{})`$ tachyon $`T`$. These fields can be written together as a “superconnection”
$$𝒜=\left(\begin{array}{cc}A& T\\ \overline{T}& A^{}\end{array}\right).$$
(1)
One issue is whether or not the branes and anti-branes can completely annihilate. The criterion, basically, is whether the total D-brane charge (which is conserved) is zero or not. Cancelling the $`\mathrm{D}p`$-brane charge requires $`N=N^{}`$, of course, but that is not the whole story. It is also necessary that the gauge bundles $`E`$ and $`E^{}`$ (associated to branes and anti-branes) should be topologically equivalent, $`E\cong E^{}`$. Otherwise, there is some lower-dimension D-brane charge, and such a D-brane would survive. To illustrate this consider the case of one $`\mathrm{D2}`$-brane and one $`\overline{\mathrm{D2}}`$-brane, which are wrapped on a $`T^2`$, and coincident in the other dimensions. The Wess-Zumino term of the $`\mathrm{D2}`$-brane world-volume action contains
$$\int (Ce^F)_3=\int _{R\times T^2}(C_3+C_1F),$$
(2)
where the $`C`$’s are $`RR`$ potentials. From this formula we see that the magnetic flux $`\int _{T^2}F`$ is a source of $`C_1`$, which means that it carries $`\mathrm{D0}`$-brane charge. Thus, for example, if the $`\mathrm{D2}`$-brane has flux giving one unit of $`\mathrm{D0}`$-brane charge and the $`\overline{\mathrm{D2}}`$-brane has no such flux, then the annihilation leaves a $`\mathrm{D0}`$-brane
$$\mathrm{D2}+\overline{\mathrm{D2}}\rightarrow \mathrm{D0}.$$
(3)
The world-volume theory that describes a coincident $`\mathrm{D}p+\overline{\mathrm{D}p}`$ system can be formulated in terms of the gauge fields and tachyons, where one imagines that all other modes have been integrated out. It is hard to make this explicit in a controlled manner, since the tachyon mass is generally string scale. Thus the discussion that follows is necessarily somewhat qualitative and heuristic. It does have the advantage of being very physical and intuitive, however. Analyses with better mathematical control lead to the same conclusions. One approach is to use conformal field theory methods, as described in Sen’s lecture. Another one is to use boundary-state techniques, as described in Gaberdiel’s lecture. In any case, working with gauge fields and tachyons, the world-volume theory has a tachyon potential $`V(T)`$, which must be invariant under the $`U(N)\times U(N^{})`$ gauge symmetry. Moreover, when $`N=N^{}`$, Sen argues that it should have minima that correspond to pure vacuum. The locus of minima, all of which are gauge equivalent, is given by $`T=T_0𝒰`$, where $`T_0`$ is a fixed positive real number and $`𝒰`$ is an arbitrary constant element of $`U(N)`$. At the minimum, the tachyon condensation energy should exactly cancel the energy of the D-branes
$$V(T_0𝒰)+2NT_{Dp}=0.$$
(4)
Here, $`T_{\mathrm{D}p}`$ is the tension of a single $`\mathrm{D}p`$-brane. Thus when $`E\cong E^{}`$ and $`T=T_0𝒰`$, the $`\mathrm{D}p+\overline{\mathrm{D}p}`$ system is equivalent to pure vacuum. What happens to the $`U(N)`$ gauge groups is not completely understood.
Let us now take $`N=N^{}=1`$ and consider a kink configuration of the tachyon field $`T`$. $`T`$ is complex, so let us consider $`\mathrm{Im}T=0`$ and $`\mathrm{Re}T=T_0\mathrm{tanh}(x/a)`$, where $`x`$ is one of the Cartesian coordinates on the branes. This describes a solitonic $`\mathrm{D}(p1)`$-brane of thickness $`a`$ concentrated in the vicinity of $`x=0`$. (The precise functional form is not important.) Since the vacuum manifold $`|T|=T_0`$ is a circle, and $`\pi _0(S^1)`$ is trivial, this D-brane has a real tachyon in its world volume and is unstable. This is just as well, since the stable D-branes of type II theories are believed to be known, and this one is not in the list. In fact, such unstable D-branes can be constructed for all “wrong” values of $`p`$ in type II theories. Stable D-branes exist for $`p`$ = even in the IIA theory and $`p`$ = odd in the IIB theory. The unstable ones occur for the other values of $`p`$. Sen has demonstrated that these unstable D-branes are useful for analyzing certain issues. My purpose in describing them here is to set the stage for an analogous construction, which will appear later.
## 2 Non-BPS Type I D0-Branes
Let me now review one of Sen’s constructions of a non-BPS stable $`\mathrm{D0}`$-brane in type I superstring theory. The construction we will consider is in terms of a tachyon kink in a D-string anti-D-string configuration. Recall that the type I D-string is actually the Spin (32)/$`𝐙_2`$ heterotic string continued to strong coupling. The continuation is reliable, because the string is BPS. A system of $`N`$ coincident $`D`$ strings has world volume gauge group $`O(N)`$. This can be understood as the subgroup of $`U(N)`$ on a set of type IIB D-strings that survives orientifold projection. In particular, for a single D-string the group is $`O(1)=𝐙_2`$. Even though there are no gauge fields in this case, the group matters. In particular, a D-string wrapped on a circular spatial dimension has possible Wilson lines $`W=\pm 1`$.
The 32 left-moving fermion fields $`\lambda ^A`$ on the D-string world-sheet arise as zero modes of D1 - D9 open strings. When wrapped on a circular dimension, the Wilson line encodes their periodicity
$$\lambda ^A(x+2\pi R)=W\lambda ^A(x).$$
(5)
Thus, for $`W=1`$, $`\lambda ^A`$ has zero modes, which satisfy a Clifford algebra, and D-string quantum states are gauge group spinors (with $`2^{15}`$ components).
Now consider a $`\mathrm{D1}+\overline{\mathrm{D1}}`$ pair wrapped on a circle. If one string has $`W=1`$ and the other one has $`W=-1`$, then the overall two-particle state is a gauge group spinor. Since the gauge group is not broken, this implies that complete annihilation is not possible. The tachyonic ground state of the open string connecting the D-string and the anti-D-string is real in this case. For the case of opposite Wilson lines that we are considering, the tachyon field is antiperiodic. Thus it has the Fourier series expansion
$$T=\sum _nT_{n+1/2}(t)\mathrm{exp}\left[i\left(\frac{n+1/2}{R}\right)x\right].$$
(6)
The mass of $`T_{n+1/2}`$, considered as a particle in $`9d`$, is
$$M_{n+1/2}^2=(n+1/2)^2/R^2-1/2.$$
(7)
The $`-1/2`$ term is the tachyonic mass-squared value (in string units) in 10d, as usual for an RNS string. From this formula we see that for $`R<1/\sqrt{2}`$, there is no tachyonic instability and the wrapped $`\mathrm{D1}+\overline{\mathrm{D1}}`$ pair does not annihilate. For $`R>1/\sqrt{2}`$, on the other hand, $`T_{\pm 1/2}`$ (and possibly other modes) are tachyonic. This means that the strings can annihilate. What results is a stable non-BPS $`\mathrm{D0}`$-brane, which is a gauge group spinor. It carries a conserved $`𝐙_2`$ charge. In this case, the $`𝐙_2`$ corresponds to the two conjugacy classes of Spin (32)/$`𝐙_2`$.
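A quick tabulation of the mode masses in (7) makes the critical radius explicit; the following sketch is only a numerical restatement of the formula above.

```python
# Mode masses M^2_{n+1/2} = (n+1/2)^2/R^2 - 1/2 (string units) of the
# antiperiodic tachyon on the wrapped D1 + anti-D1 pair.
import math

def mode_mass_squared(n, R):
    return (n + 0.5) ** 2 / R ** 2 - 0.5

R_c = 1.0 / math.sqrt(2.0)
for R in (0.5, R_c, 1.0, 2.0):
    masses = [mode_mass_squared(n, R) for n in range(-2, 2)]   # n = -2, ..., 1
    print(f"R = {R:.3f}: lowest M^2 = {min(masses):+.3f}  tachyonic: {min(masses) < 0}")
```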
At $`R=R_c=1/\sqrt{2}`$ and small string coupling constant $`g`$
$$M_{\mathrm{D0}}\approx 2\times 2\pi R_cT_{\mathrm{D1}}=\sqrt{2}/g.$$
(8)
Sen has argued that this is the leading small $`g`$ value of the type 1 $`\mathrm{D0}`$-brane mass for all $`R`$, though there are higher-order corrections. It has the usual $`1/g`$ factor that is characteristic of D-branes. Curiously, its mass differs from that of the type IIA $`\mathrm{D0}`$-brane by a factor of $`\sqrt{2}`$ (in leading order). In the S-dual heterotic theory the lightest gauge group spinor occurs at the first excited level in the perturbative spectrum. Presumably, the non-BPS $`\mathrm{D0}`$-brane of type I is this state continued to strong coupling.
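The leading-order mass in (8) can be checked directly; the sketch below uses the D-string tension $`T_{\mathrm{D1}}=1/(2\pi g)`$ in string units, which is the normalization implied by the equality in (8).

```python
# Leading-order check of the non-BPS type I D0-brane mass, M_D0 = sqrt(2)/g.
import math

def d0_mass(g):
    R_c = 1.0 / math.sqrt(2.0)
    T_D1 = 1.0 / (2.0 * math.pi * g)          # type I D-string tension (assumed units)
    return 2.0 * (2.0 * math.pi * R_c) * T_D1  # two wrapped strings of length 2*pi*R_c

g = 0.1
print(d0_mass(g), math.sqrt(2.0) / g)          # both ~ 14.14
```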
## 3 K-Theory Classification of D-Branes
Recall that a $`\mathrm{D}p+\overline{\mathrm{D}p}`$ system is characterized by a pair of vector bundles $`(E,E^{})`$ and a tachyon $`T`$, which is a section of $`E^{}E^{}`$. Complete annihilation should be possible if and only if $`EE^{}`$. This requires $`N=N^{}`$, in particular. Following an earlier suggestion by Moore and Minasian , Witten has argued that equivalence classes of pairs $`(E,E^{})`$ that can be related by brane annihilation and creation correspond to K-theory classes . So these are the mathematical objects that correspond to conserved D-brane charges.
For example, D-brane charges of the type IIB theory on $`𝐑^{10}`$ are given by
$$\stackrel{~}{K}(S^{9-p})=\{\begin{array}{cc}𝐙\hfill & p=\mathrm{odd}\hfill \\ 0\hfill & p=\mathrm{even}\hfill \end{array}.$$
(9)
This accounts for the RR charges of all stable type IIB D-branes. Note that the unstable D-branes (for $`p`$ = even) carry no conserved charges and do not show up in this classification.
In the case of type I theory, $`E`$ is an $`O(N+32)`$ bundle and $`E^{}`$ is an $`O(N)`$ bundle, so that the total RR 9-brane charge is 32. The relevant K-theory groups for $`𝐑^{10}`$ in this case are denoted $`\stackrel{~}{KO}(S^{9-p})`$, as explained by Witten.
The results are as follows:
* $`\stackrel{~}{KO}(S^{9-p})=𝐙`$ for $`p=1,5,9`$
these classify the charges for the three kinds of BPS Dp-branes of type I.
* $`\stackrel{~}{KO}(S^{9-p})=𝐙_2`$ for $`p=-1,0,7,8`$
$`p=-1`$ corresponds to the type I D-instanton, and $`p=0`$ corresponds to the non-BPS $`\mathrm{D0}`$-brane, which we have discussed. The cases $`p=7,8`$ are additional non-BPS D-branes proposed by Witten.
* $`\stackrel{~}{KO}(S^{9-p})=0`$ for $`p=2,3,4,6`$
there are no conserved D-brane charges in these cases.
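The groups listed above follow the standard Bott-periodic pattern for reduced real K-theory of spheres; the helper below simply tabulates it, and is an illustration rather than part of the original analysis.

```python
# KO~(S^k) for k = 0..7 is Z, Z_2, Z_2, 0, Z, 0, 0, 0, repeating with period 8.
BOTT = ["Z", "Z_2", "Z_2", "0", "Z", "0", "0", "0"]

def ko_charge_group(p):
    """Conserved charge group KO~(S^{9-p}) of a type I Dp-brane."""
    return BOTT[(9 - p) % 8]

for p in (-1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9):
    print(f"p = {p:2d} : KO~(S^{9 - p}) = {ko_charge_group(p)}")
```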
## 4 Issues Concerning the Type I D8-brane
The K theory classification of type I D-branes, which we have just summarized, suggests two new D-branes not discussed previously: $`\mathrm{D7}`$ and $`\mathrm{D8}`$, each of which is supposed to carry a conserved $`𝐙_2`$ charge.
As noted in the final paragraph of Ref. , there is a tachyon in the spectrum of D7 - D9 and D8 - D9 open strings. This means that the world volume of a $`\mathrm{D7}`$-brane or $`\mathrm{D8}`$-brane contains 32 tachyon fields. Therefore neither of these D-branes is stable. This raises the question of what happens to the conserved $`𝐙_2`$ charge when they dissolve into the background $`\mathrm{D9}`$-branes. The comments that follow arose out of discussions with Oren Bergman and Ashoke Sen, as well as correspondence with Edward Witten. I will only discuss the $`\mathrm{D8}`$-brane, though the $`\mathrm{D7}`$-brane story is likely to be similar.
Witten has argued in support of the $`\mathrm{D8}`$-brane as follows: The type I D-instanton implies that there are two different “vacua”, distinguished by the sign of the instanton amplitudes. This is a $`Z_2`$ analog of the $`\theta `$ angle in QCD. One should expect that there is a domain wall separating the two vacua and this should be the $`\mathrm{D8}`$-brane. The sign change of instanton amplitudes would mean that the D-instanton is the EM dual of the $`\mathrm{D8}`$-brane. Investigations that support this picture were carried out by Gukov .
The K-theory analysis incorporates Bott periodicity. This suggests that the type I $`\mathrm{D8}`$-brane should have features in common with the type I D0-brane, discussed in Section 2. Of course, Bott periodicity should be taken cum grano salis, since the total spacetime dimension is ten. The construction of the type I $`\mathrm{D0}`$-brane that we described involved wrapping D-strings on a circle, which was a convenient regulator. However, one might argue that a localized $`\mathrm{D8}`$-brane should not exist on a circle (in the direction normal to the brane), since this would require identifying the two distinct vacua. Therefore we will analyze the situation in uncompactified $`𝐑^{10}`$.
The $`\mathrm{D0}`$-brane could have been presented without involving compactification. In any case, by considering the $`R\rightarrow \mathrm{\infty }`$ limit of the construction in Section 2, we see that the $`\mathrm{D0}`$-brane can be described as a tachyonic kink in a system consisting of an infinite straight D-string and a coincident anti-D-string. The kink configuration would be exactly the same as we described for type II theories in Section 1. However, unlike the type II examples, the tachyon field is real in this case, and the potential $`V(T)`$ is an even function because of the $`𝐙_2`$ gauge symmetries. The kink configuration describing the $`\mathrm{D0}`$-brane is topologically stable in this case because the vacuum manifold is $`S^0`$ ($`T=\pm T_0`$) and $`\pi _0(S^0)=𝐙_2`$.
Let us now try to construct the $`\mathrm{D8}`$-brane out of $`\mathrm{D9}`$-branes in an analogous manner. One essential difference is that the total $`\mathrm{D9}`$-brane charge must be 32. Therefore the simplest analog to consider is 33 $`\mathrm{D9}`$-branes and one $`\overline{\mathrm{D9}}`$-brane filling the entire $`𝐑^{10}`$ spacetime. In this case the open strings connecting the $`\overline{\mathrm{D9}}`$ to the $`\mathrm{D9}`$-branes give 33 real tachyon fields $`\vec{T}`$ in the fundamental representation of $`SO(33)`$. (It doesn’t matter whether one uses $`O(N)`$ or $`SO(N)`$ in the present setting.) The potential $`V(\vec{T})`$ must have $`SO(33)`$ symmetry and therefore the vacuum manifold should be given by $`|\vec{T}|=T_0`$, which describes an $`S^{32}`$. This manifold is connected, so there is no topologically stable kink. This is the same situation we encountered for the unstable type II D-branes in Section 1. In this case there are 32 directions of instability, so one expects to find 32 tachyon fields in the $`\mathrm{D8}`$-brane world volume. This agrees with the conclusion of Ref. , which identified them with modes of D8 - D9 open strings.
So what are we to make of all this? I think it is clear that the $`\mathrm{D8}`$-brane is unstable, at least unless something further is done to stabilize it. Still, it may be interesting to consider setting up a $`\mathrm{D8}`$-brane configuration and exploring what that implies. I won’t present the details of the reasoning here, but it appears that the vacua on the two sides of the $`\mathrm{D8}`$-brane are distinguished by the chirality of gauge group spinors. Of course, once the $`\mathrm{D8}`$-brane decays, eventually leaving a uniform type I vacuum, only one chirality will remain. This may sound paradoxical, but it is possible because the gauge group is broken inside the $`\mathrm{D8}`$-brane.
In conclusion, K-theory classifies D-brane charges. However, high dimension non-BPS D-branes are sometimes destabilized by tachyonic modes of open strings connecting them to background spacetime filling D-branes.
## Acknowledgments
I am grateful to Oren Bergman, Ashoke Sen, and Edward Witten for helpful discussions and suggestions.
# On Diffusing Updates in a Byzantine Environment
### Abstract
We study how to efficiently diffuse updates to a large distributed system of data replicas, some of which may exhibit arbitrary (Byzantine) failures. We assume that strictly fewer than $`t`$ replicas fail, and that each update is initially received by at least $`t`$ correct replicas. The goal is to diffuse each update to all correct replicas while ensuring that correct replicas accept no updates generated spuriously by faulty replicas. To achieve reliable diffusion, each correct replica accepts an update only after receiving it from at least $`t`$ others. We provide the first analysis of epidemic-style protocols for such environments. This analysis is fundamentally different from known analyses for the benign case due to our treatment of fully Byzantine failures—which, among other things, precludes the use of digital signatures for authenticating forwarded updates. We propose two epidemic-style diffusion algorithms and two measures that characterize the efficiency of diffusion algorithms in general. We characterize both of our algorithms according to these measures, and also prove lower bounds with regards to these measures that show that our algorithms are close to optimal.
## 1 Introduction
A diffusion protocol is the means by which an update initially known to a portion of a distributed system is propagated to the rest of the system. Diffusion is useful for driving replicated data toward a consistent state over time, and has found application for this purpose, e.g., in USENET News \[LOM94\], and in the Grapevine \[BLNS82\] and Clearinghouse \[OD81\] systems. The quality of a diffusion protocol is typically defined by the delay until the update has reached all replicas, and the amount of message traffic that the protocol generates.
In this paper, we provide the first study of update diffusion in distributed systems where components can suffer Byzantine failures. The framework for our study is a network of data replicas, of which strictly less than some threshold $`t`$ can fail arbitrarily, and to which updates are introduced continually over time. For example, these updates may be sensor readings of some data source that is sampled by replicas, or data that the source actively pushes to replicas. However, each update is initially received only by a subset of the correct replicas of some size $`\alpha \ge t`$, and so replicas engage in a diffusion protocol to propagate updates to all correct replicas over time. Byzantine failures impact our study in that a replica that does not obtain the update directly from the source must receive copies of the update from at least $`t`$ different replicas before it “accepts” the update as one actually generated by the source (as opposed to one generated spuriously by a faulty replica).
In our study, we allow fully Byzantine failures, and thus cannot rely on digital signatures to authenticate the original source of a message that one replica forwards to others. While maximizing the fault models to which our upper bounds apply, avoiding digital signatures also strengthens our results in other respects. First, in a network that is believed to intrinsically provide the correct sender address for each message due to the presumed difficulty of forging that address, avoiding digital signatures avoids the administrative overheads associated with distributing cryptographic keys. Second, even when the sender of a message is not reliably provided by the network, the sender can be authenticated using techniques that require no cryptographic assumptions (for a survey of these techniques, see \[Sim92\]). Employing digital signatures, on the other hand, would require assumptions limiting the computational power of faulty replicas. Third, pairwise authentication typically incurs a low computation overhead on replicas, whereas digitally signing each message would impose a significantly higher overhead.
To achieve efficient diffusion in our framework, we suggest two round-based algorithms: “Random”, which is an epidemic-style protocol in which each replica sends messages to randomly chosen replicas in each round, and “$`\mathrm{}`$-Tree-Random”, which diffuses updates along a tree structure. For these algorithms, two measures of quality are studied: The first one, delay, is the expected number of rounds until any individual update is accepted by all correct replicas in the system. The delay measure expresses the speed of propagation. The second, fan-in, is the expected maximum number of messages received by any replica in any round from correct replicas. Fan-in is a measure of the load inflicted on individual replicas in the common case, and hence, of any potential bottlenecks in execution. We evaluate these measures for each of the protocols we present. In addition to these results, we prove a lower bound of $`\mathrm{\Omega }(\frac{t}{F^{out}}\mathrm{log}\frac{n}{\alpha })`$ on the delay of any diffusion protocol, where $`F^{out}`$ is the “fan-out” of the protocol, i.e., a bound on the number of messages sent by any correct process in any round. We also show an inherent tradeoff between good (low) latency and good (low) fan-in, namely that their product is at least $`\mathrm{\Omega }(tn/\alpha )`$. Using this tradeoff, we demonstrate that our protocols cover much of the spectrum of optimal-delay protocols for their respective fan-in to within logarithmic factors.
We emphasize that our treatment of full Byzantine failures renders our problem fundamentally different from the case of crash failures only. Intuitively, any diffusion process has two phases: In the first phase, the initially active replicas for an update send this update, while the other replicas remain inactive. This phase continues while inactive replicas have fewer than $`t`$ messages. In the second phase, new replicas become active and propagate updates themselves, resulting in an exponential growth of the set of active replicas. In Figure 1 we depict the progress of epidemic diffusion. The figure shows the number of active replicas plotted against round number, for a system of $`n=100`$ replicas with different values of $`t`$, where $`\alpha =t+1`$. The case $`t=1`$ is indistinguishable from diffusion with benign failures only, since a single update received by a replica immediately turns it into an active one. Thus, in this case, the first phase is degenerate, and the exponential-growth phase occurs from the start. Previous work has analyzed the diffusion process in that case, proving propagation delay \[DGH+87\] that is logarithmic in the number of replicas. However, in the case that we consider here, i.e., $`t2`$, the delay is dominated by the initial phase.
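A minimal simulation makes this two-phase behaviour concrete. The sketch below is ours: it simplifies the protocols of Sections 5 and 6 to a single random message per active replica per round (fan-out 1) and models no faulty replicas, since only the threshold rule matters for the qualitative picture.

```python
# Two-phase diffusion with an acceptance threshold of t distinct senders.
import random

def simulate(n=100, t=3, alpha=None, rounds=500, seed=0):
    random.seed(seed)
    alpha = alpha if alpha is not None else t + 1
    heard_from = {p: set() for p in range(n)}     # distinct senders seen per replica
    active = set(range(alpha))                    # replicas in I_u accept immediately
    history = [len(active)]
    for _ in range(rounds):
        if len(active) == n:
            break
        for sender in list(active):
            target = random.randrange(n)          # one random target per round (F_out = 1)
            heard_from[target].add(sender)
        active |= {p for p, s in heard_from.items() if len(s) >= t}
        history.append(len(active))
    return history

for t in (1, 2, 4):
    hist = simulate(t=t)
    print(f"t = {t}: {hist[-1]} replicas active after {len(hist) - 1} rounds")
```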
The rest of the paper is organized as follows. In Section 1.1 we illustrate specific applications for which Byzantine message diffusion is suitable, and which motivated our study. We discuss related work in Section 1.2. In Section 2 we lay out assumptions and notation used throughout the paper, and in Section 3 we define our measures of diffusion performance. In Section 4 we provide general theorems regarding the delay and fan-in of diffusion protocols. In Section 5 we introduce our first diffusion protocol, Random, and analyze its properties, and in Section 6 we describe the $`\mathrm{}`$-Tree-Random protocol and its properties. We summarize and discuss our results in Section 7. Section 8 provides simulation results that demonstrate the likely behavior of our protocols in practice. We conclude in Section 9.
### 1.1 Motivation
The motivating application of our work on message diffusion is a data replication system called Fleet. (Fleet is not yet documented, but is based on similar design principles as a predecessor system called Phalanx \[MR98b\].) Fleet replicates data so that it will survive even the malicious corruption of some data replicas, and does so using adaptations of quorum systems to such environments \[MR98a\]. A characteristic of these replication techniques that is important for this discussion is that each update is sent to only a relatively small subset (quorum) of servers, but one that is guaranteed to include $`t`$ correct ones, where the number of faulty replicas is assumed to be less than $`t`$. Thus, after an update, most correct replicas have not actually received this update, and indeed any given correct replica can be arbitrarily out-of-date.
While this local inconsistency does not impact the global consistency properties of the data when the network is connected (due to the properties of the quorum systems we employ), it does make the system more sensitive to network partitions. That is, when the network partitions—and thus either global data consistency or progress of data operations must be sacrificed—the application may dictate that data operations continue locally even at the risk of using stale data. To limit how stale local data is when the network partitions, we use a diffusion protocol while the network is connected to propagate updates to all replicas, in the background and without imposing additional overhead on the critical path of data operations. In this way, the system can still efficiently guarantee strict consistency in case a full quorum is accessed, but can additionally provide relaxed consistency guarantees when only local information is used.
Another variation on quorum systems, probabilistic quorum systems \[MRW97, MRWW98\], stands to benefit from properly designed message diffusion in different ways than above. Probabilistic quorum systems are a means for gaining dramatically in performance and resilience over traditional (strict) quorum systems by allowing a marginal, controllable probability of inconsistency for data reads. When coupled with an effective diffusion technique, the probability of inconsistency can be driven toward zero when updates are sufficiently dispersed in time.
More generally, diffusion is a fundamental mechanism for driving replicated data to a consistent state in a highly decentralized system. Our study sheds light on the use of diffusion protocols in systems where arbitrary failures are a concern, and may form a basis of solutions for disseminating critical information in survivable systems (e.g., routing table updates in a survivable network architecture).
### 1.2 Related work
The style of update diffusion studied here has previously been studied in systems that can suffer benign failures only. Notably, Demers et al. \[DGH+87\] performed a detailed study of epidemic algorithms for the benign setting, in which each update is initially known at a single replica and must be diffused to all replicas with minimal traffic overhead. One of the algorithms they studied, called anti-entropy and apparently initially proposed in \[BLNS82\], was adopted in Xerox’s Clearinghouse project (see \[DGH+87\]) and the Ensemble system \[BHO+98\]. Similar ideas also underlie IP-Multicast \[Dee89\] and MUSE (for USENET News propagation) \[LOM94\]. This anti-entropy technique forms the basis for one of the algorithms (Random) that we study here. As described previously, however, the analysis provided here of the epidemic-style update diffusion is fundamentally different for Byzantine environments than for environments that suffer benign failures only.
Prior studies of update diffusion in distributed systems that can suffer Byzantine failures have focused on single-source broadcast protocols that provide reliable communication to replicas and replica agreement on the broadcast value (e.g., \[LSP82, DS83, BT85, MR96\]), sometimes with additional ordering guarantees on the delivery of updates from different sources (e.g., \[Rei94, CASD95, MM95, KMM98\]). The problem that we consider here is different from these works in the following ways. First, in these prior works, it is assumed that one replica begins with each update, and that this replica may be faulty—in which case the correct replicas can agree on an arbitrary update. In contrast, in our scenario we assume that at least a threshold $`t>1`$ of correct replicas begin with each update, and that only these updates (and no arbitrary ones) can be accepted by correct replicas. Second, these prior works focus on certain reliability, i.e., guaranteeing that all correct replicas (or all correct replicas in some agreed-upon subset of replicas) receive the update. Our protocols diffuse each update to all correct servers only with some probability that is determined by the number of rounds for which the update is propagated before it is discarded. Our goal is to analyze the number of rounds until the update is expected to be diffused globally and the load imposed on each replica as measured by the number of messages it receives in each round.
## 2 System model
We assume a system of $`n`$ replicas, denoted $`p_1,\dots ,p_n`$. A replica that conforms to its I/O and timing specifications is said to be correct. A faulty replica is one that deviates from its specification. A faulty replica can exhibit arbitrary behavior (Byzantine failures). We assume that strictly fewer than $`t`$ replicas fail, where $`t`$ is a globally known system parameter.
Replicas can communicate via a completely connected point-to-point network. Communication channels between correct replicas are reliable and authenticated, in the sense that a correct replica $`p_i`$ receives a message on the communication channel from another correct replica $`p_j`$ if and only if $`p_j`$ sent that message to $`p_i`$. Moreover, we assume that communication channels between correct replicas impose a bounded latency $`\mathrm{\Delta }`$ on message transmission; i.e., communication channels are synchronous. Our protocols will also work to diffuse updates in an asynchronous system, but in this case we can provide no delay or fan-in analysis. Thus, we restrict our attention to synchronous systems here.
Our diffusion protocols proceed in synchronous rounds. A system parameter, fan-out, denoted $`F^{out}`$, bounds from above the number of messages any correct replica sends in a single round. A replica receives and processes all messages sent to it in a round, before the next round starts. Thus, rounds begin at least $`\mathrm{\Delta }`$ time units apart.
Each update $`u`$ is introduced into the system at a set $`I_u`$ of $`\alpha \geq t`$ correct replicas, and possibly also at some other, faulty replicas. We assume that all replicas in $`I_u`$ initially receive $`u`$ simultaneously (i.e., in the same round). The goal of a diffusion protocol is to cause $`u`$ to be accepted at all correct replicas in the system. The update $`u`$ is accepted at correct replica $`p_i`$ if $`p_i\in I_u`$ or $`p_i`$ has received $`u`$ from $`t`$ other distinct replicas. If $`p_i`$ has accepted $`u`$, then we also say that $`p_i`$ is active for $`u`$ (and is passive otherwise). In all of our diffusion protocols, we assume that each message contains all the updates known to the sender, though in practice, obvious techniques can reduce the actual number of updates sent to necessary ones only.
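For concreteness, the acceptance rule can be sketched as follows (a minimal Python sketch; the class and attribute names are illustrative and not part of any implementation described in this paper):

```python
class Replica:
    """Sketch of a correct replica's acceptance state for updates (names are illustrative)."""

    def __init__(self, replica_id, t, initial_updates=()):
        self.replica_id = replica_id
        self.t = t                             # acceptance threshold (> number of faulty replicas)
        self.accepted = set(initial_updates)   # updates in I_u are accepted immediately
        self.senders = {}                      # update -> set of distinct replicas it was received from

    def receive(self, update, sender_id):
        """Record one copy of `update` from `sender_id`; accept once t distinct senders are seen."""
        if update in self.accepted:
            return True
        self.senders.setdefault(update, set()).add(sender_id)
        if len(self.senders[update]) >= self.t:
            self.accepted.add(update)          # replica becomes active for this update
        return update in self.accepted
```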
## 3 Measures
We study two complexity measures: delay and fan-in. For each update, the delay is the expected number of rounds from the time the update is introduced to the system until all correct replicas accept the update. Formally, let $`\eta _u`$ be the round number in which update $`u`$ is introduced to the system, and let $`\tau _p^u`$ be the round in which a correct replica $`p`$ accepts update $`u`$. The delay is $`E[\mathrm{max}_p\{\tau _p^u\}\eta _u]`$, where the expectation is over the random choices of the algorithm and the maximization is over correct replicas.
We define fan-in to be the expected maximum number of messages that any correct replica receives in a single round from correct replicas under all possible failure scenarios. Formally, let $`\rho _p^i`$ be the number of messages received in round $`i`$ by replica $`p`$ from correct replicas. Then the fan-in in round $`i`$ is $`E[\mathrm{max}_{p,C}\{\rho _p^i\}]`$, where the maximum is taken with respect to all correct replicas $`p`$ and all failure configurations $`C`$ containing fewer than $`t`$ failures. An amortized fan-in is the expected maximum number of messages received over multiple rounds, normalized by the number of rounds. Formally, a $`k`$-amortized fan-in starting at round $`l`$ is $`E[\mathrm{max}_{p,C}\{\sum _{i=l}^{l+k}\rho _p^i/k\}]`$. We emphasize that fan-in and amortized fan-in are measures only for messages from correct replicas. Let $`F^{in}`$ denote the fan-in. In a round a correct replica may receive messages from $`F^{in}+t-1`$ different replicas, and may receive any number of messages from faulty replicas.
A possible alternative is to define fan-in as an absolute bound limiting the number of replicas from which each correct replica will accept messages in each round. However, this would render the system vulnerable to “denial of service” attacks by faulty replicas: by sending many messages, faulty replicas could force messages from correct replicas to compete with up to $`t-1`$ messages from faulty replicas in every round, thus significantly changing the behavior of our protocols.
## 4 General Results
In this section we present general results concerning the delay and fan-in of any propagation algorithm. Our first result is a lower bound on delay, that stems from the restriction on fan-out, $`F^{out}`$. This lower bound is for the worst case delay, i.e., when faulty replicas send no messages.
###### Theorem 4.1
The delay of any diffusion algorithm $`A`$ is $`\mathrm{\Omega }(\frac{t}{F^{out}}\mathrm{log}\frac{n}{\alpha })`$.
Proof: Let $`u`$ be any update, and let $`m_k`$ denote the total number of times $`u`$ is sent by correct processes in rounds $`\eta _u+1,\mathrm{},\eta _u+k`$ in $`A`$. Denote by $`\alpha _k`$ the number of correct replicas that have accepted update $`u`$ by the time round $`\eta _u+k`$ completes. Since $`t`$ copies of update $`u`$ need to reach a replica (not in $`I_u`$) in order for it to accept the update, we have that $`\alpha _k\leq \alpha +m_k/t`$. Furthermore, since at most $`F^{out}\alpha _k`$ new updates are sent by correct processes in round $`\eta _u+k+1`$, we have that $`m_{k+1}\leq m_k+F^{out}\alpha _k\leq F^{out}\sum _{j=0}^k\alpha _j`$, where $`\alpha _0=\alpha `$. By induction on $`k`$, it can be shown that $`\alpha _k\leq \alpha (1+\frac{F^{out}}{t})^k`$. Therefore, for $`k<\frac{t}{F^{out}}\mathrm{log}\frac{n}{\alpha }`$ we have that $`\alpha _k<n`$, which implies that not all the replicas are active for update $`u`$. $`\mathrm{}`$
The next theorem shows that there is an inherent tradeoff between fan-in and delay.
###### Theorem 4.2
Let $`A`$ be any propagation algorithm. Denote by $`D`$ its delay, and by $`F^{in}`$ its $`D`$-amortized fan-in. Then $`DF^{in}=\mathrm{\Omega }(tn/\alpha )`$, for $`t\geq 2\mathrm{log}n`$.
Proof: Let $`u`$ be any update. Since the $`D`$-amortized fan-in of $`A`$ is $`F^{in}`$, with probability $`0.9`$ (where $`0.9`$ is arbitrarily chosen here as some constant between $`0`$ and $`1`$), the number of messages received (from correct replicas) by any replica in rounds $`\eta _u+1,\mathrm{},\eta _u+D`$ is less than $`10DF^{in}`$. From now on we will assume that every replica $`p_j`$ receives at most $`10DF^{in}`$ messages in rounds $`\eta _u+1,\mathrm{},\eta _u+D`$. This means that for each $`p_j`$, if $`p_j`$ is updated by a set $`S_j`$ of replicas during rounds $`\eta _u+1,\mathrm{},\eta _u+D`$, then $`|S_j|\leq 10DF^{in}`$. Some replica $`p_j`$ becomes active for $`u`$ if out of the updates in $`S_j`$ at least $`t`$ are from $`I_u`$, i.e. $`|S_j\cap I_u|\geq t`$. In order to show the lower bound, we need to exhibit an initial set $`I_u`$, such that if $`10DF^{in}`$ is too small then no replica becomes active. More specifically, for $`D\leq \frac{1}{2}\frac{nt}{10F^{in}\alpha }`$, we show that there exists a set $`I_u`$ such that for each $`p_j`$, we have $`|S_j\cap I_u|<t`$.
We choose the initial set $`I_u`$ as a random subset of $`\{p_1,\mathrm{},p_n\}`$ of size $`\alpha `$. Let $`X_j`$ denote the number of replicas in $`I_u`$ from which messages are received by replica $`p_j`$ during rounds $`\eta _u+1,\mathrm{},\eta _u+D`$, i.e., $`X_j=|S_j\cap I_u|`$. Since $`p_j`$ receives at most $`10DF^{in}`$ messages in these rounds, we get
$`Prob[X_j\geq k]`$ $`<`$ $`{\displaystyle \sum _{i=k}^{10DF^{in}}}{\displaystyle \frac{\left(\genfrac{}{}{0pt}{}{10DF^{in}}{i}\right)\left(\genfrac{}{}{0pt}{}{n-10DF^{in}}{\alpha -i}\right)}{\left(\genfrac{}{}{0pt}{}{n}{\alpha }\right)}}`$
$`<`$ $`{\displaystyle \sum _{i=k}^{n}}\left({\displaystyle \genfrac{}{}{0pt}{}{10DF^{in}}{i}}\right)\left({\displaystyle \frac{\alpha }{n}}\right)^i`$
$``$ $`\left({\displaystyle \frac{10eDF^{in}\alpha }{kn}}\right)^kc,`$
where the constant $`c`$ is at most $`2`$ if $`D\leq \frac{1}{2}\frac{nk}{10eF^{in}\alpha }`$, and hence we have that $`Prob[X_j\geq t]<(1/2)^t`$. By our assumption that $`t\geq 2\mathrm{log}n`$, we have that $`Prob[X_j\geq t]<1/n^2`$. This implies that the probability that all the $`X_j`$ are smaller than $`t`$ is at least $`1-(1/n)`$.
We have shown that for most subsets $`I_u`$ if $`D\leq \frac{1}{2}\frac{nt}{10eF^{in}\alpha }`$ no new replica would become active. Therefore, for some specific $`I_u`$ it also holds. (In fact it holds for most.)
Recall that at the start of the proof we assumed that the $`D`$-amortized fan-in is at most $`10F^{in}`$. This holds with probability at least $`0.9`$. Therefore in $`0.9`$ of the runs the delay is at least $`\frac{1}{2}\frac{nt}{10eF^{in}\alpha }`$, which implies that the expected delay is $`\mathrm{\Omega }(\frac{nt}{F^{in}\alpha })`$. $`\mathrm{}`$
## 5 Random Propagation
In this section, we present a random diffusion method and examine its delay and fan-in measures. In this algorithm, which we refer to as simply “Random”, each replica, at each round, chooses $`F^{out}`$ replicas uniformly at random from all replicas and sends messages to them. This method is similar to the “anti-entropy” method of \[BLNS82, DGH+87\].
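For illustration, one synchronous round of Random can be sketched as follows (a minimal Python sketch, assuming the `Replica` class sketched in Section 2; whether targets are drawn with or without replacement is left as a simplification here):

```python
import random

def random_diffusion_round(replicas, fan_out):
    """One synchronous round of the Random algorithm (illustrative sketch).

    Each replica picks fan_out destinations uniformly at random and sends
    every update it has accepted; deliveries take effect at the end of the round.
    """
    outbox = []  # (sender_id, destination replica, snapshot of accepted updates)
    for r in replicas:
        for _ in range(fan_out):
            dest = random.choice(replicas)
            outbox.append((r.replica_id, dest, frozenset(r.accepted)))
    for sender_id, dest, updates in outbox:   # process all messages after all sends
        for u in updates:
            dest.receive(u, sender_id)
```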
In the next theorem we use the notation of
$$R_{\beta ,t}=\beta \sum _{j=\beta -t+1}^{\beta }1/j\approx \beta \mathrm{log}\frac{\beta }{\beta -t+1}+O\left(\frac{\beta }{\beta -t+1}\right),$$
which is the result of the analysis of the coupon collector problem, i.e., the expected number of steps for collecting $`t`$ distinct ‘coupons’ out of $`\beta `$ different ones by random polling (see \[MR95, ch. 3\]). It is worth discussing how $`R_{\beta ,t}`$ behaves for various values of $`\beta `$ and $`t`$. For $`\beta =t`$ we have $`R_{\beta ,t}\approx t\mathrm{log}t`$. For $`\beta \geq 2t`$ we have $`R_{\beta ,t}\leq 1.5t`$. For all $`\beta \geq t`$, we have $`R_{\beta ,t}\geq t`$. This implies that if the initial set size $`\beta `$ is very close to $`t`$, then we have a slightly superlinear behavior of $`R_{\beta ,t}`$ as a function of $`t`$, while if $`\beta `$ is a fraction away from $`t`$ then we have $`R_{\beta ,t}`$ as a linear function in $`t`$.
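The quantity $`R_{\beta ,t}`$ is easy to evaluate exactly; a small sketch (Python) that computes it from the sum above:

```python
def coupon_rounds(beta, t):
    """Exact R_{beta,t} = beta * sum_{j=beta-t+1}^{beta} 1/j,
    the expected number of uniform polls over beta items needed to see t distinct ones."""
    return beta * sum(1.0 / j for j in range(beta - t + 1, beta + 1))

# Illustrative values: R grows like t*log(t) when beta = t and approaches t for beta >> t.
for beta, t in [(16, 16), (32, 16), (1024, 16)]:
    print(beta, t, round(coupon_rounds(beta, t), 2))
```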
###### Theorem 5.1
The delay of the Random algorithm is $`O\left(\frac{R_{\alpha ,t}}{F^{out}}(\frac{n}{\alpha })^{(1-\frac{1}{2R_{\alpha ,t}})}+\frac{\mathrm{log}(n)}{F^{out}}\right)`$ for $`2<t\leq n/4`$.
Proof: The outline of the proof is as follows. For the most part, we consider bounds on the number of messages sent, rather than directly on the number of rounds. It is more convenient to argue about the number of messages, since the distribution of the destination of each replica’s next message is fixed, namely uniform over all replicas. As long as we know that there are between $`\alpha `$ and $`2\alpha `$ replicas active for $`u`$, we can translate an upper bound on the number of messages to an approximate upper bound on the number of rounds.
More specifically, so long as the number $`\beta `$ of active replicas does not reach a quarter of the system, i.e., $`\alpha \leq \beta \leq n/4`$, we study $`m^+(\beta )`$, an upper bound on the number of messages needed to be sent such that with high probability, $`1-q^+(\beta )`$, we have $`\beta `$ new replicas change state to active. We then analyze the algorithm as composed of phases starting with $`\beta =2^j\alpha `$. The upper bound on the number of messages to reach half the system is $`\sum _{j=0}^{\ell }m^+(2^j\alpha )`$, the bound on the number of rounds is $`\sum _{j=0}^{\ell }m^+(2^j\alpha )/(2^jF^{out}\alpha )`$, and the error probability is at most $`\sum _{j=0}^{\ell }q^+(2^j\alpha )`$, where $`\ell =\mathrm{log}(n/2\alpha )-1`$. In the analysis we assume for simplicity that $`n=2^j\alpha `$ for some $`j`$, and this implies that in the last component we study, there are at most $`n/4`$ active replicas.
At the end, we consider the case where $`\beta >n/4`$, and bound from above the number of rounds needed to complete the propagation algorithm. This case adds only an additive factor of $`O((t+\mathrm{log}n)/F^{out})`$ to the total delay.
We start with the analysis of the number of messages required to move from $`\beta `$ active replicas to $`2\beta `$, where $`\beta n/4`$. For any $`m`$, let $`N_i^m`$ be the number of messages that $`p_i`$ received, out of the first $`m`$ messages, and let $`S_i^m`$ be the number of distinct replicas that sent the $`N_i^m`$ messages. Let $`U_i^m`$ be an indicator variable such that $`U_i^m=1`$ if $`p_i`$ receives messages from $`t`$ or more distinct replicas after $`m`$ messages are sent, and $`U_i^m=0`$ otherwise. I.e. $`U_i^m=1`$ if and only if $`S_i^mt`$.
We now use the coupon collector’s analysis to bound the probability that $`S_i^m\geq t`$ when $`N_i^m`$ messages are received. Thus, a replica needs to get an expected $`R_{\beta ,t}`$ messages before $`S_i^m\geq t`$, and so with probability at most $`1/2`$ it would need more than $`2R_{\beta ,t}`$ messages to collect $`t`$ different messages. For $`m\leq n+2R_{\beta ,t}`$ we have that
$`Prob[U_i^m=1]`$
$``$ $`Prob[N_i^m=2R_{\beta ,t}]Prob[U_i^m=1|N_i^m=2R_{\beta ,t}]`$
$``$ $`\left({\displaystyle \genfrac{}{}{0pt}{}{m}{2R_{\beta ,t}}}\right)\left({\displaystyle \frac{1}{n}}\right)^{2R_{\beta ,t}}\left(1{\displaystyle \frac{1}{n}}\right)^{m2R_{\beta ,t}}\left({\displaystyle \frac{1}{2}}\right)`$
$``$ $`\left({\displaystyle \frac{m}{2R_{\beta ,t}}}\right)^{2R_{\beta ,t}}\left({\displaystyle \frac{1}{n}}\right)^{2R_{\beta ,t}}e^{(m2R_{\beta ,t})/n}\left({\displaystyle \frac{1}{2}}\right)`$
$``$ $`\left({\displaystyle \frac{m}{2nR_{\beta ,t}}}\right)^{2R_{\beta ,t}}\left({\displaystyle \frac{1}{6}}\right)`$
Let $`U^m`$ denote the number of replicas that received messages from $`t`$ or more replicas after $`m`$ messages are sent, i.e., $`U^m=\sum _{i=\beta +1}^nU_i^m`$, where the active replicas are $`p_1,\mathrm{},p_\beta `$. For $`\beta \leq n/4`$ we have,
$`E[U^m]`$ $`\geq `$ $`(n-\beta )\left({\displaystyle \frac{m}{2nR_{\beta ,t}}}\right)^{2R_{\beta ,t}}\left({\displaystyle \frac{1}{6}}\right)`$
$`\geq `$ $`{\displaystyle \frac{n}{12}}\left({\displaystyle \frac{m}{2nR_{\beta ,t}}}\right)^{2R_{\beta ,t}},`$
where the right inequality uses the fact that $`\beta \leq n/4`$.
Our aim is to analyze the distribution of $`U^m`$. More specifically, we would like to find $`m^+(\beta )`$ such that,
$$Prob[U^m2\beta ]>1q^+(\beta )$$
for any $`m>m^+(\beta )`$.
Generally, the analysis is simpler when the random variables are independent. Unfortunately, the random variables $`U_i^m`$ are not independent, but using a classical result by Hoeffding \[Hoe63, Theorem 4\], the dependency works only in our favor. Namely, let $`X_i^m`$ be i.i.d. binary random variables with $`Prob[X_i^m=1]=Prob[U_i^m=1]`$, and $`X^m=_{i=1}^nX_i^m`$. Then,
$$Prob[U^mE[U^m]\gamma ]Prob[X^mE[X^m]\gamma ].$$
From now on we will prove the bounds for $`X^m`$ and they will apply also to $`U^m`$. First, using a Chernoff bound (see \[KV94\]) we have that,
$$Prob\left[X^{m^+(\beta )}\leq \frac{1}{2}E[X^{m^+(\beta )}]\right]\leq e^{-\frac{E[X^{m^+(\beta )}]}{8}}.$$
For $`m^+(\beta )=2nR_{\beta ,t}(24\beta /n)^{1/2R_{\beta ,t}}`$, we have $`E[X^{m^+(\beta )}]\geq 2\beta `$, and hence
$$Prob[X^{m^+(\beta )}\leq \beta ]\leq e^{-\beta /4}=q^+(\beta ).$$
For the analysis of the Random algorithm, we view the algorithm as running in phases so long as $`\beta \leq n/4`$. There will be $`\ell =\mathrm{log}(n/2\alpha )-1`$ phases, and in each phase we start with $`\beta =2^j\alpha `$ initial replicas, for $`0\leq j\leq \ell `$. The $`j`$th phase runs for $`m^+(2^j\alpha )/(F^{out}2^j\alpha )`$ rounds. We say that a phase is “good” if by the end of the phase the number of active replicas has at least doubled. The probability that some phase is not good is bounded by,
$$\sum _{j=0}^{\ell }q^+(2^j\alpha )=\left(\sum _{j=0}^{\ell }e^{-2^j\alpha /4}\right)\leq 2e^{-\alpha /4}\leq 1/2,$$
for $`\alpha \geq 6`$. Assuming that all the phases are good, at the end half of the replicas are active.
The number of rounds until half the system is active is at most,
$`{\displaystyle \underset{j=0}{\overset{\mathrm{}}{}}}{\displaystyle \frac{m^+(2^j\alpha )}{F^{out}2^j\alpha }}`$ $`=`$ $`{\displaystyle \underset{j=0}{\overset{\mathrm{}}{}}}{\displaystyle \frac{2nR_{2^j\alpha ,t}(24\times 2^j\alpha /n)^{1/(2R_{2^j\alpha ,t})}}{F^{out}2^j\alpha }}`$
$``$ $`{\displaystyle \frac{2nR_{\alpha ,t}}{F^{out}\alpha }}{\displaystyle \underset{j=0}{\overset{\mathrm{}}{}}}{\displaystyle \frac{(242^j\alpha /n)^{1/(2R_{\alpha ,t})}}{2^j}}`$
$`=`$ $`O\left({\displaystyle \frac{R_{\alpha ,t}}{F^{out}}}\left({\displaystyle \frac{n}{\alpha }}\right)^{1\frac{1}{2R_{\alpha ,t}}}\right),`$
where we used here the fact that $`R_{\beta ,t}`$ is a decreasing function in $`\beta `$.
We now reach the last stage of the algorithm, when $`\beta n/2`$. Unfortunately, there are too few passive replicas to use the analysis above for $`m^+(\beta )`$, since we cannot drive the expectation of $`X^m`$ any higher than $`\beta `$. We therefore employ a different technique here.
We give an upper bound on the expected number of rounds for completion at the last stage. Fix any replica $`p`$, and let $`V_i`$ be the number of new updates in round $`i`$ that $`p`$ receives. Since $`t\leq n/4`$, we have $`\beta -t\geq n/4`$, and so:
$$E[V_i]=(\beta -t)\frac{F^{out}}{n}\geq \frac{F^{out}}{4}.$$
Let $`V^r`$ denote the number of new updates received by $`p`$ in $`r`$ rounds, hence $`V^r=\sum _{i=1}^rV_i`$. Then, $`E[V^r]\geq rF^{out}/4`$. Using the Chernoff bound we have,
$$Prob[V^r<rF^{out}/8]\leq e^{-F^{out}r/64}$$
Let $`r^+=(8t+128\mathrm{log}(n))/F^{out}`$. The probability that $`V^{r^+}`$ is less than $`t`$ is at most $`1/n^2`$. The probability that some replica receives less than $`t`$ new updates in $`r^+`$ rounds is thus less than $`1/n`$, and so in an expected $`O((t+\mathrm{log}(n))/F^{out})`$ rounds the algorithm terminates.
Putting the two bounds together, we have an expected $`O(\frac{R_{\alpha ,t}}{F^{out}}\left(\frac{n}{\alpha }\right)^{1-\frac{1}{2R_{\alpha ,t}}}+\frac{\mathrm{log}(n)}{F^{out}})`$ number of rounds. $`\mathrm{}`$
The proof of the theorem reveals that it takes the same order of magnitude of rounds just to add $`\alpha `$ more active replicas as it takes to make all the replicas active. This is due to the phenomenon that having more replicas active reduces the time to propagate the update. This is why we have a rapid transition from the update not having been accepted by any new replica to its being accepted by all replicas.
Note that when $`t=\mathrm{\Omega }(\mathrm{log}n)`$, then simply by sending to replicas in a round-robin fashion, the initially active replicas can propagate an update in $`O(\frac{nt}{\alpha F^{out}})`$ rounds to the rest of the system. The Random algorithm reaches essentially the same bound in this case. This implies that the same delay would have been reached if the replicas that accepted the update would not have participated in propagating it (and only the original set of replicas would do all the propagating). Finally, note that in failure-free runs of the system, the upper bound proved in Theorem 5.1 is also the lower bound on the expected delay, i.e., it is tight.
The next theorem bounds the fan-in of the random algorithm. Recall that the fan-in measure is with respect to the messages sent by the correct replicas.
###### Theorem 5.2
The fan-in of the Random algorithm is $`O(F^{out}+\mathrm{log}n)`$, and when $`F^{out}\leq \frac{1}{4}\mathrm{log}n`$, it is $`O(\frac{F^{out}+\mathrm{log}n}{\mathrm{log}\mathrm{log}n-\mathrm{log}F^{out}})`$. (Note that when $`F^{out}=1`$, this fan-in is $`O(\frac{\mathrm{log}n}{\mathrm{log}\mathrm{log}n})`$). The $`(\mathrm{log}n)`$-amortized fan-in is $`O(F^{out})`$.
Proof: The probability that a replica receives $`k`$ messages or more in one round is bounded by $`\left(\genfrac{}{}{0pt}{}{nF^{out}}{k}\right)(1/n)^k`$, which is bounded by $`(eF^{out}/k)^k`$. For $`k=c(F^{out}+\mathrm{log}n)`$, this bound is $`O(1/n^2)`$, for some $`c>0`$. Hence the probability that any replica receives more than $`k=c(F^{out}+\mathrm{log}n)`$ in a round is small. Therefore, the fan-in is bounded by $`O(F^{out}+\mathrm{log}n)`$. If $`F^{out}\leq \frac{1}{4}\mathrm{log}n`$, then for $`k=c\frac{F^{out}+\mathrm{log}n}{\mathrm{log}\mathrm{log}n-\mathrm{log}F^{out}}`$, this bound is $`O(1/n^2)`$, for some $`c>0`$. Therefore, in this case, the fan-in is bounded by $`O(\frac{F^{out}+\mathrm{log}n}{\mathrm{log}\mathrm{log}n-\mathrm{log}F^{out}})`$.
The probability that in $`\mathrm{log}n`$ rounds a specific replica receives more than $`k=6F^{out}\mathrm{log}n`$ messages is bounded by $`\left(\genfrac{}{}{0pt}{}{nF^{out}\mathrm{log}n}{k}\right)(1/n)^k`$ which is bounded by $`1/n^2`$. The probability that any replica receives more than $`k=6F^{out}\mathrm{log}n`$ messages is bounded by $`1/n`$. Thus, the $`(\mathrm{log}n)`$-amortized fan-in is at most $`O(F^{out})`$. $`\mathrm{}`$
## 6 Tree-Random
The Random algorithm above is one way to propagate an update. Its benefit is the low fan-in per replica. In this section, we devise a different approach that sacrifices both the uniformity and the fan-in in order to optimize the delay. We start with a specific instance of our approach, called Tree-Random. Tree-Random is a special case of a family of algorithms $`\ell `$-Tree-Random, which we introduce later. It is presented first to demonstrate one extremum, in terms of its fan-in and delay, contrasting the Random algorithm.
We define the Tree-Random algorithm as follows. We partition the replicas into blocks of size $`4t`$, and arrange these blocks on the nodes of a binary tree. For each replica there are three interesting sets of replicas. The first set is the $`4t`$ replicas at the root of the tree. The second and third sets are the $`4t`$ replicas at the right and left sons of the node that the replica is in. The total number of interesting replicas for each replica is at most $`12t`$, and we call it the candidate set of the replica. In each round, each replica chooses $`F^{out}`$ replicas from its candidate set uniformly at random and sends a message to those replicas.
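A minimal sketch of the candidate-set construction (Python; the heap-style layout of blocks on the binary tree and the helper names are assumptions made for illustration, not the paper's prescribed layout):

```python
import random

def build_tree_blocks(n, t):
    """Partition replicas 0..n-1 into blocks of size 4t, laid out on a binary tree
    in heap order (block b has children 2b+1 and 2b+2); layout is illustrative."""
    size = 4 * t
    blocks = [list(range(i, min(i + size, n))) for i in range(0, n, size)]
    block_of = {r: b for b, members in enumerate(blocks) for r in members}
    return blocks, block_of

def candidate_set(blocks, block_of, replica_id):
    """Root block plus the two child blocks of the replica's own block (at most 12t replicas)."""
    b = block_of[replica_id]
    cands = set(blocks[0])
    for child in (2 * b + 1, 2 * b + 2):
        if child < len(blocks):
            cands.update(blocks[child])
    cands.discard(replica_id)
    return sorted(cands)

def tree_random_targets(blocks, block_of, replica_id, fan_out):
    """Per round: F^out targets chosen uniformly at random from the candidate set."""
    cands = candidate_set(blocks, block_of, replica_id)
    return [random.choice(cands) for _ in range(fan_out)]
```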
###### Theorem 6.1
The delay of the Tree-Random algorithm is $`O(\frac{R_{\alpha ,t}}{F^{out}}+\frac{\mathrm{log}(\alpha )}{F^{out}}+\frac{t}{F^{out}}\mathrm{log}(n/t))`$ for $`n>8t`$.
Proof: Let $`u`$ be any update. We say that a node in the tree is active for $`u`$ if $`2t`$ correct replicas (out of the $`4t`$ replicas in the node) are active for $`u`$. We start by bounding the expected number of rounds, starting from $`\eta _u`$, for the root to become active. The time until the root is active can be bounded by the delay of the Random algorithm with $`4t+\alpha `$ replicas. Since on average one of every three messages is targeted at the root, within expected $`O(R_{\alpha ,t}/F^{out}+\mathrm{log}(\alpha )/F^{out})`$ rounds the root becomes active.
The next step of the proof is to bound how much time it takes from when a node becomes active until its child becomes active. We will not be interested in the expected time, but rather focus on the time until there is at least a constant probability that the child is active, and show a bound of $`O(t/F^{out})`$ rounds.
Given that $`2t`$ correct replicas in the parent node are active, each replica in the child node has an expectation of receiving $`F^{out}/12`$ updates from new replicas in every round. Using a Chernoff bound, this implies that in $`\ell =96t/F^{out}`$ rounds each replica has a probability of $`e^{-t}`$ of not becoming active. The probability that the child node is not active (i.e. fewer than $`2t`$ of its replicas are active) after $`\ell `$ rounds is bounded by $`b=3te^{-t}<5/6`$ for $`t\geq 2`$.
In order to bound the delay we consider the delay until a leaf node becomes active. We show that for each leaf node, with high probability its delay is bounded by $`O(t\mathrm{log}(n/t))`$. Each leaf node has $`\mathrm{log}(n/4t)`$ nodes on the path leading from the root to it. Partition the rounds into meta-rounds, each containing $`\ell `$ rounds. For each meta-round there is a probability of at least $`1-b`$ that another node on the path would become active. This implies that in $`k`$ meta-rounds, we have an expected number of $`(1-b)k`$ active nodes on the path. Therefore, the probability that we have less than $`(1-b)k/2`$ is at most $`e^{-(1-b)k/8}`$. We have $`\mathrm{log}(n/4t)`$ nodes on the path, which gives the constraint that $`k\geq 2\mathrm{log}(n/4t)/(1-b)`$. In addition we would like the probability that there exists a leaf node that does not become active to be less than $`(t/n)^2`$, which holds for $`k\geq 16\mathrm{log}(n/4t)/(1-b)`$. Consider $`k=16\mathrm{log}(n/4t)/(1-b)`$ meta rounds. Since there are at most $`n/4t`$ leaves in the tree, then with probability at least $`1-4t/n>1/2`$ the number of meta-rounds is at most $`k=O(\mathrm{log}(n/t))`$. Thus, the delay is $`k\ell =O(t\mathrm{log}(n/t)/F^{out})`$. This implies that the total expected delay is bounded by $`O(R_{\alpha ,t}/F^{out}+\mathrm{log}(\alpha )/F^{out}+t\mathrm{log}(n/t)/F^{out})`$. $`\mathrm{}`$
Two points about this theorem are worthy of noting. First, we did not attempt to optimize for the best constants. In fact, we note that much of the constant factor in the Tree-Random propagation delay can be eliminated if we modify the algorithm to propagate messages deterministically down the tree (but continue selecting targets at random from the root node).
Second, the Tree-Random algorithm gains its speed at the expense of a large fan-in. The replicas at the root of the tree receive $`O(n)`$ messages in each round of the protocol, and therefore in practice, constitute a centralized bottleneck. Theorem 4.2 shows that in our model there is an inherent tradeoff between the fan-in and the delay.
The next theorem claims a bound on the fan-in of the Tree-Random algorithm.
###### Theorem 6.2
The fan-in of the Tree-Random algorithm is $`\mathrm{\Theta }(nF^{out}/t)`$, for $`n=\mathrm{\Omega }(\frac{t}{F^{out}}\mathrm{log}n)`$.
Proof: Any replica at the root has a probability of $`F^{out}/(12t)`$ of receiving a message from any other replica. This implies that the expected number of messages per round is $`nF^{out}/(12t)`$, which establishes the lower bound. The probability that a replica receives more than $`2\frac{F^{out}n}{12t}`$ messages is bounded by $`e^{-F^{out}n/(3\cdot 12t)}`$ (using the Chernoff bound). Since $`n=\mathrm{\Omega }(\frac{t}{F^{out}}\mathrm{log}n)`$, the probability is bounded by $`1/n^2`$, and the theorem follows. $`\mathrm{}`$
We now define and analyze the generalized $`\ell `$-Tree-Random method. We partition the replicas into blocks of size $`\ell `$, and arrange these blocks on the nodes of a binary tree. As in the Tree-Random algorithm, for each replica there are three interesting sets of replicas. The first set is the $`\ell `$ replicas at the root of the tree. The second and third sets are the $`\ell `$ replicas at the right and left sons of the node that the replica is in. The total number of replicas in the three sets is at most $`3\ell `$, and we call it the candidate set of the replica. In each round, each replica chooses $`F^{out}`$ replicas from its candidate set uniformly at random and sends a message to those replicas.
Note that the Tree-Random propagation is simply setting $`\ell =4t`$ and the random propagation is simply setting $`\ell =n`$.
###### Theorem 6.3
The $`\ell `$-Tree-Random algorithm has delay
$$O\left(\frac{R_{\alpha ,t}}{F^{out}}\left(\frac{\ell +\alpha }{\alpha }\right)^{1-1/t}+\frac{\mathrm{log}(\ell +\alpha )}{F^{out}}+\frac{t}{F^{out}}\mathrm{log}(n/\ell )\right)$$
and fan-in $`\mathrm{\Theta }(nF^{out}/\ell )`$, for $`4t\leq \ell \leq nF^{out}/\mathrm{log}n`$.
Proof: The proof of the fan-in is identical to the one of the Tree-Random algorithm. We have $`\ell `$ replicas at the root. Each replica sends to each replica at the root with probability $`F^{out}/3\ell `$. Therefore the expected number of updates to each replica in the root is $`nF^{out}/3\ell `$, which establishes the lower bound on fan-in. The probability that a replica receives more than $`2nF^{out}/3\ell `$ updates in a round is at most $`e^{-nF^{out}/(3\cdot 3\ell )}\leq 1/n^2`$.
The proof of the delay bound has two parts. The first is computing the time it takes to make all the replicas in the root active. This can be bounded by the delay of the Random algorithm with $`\ell +\alpha `$ replicas, and so is $`O\left(\frac{R_{\alpha ,t}}{F^{out}}(\frac{\ell +\alpha }{\alpha })^{1-1/t}+\frac{\mathrm{log}(\ell +\alpha )}{F^{out}}\right)`$.
The second part is propagating on the tree. This part is similar to the Tree algorithm. As before, in each node at each round, each replica has a constant probability of receiving messages from $`\mathrm{\Theta }(F^{out})`$ new replicas. This implies that with some constant probability $`1-b`$ all the replicas in a node are active after $`O(t/F^{out})`$ rounds. The analysis of the propagation to a leaf node is identical to before, and thus this second stage takes $`O(\mathrm{log}(n/\ell ))`$ meta-rounds and the total delay on the second stage is $`O(\frac{t}{F^{out}}\mathrm{log}(n/\ell ))`$. $`\mathrm{}`$
## 7 Discussion
Our results for the Random and $`\mathrm{}`$-Tree-Random algorithms are summarized in Table 1.
Using the fan-in/delay bound of Theorem 4.2, we now examine our diffusion methods. The Random algorithm has $`O(\mathrm{log}n)`$-amortized fan-in of $`O(F^{out})`$, yielding a product of delay and amortized fan-in of $`O\left(t(\frac{n}{\alpha })^{(1-\frac{1}{3t})}+\mathrm{log}(n)\right)`$ when $`\alpha \geq 2t`$. This is slightly inferior to the lower bound in the range of $`t`$ for which the lower bound applies. The Tree-Random method has fan-in (and amortized fan-in) of $`O(nF^{out}/t)`$ and delay $`O(\frac{\mathrm{log}(\alpha )}{F^{out}}+\frac{t}{F^{out}}\mathrm{log}(n/t))`$ if $`\alpha \geq 2t`$. So, their product is $`O(\frac{n\mathrm{log}(\alpha )}{t}+n\mathrm{log}(n/t))`$, which again is inferior to the lower bound of $`\mathrm{\Omega }(tn/\alpha )`$ since $`t/\alpha \leq 1`$. However, recall from Theorem 4.1 that the delay is always $`\mathrm{\Omega }(\frac{t}{F^{out}}\mathrm{log}(\frac{n}{\alpha }))`$, and so for the fan-in of $`O(nF^{out}/t)`$ it is impossible to achieve optimal delay/fan-in tradeoff. In the general $`\ell `$-Tree-Random method, putting $`\ell \geq \alpha \mathrm{log}(n/\alpha )`$, the $`\ell `$-Tree algorithm exhibits a fan-in/delay product of at most $`O(\frac{tn}{\alpha })`$, which is optimal. If $`\ell <\alpha \mathrm{log}(n/\alpha )`$, the product is within a logarithmic factor from optimal. Hence, Tree propagation provides a spectrum of protocols that have optimal delay/fan-in tradeoff to within a logarithmic factor.
Our lower bound of $`\mathrm{\Omega }(\frac{t}{F^{out}}\mathrm{log}\frac{n}{\alpha })`$ on the delay of any diffusion protocol says that we pay a high price for Byzantine fault tolerance: when $`t`$ is large, diffusion in our model is (necessarily) slower than diffusion in system models admitting only benign failures. By comparison, in systems admitting only benign failures there are known algorithms for diffusing updates with $`O(\mathrm{log}n)`$ delay, including one on which the Random algorithm studied here is based \[Pit87\].
## 8 Simulation Results
Figure 2 depicts simulation results of the Random and Tree-Random algorithms. The figure portrays the delay of the two methods for varying system sizes (on a logarithmic scale), where $`t`$ was fixed to be $`16`$. In part (a) of this figure, we took the size $`\alpha `$ of the initial set $`I_u`$ to be $`\alpha =t+1`$. This graph clearly demonstrates the benefit of the Tree-Random method in these settings, especially for large system sizes. In fact, we had to draw the upper half of the $`y`$-axis scale in this graph disproportionately in order for the small delay numbers of Tree-Random, compared with the large delay numbers exhibited by the Random method, to be visible. Part (b) of the graph uses $`\alpha =\sqrt{2tn}`$, which reflects the minimal initial set that we would use in the Fleet system, which is one of the primary motivations for our study (see Section 1.1). For such large initial sets, Random outperforms Tree-Random for all feasible systems sizes, and the benefit of Tree-Random is only of theoretical interest (e.g., $`n>1000000`$).
In realistic large area networks, it is unlikely that $`100\%`$ of the messages arrive in the round they were sent in, even for a fairly large inter-round period. In addition, it may be desirable to set the inter-round delay reasonably low, at the expense of letting some messages arrive late. Some messages may be dropped in realistic scenarios, and hence, to accommodate such failures, we also ran our simulations while relaxing our synchrony assumptions. In these simulations, we allowed some threshold—up to $`5\%`$—of the messages to arrive in later rounds than the rounds they were sent or to be omitted by the receiver. The resulting behavior of the protocols was comparable to the synchronous settings. We conclude that our protocols can just as effectively be used in asynchronous environments in which the inter-round delay is appropriately tuned.
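Simulations of this kind can be reproduced in outline with a harness along the following lines (a failure-free Python sketch, assuming the `Replica` class and `random_diffusion_round` procedure sketched in earlier sections and $`\alpha \geq t`$ so that termination is guaranteed):

```python
import random

def simulate_random_delay(n, t, alpha, fan_out, update="u"):
    """Failure-free run of the Random algorithm: number of rounds until all replicas accept."""
    replicas = [Replica(i, t) for i in range(n)]
    for r in random.sample(replicas, alpha):      # the initial set I_u
        r.accepted.add(update)
    rounds = 0
    while any(update not in r.accepted for r in replicas):
        random_diffusion_round(replicas, fan_out)
        rounds += 1
    return rounds

# Example: average over a few trials for one system size (illustrative parameters).
trials = [simulate_random_delay(n=512, t=16, alpha=17, fan_out=1) for _ in range(5)]
print(sum(trials) / len(trials))
```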
## 9 Conclusion
In this paper we have provided the first analysis of epidemic-style update diffusion in systems that may suffer Byzantine component failures. We require that no spurious updates be accepted by correct replicas, and thus that each correct replica receive an update from $`t`$ other replicas before accepting it, where the number of faulty replicas is less than $`t`$. In this setting, we analyzed the delay and fan-in of diffusion protocols. We proved a lower bound on the delay of any diffusion protocol, and a general tradeoff between the delay and fan-in of any diffusion protocol. We also proposed two diffusion protocols and analyzed their delay and fan-in.
# The Matrix model and the non-commutative geometry of the supermembrane (hep-th/9908106, CERN-TH-99-251)
## 1. Introduction
One of the basic ingredients of M–theory is the eleven dimensional (11-d) supermembrane, for which a consistent action in a general background of 11-d supergravity was written some years ago. The supermembrane has a uniquely defined self–interaction which, in contrast to the superstring, comes from an infinite dimensional gauge symmetry, apparent in the light-cone gauge as the area-preserving diffeomorphisms on the surface of the membrane.
Because of the absence of the dilaton field for the supermembrane, there is no topological expansion over all possible three-manifolds analogous to the string case. The supermembrane, due to its unique self-interaction, can break into other supermembranes, so in a sense it is already a second-quantized theory, but up to now there is no consistent perturbative expansion. In the light-cone gauge, and flat space-time, there are two classes of membrane vacua, points and tensionless strings, so a low–energy effective field theory of supermembrane massless excitations would be either eleven-dimensional supergravity or a field theory for tensionless strings. Hopefully, recent efforts towards understanding the coupling of 11-d supergravity with the supermembrane will help in the construction of its effective low energy field theory.
In this letter, we present arguments that the Matrix model describes the non-commutative geometry of the 11-d supermembrane, and that M theory is the ’t Hooft topological expansion of the Matrix model. We demonstrate the existence of a topological charge and the corresponding Bogomol’nyi bound and we discuss the integrability of the instanton sector.
## 2. Non-commutative geometry of the membrane
It is a well known fact that the Matrix model was one of the first ideas for the study of the dynamics of the bosonic membrane in the light-cone frame and in the approximation of a finite number of oscillation modes . The true dynamics would be determined by taking the limit of an infinite number of modes. In the finite mode approximation the Hamiltonian of the membrane is exactly the same as that of $`SU(N)`$ Yang-Mills (YM) classical mechanics, and this system is known to possess interesting chaotic dynamics and a discrete spectrum at the level of quantum mechanics (QM) . Later on, Townsend et al discovered the supermembrane Lagrangian in 11 dimensions, and the finite mode truncation, as was expected, is described by the Hamiltonian of the supersymmetric $`SU(N)`$ YM mechanics. It was found that the quantum mechanical spectrum of this model is continuous; at that time this was considered to be the end of the supermembrane as a fundamental object replacing the superstring and producing all the low energy physics that could be useful for the unification of gauge and gravitational forces .
In ref. the question of a deeper origin of the $`SU(N)`$ YM classical mechanics as an approximation of the membrane dynamics was considered and it was found that $`SU(N)`$ represents the Lie algebra of the finite Heisenberg group, which acts on a discretized membrane representing a toroidal discrete phase space. The membrane coordinates are approximated by $`N\times N`$ matrices (YM gauge fields), which represent collectively $`N^2`$ number of points in the target space. The large $`N`$–limit to reproduce the continuous surface of the membrane, should be such that all the positions of the $`SU(N)`$ matrices are filled up in a continuous way and this limit has not been expressed, up to now, in a mathematically consistent way . The non-commutative geometry of the discrete membrane is generated by the finite and discrete Heisenberg group and the space of functions on the surface of the membrane is the algebra of $`N\times N`$ complex matrices.
In modern language the $`SU(N)`$ YM classical mechanics is the YM theory on a non-commutative 2-torus. It is interesting that the torus compactified Matrix model is equivalent to the M–theory compactification in a constant antisymmetric background gauge field. In this case, the Matrix model description becomes that of a gauge theory on a non–commutative torus.
It is well known that the usual Quantum Mechanics can be represented on functions of the phase-space variables, with the Moyal bracket<sup>1</sup><sup>1</sup>1For a recent discussion see and references therein. replacing the classical Poisson bracket. Recently the vertex operators of open strings in an external antisymmetric gauge field $`B_{\mu \nu }`$ were found to obey non-commutative relations of the Weyl type, which induces a Moyal bracket structure on the space of functions on the string momenta <sup>2</sup><sup>2</sup>2For recent discussions see..
## 3. The Heisenberg-Weyl group and the Moyal bracket
To start with, we introduce the irreducible representations of the finite Heisenberg group appropriate for the Matrix model non-commutative geometry of a toroidal membrane. The Hilbert space $`_\mathrm{\Gamma }`$ of the wave functions on the torus $`\mathrm{\Gamma }=\mathbb{C}/𝕃`$ of complex modulus $`\tau =\tau _1+ı\tau _2`$, where $`𝕃`$ is the integer lattice, $`𝕃=\{m_1+\tau m_2|(m_1,m_2)\in \mathbb{Z}\times \mathbb{Z}\}`$, is defined as the space of functions of complex argument $`z=x+ıy`$:
$$f(z)=\underset{n}{}c_n\mathrm{e}^{ı\pi n^2\tau +2\pi ınz}$$
(1)
with norm
$$\|f\|^2=\int \mathrm{e}^{-2\pi y^2/\tau _2}|f(z)|^2𝑑x𝑑y,\tau _2>0.$$
(2)
Consider the subspace $`_N(\mathrm{\Gamma })`$ of $`_\mathrm{\Gamma }`$ with periodic Fourier coefficients $`\{c_n\}_n`$ of period $`N`$:
$$c_n=c_{n+N}n,N.$$
(3)
The space $`_N(\mathrm{\Gamma })`$ is $`N`$-dimensional and there is a discrete Heisenberg group, with generators $`𝒮_{1/N}`$ and $`𝒯_1`$ acting as
$`(𝒮_{1/N}f)(z)`$ $`=`$ $`{\displaystyle \underset{n}{}}c_n\mathrm{e}^{2\pi ın/N}\mathrm{e}^{2\pi ınz+\pi ın^2\tau }`$
$`(𝒯_1f)(z)`$ $`=`$ $`{\displaystyle \underset{n}{}}c_{n1}\mathrm{e}^{2\pi ınz+\pi ın^2\tau },c_n.`$ (4)
On the $`N`$-dimensional subspace of vectors $`(c_1,\mathrm{},c_N)`$ the two generators are represented by $`N\times N`$ matrices, $`Q,P`$
$`(𝒮_{1/N})_{n_1,n_2}=Q_{n_1,n_2}=\omega ^{(n_1-1)}\delta _{n_1,n_2},(𝒯_1)_{n_1,n_2}=P_{n_1,n_2}=\delta _{n_1-1,n_2},`$
with $`\omega =\mathrm{exp}(2\pi ı/N)`$. They satisfy the Weyl relation $`QP=\omega PQ`$.
The Heisenberg group elements are defined as
$$𝒥_{r,s}=\omega ^{rs/2}P^rQ^s.$$
(6)
These $`N\times N`$ matrices are unitary, $`𝒥_{r,s}^{\dagger }=𝒥_{-r,-s}`$, and periodic with period $`N`$, i.e. $`𝒥_{r,s}^N=1`$. They realize a projective representation of the discrete translation group $`Z_N\times Z_N`$:
$`𝒥_{r,s}𝒥_{r^{},s^{}}=\omega ^{(r^{}s-rs^{})/2}𝒥_{r+r^{},s+s^{}}`$ (7)
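These finite Heisenberg relations are straightforward to realize numerically. The sketch below (Python/NumPy) checks the Weyl relation and the composition law (7) for a few index pairs; the direction of the cyclic shift in $`P`$ is chosen here so that $`QP=\omega PQ`$ holds, and the 0-based indexing is an implementation detail:

```python
import numpy as np

def clock_shift(N):
    """Clock matrix Q and shift matrix P with QP = omega * PQ, omega = exp(2*pi*i/N)."""
    omega = np.exp(2j * np.pi / N)
    Q = np.diag(omega ** np.arange(N))     # Q_{nn} = omega^(n-1) of Eq. (3.), 0-based here
    P = np.roll(np.eye(N), -1, axis=1)     # ones on the cyclic subdiagonal
    return Q, P, omega

def J(r, s, N):
    """Heisenberg basis element J_{r,s} = omega^{rs/2} P^r Q^s of Eq. (6)."""
    Q, P, omega = clock_shift(N)
    return omega ** (r * s / 2) * np.linalg.matrix_power(P, r) @ np.linalg.matrix_power(Q, s)

N = 7
Q, P, omega = clock_shift(N)
assert np.allclose(Q @ P, omega * P @ Q)   # Weyl relation
for (r, s), (rp, sp) in [((1, 0), (0, 1)), ((2, 3), (1, 5))]:
    lhs = J(r, s, N) @ J(rp, sp, N)
    rhs = omega ** ((rp * s - r * sp) / 2) * J(r + rp, s + sp, N)
    assert np.allclose(lhs, rhs)           # composition law (7)
```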
In ref the finite $`N`$-Matrix model is considered as a non-commutative QM system (see also ), but the canonical commutation relations were not represented through the finite Heisenberg group basis $`𝒥_{r,s}`$. It is possible to define finite dimensional matrices $`\widehat{p},\widehat{q}`$ such that $`Q=e^{ı\widehat{q}}`$ and $`P=e^{ı\widehat{p}}`$
$`\widehat{q}_{ij}={\displaystyle \frac{2\pi }{N}}(s+1-i)\delta _{ij},`$ $`\widehat{p}_{ij}=ı{\displaystyle \frac{\pi }{N}}{\displaystyle \frac{(-1)^{(i-j)}}{\mathrm{sin}\frac{\pi }{N}(i-j)}}`$ (8)
where $`N=2s+1`$ and $`s`$ is an integer. Here we have shifted by $`s`$ rows and columns of $`Q`$ and $`P`$ matrices defined in relations (3.). These matrices satisfy new Heisenberg commutation relations, which have a very simple form
$$ı[\widehat{q},\widehat{p}]_{ij}=\frac{2\pi }{N}\frac{\frac{\pi }{N}(i-j)(-1)^{(i-j)}}{\mathrm{sin}\frac{\pi }{N}(i-j)}$$
(9)
when $`ij`$ and zero when $`i=j`$. The matrix $`\widehat{q}`$ satisfies the torus compactification relations of the Matrix model, with corrections due to their finite size
$`P^1\widehat{q}P=\widehat{q}+{\displaystyle \frac{2\pi }{N}}I_N2\pi I_0,`$ (10)
where $`I_N`$ is the $`N\times N`$ identity matrix and $`I_0`$ the $`N\times N`$ diagonal matrix with diagonal elements $`\{1,0,\mathrm{},0\}`$.
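Relation (9) is easy to check numerically as well. A small sketch (Python/NumPy) follows; setting the diagonal of $`\widehat{p}`$ to zero is an assumption, since Eq. (8) only defines its off-diagonal entries:

```python
import numpy as np

def q_p_hat(s):
    """q-hat and p-hat of Eq. (8) for N = 2s+1; the diagonal of p-hat is taken to vanish."""
    N = 2 * s + 1
    q = np.diag(2 * np.pi / N * (s + 1 - np.arange(1, N + 1)))
    i, j = np.indices((N, N)) + 1                      # 1-based row/column indices
    with np.errstate(divide="ignore", invalid="ignore"):
        p = 1j * (np.pi / N) * (-1.0) ** (i - j) / np.sin(np.pi / N * (i - j))
    np.fill_diagonal(p, 0.0)
    return q, p, N, i, j

q, p, N, i, j = q_p_hat(3)
with np.errstate(divide="ignore", invalid="ignore"):
    rhs = np.where(i != j,
                   (2 * np.pi / N) * (np.pi / N) * (i - j) * (-1.0) ** (i - j)
                   / np.sin(np.pi / N * (i - j)),
                   0.0)
assert np.allclose(1j * (q @ p - p @ q), rhs)          # reproduces Eq. (9)
```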
The bosonic part of the matrix model is the $`SU(N)`$ YM classical mechanics and the gauge fields are linear combinations of the elements $`𝒥_{r,s}`$, i.e.,
$$A_l(t)=\underset{r,s=0}{\overset{N-1}{\sum }}A_l^{r,s}J_{r,s},l=1,\mathrm{},d-1$$
(11)
which can be considered as coherent states of the discrete and finite toroidal phase-space $`N\times N`$ lattice. The $`A_l`$ matrices are the non-commutative coordinates of the discrete membrane in $`d1`$ dimensions.
There is another representation of the standard quantum mechanics on the space of functions of the phase-space variables. This is the unique deformation of the Poisson bracket, the Moyal bracket
$$\{\{f,g\}\}_\lambda (u,v)=\frac{1}{\lambda }\mathrm{sin}\left(\lambda \left(\partial _u\partial _{v^{}}-\partial _{u^{}}\partial _v\right)\right)f(u,v)g(u^{},v^{})|_{u=u^{},v=v^{}}$$
(12)
Here, $`\lambda `$ corresponds to the Planck constant and the Moyal bracket gives a structure of infinite dimensional algebra on the space of functions on the torus generated by
$$e_{r,s}(u,v)=\frac{1}{2\pi }e^{ı(ru+sv)}$$
(13)
where $`u,v[0,2\pi ]`$ and $`r,s`$. This algebra is the trigonometric algebra of Fairlie Fletcher and Zachos:
$$\{\{e_{r,s},e_{r^{},s^{}}\}\}_\lambda (u,v)=\frac{1}{2\pi \lambda }\mathrm{sin}\left(\lambda \left(rs^{}-r^{}s\right)\right)e_{r+r^{},s+s^{}}(u,v)$$
(14)
which also includes the case $`\lambda =\frac{2\pi }{N}`$. This case gives the $`SU(N)`$ algebra in the base $`𝒥_{r,s}`$:
$`[𝒥_{r,s},𝒥_{r^{},s^{}}]=2ı\mathrm{sin}\left({\displaystyle \frac{2\pi }{N}}(rs^{}-r^{}s)\right)𝒥_{r+r^{},s+s^{}}`$ (15)
if the $`e_{r,s}`$ functions are identified with $`e_{r+kN,s+mN}`$ for $`k,m`$. The Heisenberg group matrices $`𝒥_{r,s}`$ have been introduced by Weyl.
When $`\lambda 0`$ (or $`N\mathrm{}`$), we recover the Poisson algebra of the area preserving transformations of the torus
$`\{e_{r,s},e_{r^{},s^{}}\}(u,v)=(r^{}s-rs^{}){\displaystyle \frac{1}{2\pi }}e_{r+r^{},s+s^{}}.`$ (16)
The Matrix model has various large $`N`$–limits. Up to now it is not known how to get the quantum mechanics of the supermembrane starting from this model, even though various compactifications indicate that it has membrane states as excitations. We believe that the appropriate limit is the ’t Hooft topological expansion of the $`SU(N)`$ YM–mechanics. To this end, we shall determine what happens to the Heisenberg group matrices $`𝒥_{r,s}`$ in this limit. We observe that these matrices contain powers of the root of unity along two diagonals, so we start with $`\omega =e^{2\pi ı\frac{M}{N}}`$ ($`M,N`$ co-prime integers). The correct large $`N`$–limit for $`SU(N)`$ is the inductive one, i.e., $`SU(N)\subset SU(N+1)\subset SU(N+2)\subset \mathrm{}`$, which we get if we let $`M,N\to \mathrm{\infty }`$ with $`M/N=`$ constant. Note that the constant $`\hbar =2\pi \frac{M}{N}`$ can be identified with the flux of the 3-index antisymmetric gauge field per unit membrane area. The Weyl relations become the Heisenberg group relations for an infinite phase-space lattice, and in the Fourier transform space of both canonical variables the Matrix model describes a toroidal continuous membrane with Matrix commutators replaced by Moyal brackets . Since the limit $`\hbar \to 0`$ replaces the Moyal bracket by the Poisson bracket, we get from Moyal YM theory the membrane. Higher order corrections in $`\hbar `$ can be represented as membranes with attached handles on the initial membrane which is determined by the $`SU(N)`$ chosen basis, in our case the torus.
In this limit, the light-cone gauge equations of motion for the membrane
$$\ddot{X}_i=\{X_k,\{X_k,X_i\}\};i,k=1,\mathrm{},d-1$$
(17)
and the corresponding Gauss law $`\{X_i,\dot{X}_i\}=0`$ are replaced by
$`\ddot{X}_i`$ $`=`$ $`\{\{X_k,\{\{X_k,X_i\}\}\}\}`$ (18)
$`\{\{X_i,\dot{X}_i\}\}`$ $`=`$ $`0,i,k=1,\mathrm{},d-1.`$ (19)
When the space of functions on the toroidal membrane is replaced by the algebra of $`N\times N`$ matrices, the coordinates of the membrane become the matrices $`A_i(t)`$, the velocity is the $`SU(N)`$ electric field $`E_i(t)=\dot{A}_i(t)`$, and the magnetic field in three or seven dimensions is $`B_i(t)=\frac{1}{2}f_{ijk}[A_j,A_k]`$ where $`f_{ijk}`$ is the $`ϵ_{ijk}`$ totally antisymmetric symbol in three dimensions and $`\mathrm{\Psi }_{ijk}`$ the octonionic multiplication table in seven dimensions .
The Moyal bracket generalizes both Poisson brackets and matrix commutators, so that one is tempted to consider a system where the Poisson bracket is replaced by the Moyal one . The question of the appearance of the Moyal bracket for physical reasons in the dynamics of the membrane is up to now open. We know that there are other limits of the Matrix model; one leads to perturbative string field theory , and the Poisson limit in which the $`SU(N)`$ symmetry becomes the area-preserving diffeomorphism group. We believe that the physical origin of the Moyal bracket is due to the presence of the antisymmetric background field $`C_{ijk}`$ in the light-cone gauge which gives a ‘magnetic’ flux (Hall effect), transforming the surface of the membrane into a non-commutative phase–space. This is true for open membranes where the topological term of the action receives a contribution from the boundary.
## 4. Topological charge, Bogomol’nyi bound and Integrability.
In order to explain the appearance of non-abelian electric-magnetic type of duality in the membrane theory, we recall that for YM–potentials independent of space coordinates the self-duality equation in the gauge $`A_0=0`$ is
$`\dot{A}_i`$ $`=`$ $`{\displaystyle \frac{1}{2}}ϵ_{ijk}[A_j,A_k],i,j,k=1,2,3`$ (20)
According to ref the only non-trivial higher dimensional YM self-duality equations exist in 8 space-time dimensions which, for the 7-space coordinate independent potentials, can be written (in the $`A_0=0`$ gauge) as
$`\dot{A}_i`$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{\Psi }_{ijk}[A_j,A_k],i,j,k=1,\mathrm{}7`$ (21)
where $`\mathrm{\Psi }_{ijk}`$ is the multiplication table of the seven imaginary octonionic units.
It is now tempting to take the large $`N`$-limit and replace the commutators by Poisson (Moyal) brackets to obtain the self-duality equations for membranes (non-commutative instantons for the Moyal case). In this limit we replace the gauge potentials $`A_i`$ by the membrane coordinates $`X_i`$. Then, the 3-d system is,
$`\dot{X}_i`$ $`=`$ $`{\displaystyle \frac{1}{2}}ϵ_{ijk}\{X_j,X_k\},i,j,k=1,2,3,`$ (22)
while in seven space dimensions
$`\dot{X}_i`$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{\Psi }_{ijk}\{X_j,X_k\},i,j,k=1,\mathrm{},7`$ (23)
and correspondingly for the case of Moyal brackets in three dimensions and in seven dimensions. It is easy to see that the self-duality membrane equations, imply the second order Euclidean-time, equations of motion in the light-cone gauge as well as the Gauss law.
One striking feature of the self-duality membrane equations is their simple geometrical meaning. These equations state that the normal vector at a point of the membrane surface and the velocity at the same point are parallel (self-dual) or anti–parallel (anti-self-dual). The possibility to write down self-duality equations based on the existence of vector cross-product comes from the existence of the quaternionic and octonionic algebras. Since these are the only existing division algebras the 3 and 7 dimensions are unique<sup>3</sup><sup>3</sup>3For other approach to self-duality see also.. The validity of this geometrical statement could be extended in a general curved space-time background as a definition of the self-dual membranes.
If one takes the limit where the commutator of matrices is replaced by commutator of operators or the Moyal bracket, then the self-duality equations become the Moyal Nahm or Moyal-Bogomol’nyi equations of .
The membrane instantons carry a topological charge density which satisfies a Bogomol’nyi bound :
$`\mathrm{\Omega }(X)={\displaystyle \frac{1}{3!}}ϵ^{abc}f_{ijk}X_a^iX_b^jX_c^k`$ (24)
where $`X_a^i=\partial _{\xi _a}X^i`$, $`a,b,c=1,2,3`$ and $`f_{ijk}=ϵ_{ijk}`$ when $`i,j,k=1,2,3`$ and $`f_{ijk}=\mathrm{\Psi }_{ijk}`$ for $`i,j,k=1,\mathrm{},7`$. This topological charge density defines the topological charge of the membrane
$`Q`$ $`=`$ $`{\displaystyle \frac{1}{V_3}}{\displaystyle \int d^3\xi \mathrm{\Omega }(X)}`$ (25)
where $`V_3`$ is the volume of the integration region. The topological charge $`Q`$ is an integer and represents the degree of the map from the membrane to its world volume. We display below the convenient representation of the topological charge which will help us demonstrate that it is a lower bound of the membrane action for topologically non-trivial membranes
$`\mathrm{\Omega }(X)={\displaystyle \frac{1}{2}}\dot{X}_if_{ijk}\{X^j,X^k\}={\displaystyle \frac{1}{2}}\{X^j,X^k\}^2`$ (26)
where the self-duality equations as well as the properties of $`f_{ijk}`$ in three and seven dimensions have been used. The topological charge of the membrane can now be written as
$`Q`$ $`=`$ $`{\displaystyle \frac{1}{2V_3}}{\displaystyle \int _M}d^3\xi \{X^j,X^k\}^2`$ (27)
The minimum value of $`Q`$ ($`Q=1`$) is obtained for the membrane instanton compactified on a world–volume torus, $`X_1=\sqrt{2}\sigma _1`$, $`X_2=\sqrt{2}\sigma _2`$ and $`X_3=2t`$.
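A quick symbolic check that this configuration indeed satisfies the self-duality equations (22) is given below (a Python/SymPy sketch; the identification of $`(\sigma _1,\sigma _2)`$ with the membrane's spatial parameters and the normalization of the Poisson bracket are assumptions made for illustration):

```python
import sympy as sp

s1, s2, t = sp.symbols("sigma1 sigma2 t", real=True)

def poisson(f, g):
    """Poisson bracket on the membrane surface, {f,g} = f_{s1} g_{s2} - f_{s2} g_{s1}."""
    return sp.diff(f, s1) * sp.diff(g, s2) - sp.diff(f, s2) * sp.diff(g, s1)

X = [sp.sqrt(2) * s1, sp.sqrt(2) * s2, 2 * t]        # the Q = 1 torus instanton above

for i in range(3):
    rhs = sp.Rational(1, 2) * sum(sp.LeviCivita(i, j, k) * poisson(X[j], X[k])
                                  for j in range(3) for k in range(3))
    assert sp.simplify(sp.diff(X[i], t) - rhs) == 0  # self-duality equations (22) hold
```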
The Euclidean action can be written as
$`S`$ $`=`$ $`{\displaystyle \frac{1}{V_3}}{\displaystyle d^3\xi \left(\frac{1}{2}\dot{X}_i^2+\frac{1}{4}\{X_j,X_k\}^2\right)}`$ (28)
From the inequality $`(\dot{X}_i\pm \frac{1}{2}f_{ijk}\{X_j,X_k\})^2\geq 0`$ we derive that,
$`S\geq Q`$ (29)
and the equality holds only for the self-dual or anti-self-dual membranes. So the self-dual or anti-self-dual membranes are BPS Euclidean-time membrane world-volume solitons. As we have seen in ref, the 3$`d`$ and 7-d self-dual solutions preserve 8 and 1 supersymmetries respectively or $`1/2`$ and $`1/16^{th}`$ of the supersymmetry of the light-cone supermembrane Hamiltonian. This is a direct consequence of the above Bogomol’nyi bound and the $`SO(3)`$ and $`G_2`$ rotational space symmetry of the above cases.
The role of the membrane instantons is important in developing a perturbative expansion. Configurations of the membrane around instantons cannot collapse to points or strings, because they have different topological charge. The 3-index antisymmetric gauge field which is so crucial for the uniqueness of the supermembrane Lagrangian participates in the bosonic part through the Chern-Simons term. If its vacuum expectation value is non-zero and proportional to $`\mathrm{\Psi }_{ijk}`$ (in the corresponding 7 dimensions), then the topological charge defined above separates the functional integral into membrane topological sectors. Going now to the case of Moyal-Nahm equations, there is a corresponding topological charge without an obvious geometrical meaning and the Bogomol’nyi bound is valid in this case too. This bound is important for the stability of the corresponding Moyal-Nahm instantons. Recent discussions on the role of instantons in non-commutative YM theories (non-commutative instantons) imply that they can be considered as regularizations of small size instantons in standard YM theories (see e.g. ). The case of Moyal-Nahm equations could be considered as non-commutative membrane instantons which regularize the Poisson or membrane case.
We now make few remarks on the integrability of the self-dual equations. The 3-d self-duality system has a Lax pair and an infinite number of conservation laws . In order to see this, we first rewrite the self-duality equations in the form
$$\dot{X}_+=i\{X_3,X_+\},\qquad \dot{X}_{-}=-i\{X_3,X_{-}\},\qquad \dot{X}_3=\frac{1}{2}i\{X_+,X_{-}\},$$
(30)
where
$$X_\pm =X_1\pm iX_2$$
(31)
The Lax pair equations can be written as
$$\dot{\psi }=L_{X_3+\lambda X_{-}}\psi ,\qquad \dot{\psi }=L_{\frac{1}{\lambda }X_+-X_3}\psi ,$$
(32)
where the differential operators $`L_f`$ are defined as
$$L_f\equiv i\left(\frac{\partial f}{\partial \varphi }\frac{\partial }{\partial \mathrm{cos}\theta }-\frac{\partial f}{\partial \mathrm{cos}\theta }\frac{\partial }{\partial \varphi }\right).$$
(33)
The compatibility condition of $`(\text{32})`$ is
$$[\partial _t-L_{X_3+\lambda X_{-}},\partial _t-L_{\frac{1}{\lambda }X_+-X_3}]=0,$$
(34)
from which, comparing the two sides for the coefficients of the powers $`\frac{1}{\lambda },\lambda ^0,\lambda ^1`$ of the spectral parameter $`\lambda `$, we find $`(\text{30})`$. From the linear system $`(\text{32})`$, using the inverse–scattering method, one could in principle construct all solutions of the self-duality equations.
The infinite number of conservation laws are derived as follows: from the Cartesian formulation
$`{\displaystyle \frac{dX_i}{dt}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}ϵ_{ijk}\{X_j,X_k\}`$ (35)
contracting with a complex 3-vector $`u_i`$ such that
$`u_i`$ $`=`$ $`ϵ_{ijk}u_jv_k,`$ (36)
where $`u_iu_i=0`$, and $`v`$ is another complex vector with $`v_iv_i=1`$ and $`u_iv_i=0`$, we find,
$`{\displaystyle \frac{duX}{dt}}`$ $`=`$ $`\{uX,vX\}`$ (37)
The latter is a Lax pair type equation, which implies
$`{\displaystyle \frac{d}{dt}}{\displaystyle \int d^2\xi (uX)^n}=0`$ (38)
Applying the same method in seven dimensions with two complex 7-vectors $`u_i,v_i`$ such that $`u_iu_i=0`$, $`v_iv_i=1`$ and $`u_iv_i=0`$, leads to the equation
$`{\displaystyle \frac{duX}{dt}}`$ $`=`$ $`\{uX,vX\}+{\displaystyle \frac{1}{2}}\varphi _{jklm}u_jv_k\{X_l,X_m\}`$ (39)
The curvature tensor $`\varphi _{jklm}`$ is defined as the dual of $`\mathrm{\Psi }_{ijk}`$ in seven dimensions. When equation (39) is restricted to three dimensions we recover (37). We observe that the presence of the curvature tensor is an obstacle for the integrability. At this point, we may look for an extended definition of integrability replacing the zero-curvature condition with the octonionic curvature one. We can restrict the above equation in particular subspaces of solutions where integrability appears. One possibility is the factorization of the time .
We conclude with a few remarks. In this note we have given arguments that the Matrix model describes a non-commutative YM theory for the supermembrane in the presence of background three-index antisymmetric gauge fields. If this conjecture is true, it implies that the excitations of this model in various compactifications are also physical excitations of the supermembrane. So the supermembrane should contain 11-d supergravity at least in weak coupling limits given by small radii of the compactification manifolds. It is tempting to calculate correlation functions of membrane observables using the Matrix model and then take the large $`N`$-limit as was discussed in section 3. On the other hand, perturbation theory for the supermembrane could be defined through the expansion in the parameter $`\hbar /N`$, with $`M/N\to \hbar /2\pi `$ for $`M,N\to \mathrm{\infty }`$. In this expansion all the topologies of the membrane appear as splitting and joining interactions. The other known large $`N`$–limit gives the string perturbation theory as a QM sector of the supermembrane.
As this work was written, we have been kindly informed that the Moyal limit of the Matrix model has been studied in connection with the higher derivative corrections to the Born-Infeld Lagrangians for the D2–brane. For a very recent, interesting paper on D–branes in group manifolds, see
One of us (EGF) would like to thank prof. Albert Schwarz for a valuable discussion.
# Analysis by neutron activation analysis of some ancient Dacian ceramics
## 1 Introduction
Ceramics is the most common archaeological material, and it is therefore widely used by historians to draw temporal and cultural characterizations. The importance of knowing the compositional scheme of pottery is well established<sup>1-5</sup>, although only rarely can firm conclusions be drawn from the elemental analysis of potsherds<sup>6,8</sup>. Perlman and Assaro<sup>7</sup>, on the basis of neutron activation analysis of thousands of objects of ancient ceramics, established a method for classifying the objects into well-defined groups characterized from the historical point of view (culture, dating, style, etc.).
In this paper we have analyzed by neutron activation analysis (NAA) samples of ancient Dacian ceramics from three different settlements on Romanian territory: Strei San Giorgiu (Hunedoara), Popesti (Giurgiu) and Fierbinti (Ialomita). The ceramics were provided by the National Museum of History in Bucharest. We have searched for a characteristic element, or ratio of elements, for each Dacian archaeological settlement.
## 2 Experimental method
The samples listed in Table 1 were analyzed by neutron activation analysis. Since the analysis should reflect the bulk composition of the objects, the surface of the shards was removed. We also took into consideration the homogeneity of the samples and the requirement that they be representative of the whole object. The potsherd samples were cut, weighed and wrapped individually in plastic foil. Samples of 10-30 mg were irradiated at the rabbit system of the VVR-S reactor of NIPNE, Bucharest-Magurele, at a flux of 1.3 x 10<sup>12</sup> neutrons cm<sup>-2</sup>s<sup>-1</sup>, for a period of 15 minutes. Spectroscopically pure metallic copper was used as the neutron flux standard.
The measurements were performed with a 135 cm<sup>3</sup> Ge(Li) detector coupled to a PC through an MCA interface. The system gave a resolution of 2.4 keV at 1.33 MeV (<sup>60</sup>Co). The radioactivity of the samples was measured after the decay of the activity induced in <sup>27</sup>Al (T<sub>1/2</sub>=2.54 min), the major element in the structure of ceramics. After a decay time of 10 min we observed in the $`\gamma `$ spectra of the irradiated samples the $`\gamma `$ rays of the isotopes corresponding to the elements Ba, Mn and Na. After a decay time of 3-4 d we measured the radioactivity of the ceramics samples again and could observe the elements Sm, Eu, Sc, La and K.
## 3 Results and discussions
Tables 2, 3 and 4 show the results of the NAA of the three groups of ancient Dacian potsherds S, P and F, from the three different settlements: Strei San Giorgiu (Hunedoara), Popesti (Giurgiu) and Fierbinti (Ialomita). The concentrations are given in ppm; when a concentration was larger than 10,000 ppm the result is given in percent. The statistical errors were $`\approx `$1% for Mn and Na and $`<`$5% for the other elements.
One can observe from Fig. 1 that the concentration of Ba seems to vary from one group to another. The mean values and standard deviations of the Ba concentration for each group are the following:
| Ceramics Strei San Giorgiu | C<sub>Ba</sub>=2248$`\pm `$833 ppm |
| --- | --- |
| Ceramics Popesti | C<sub>Ba</sub>=796$`\pm `$226 ppm |
| Ceramics Fierbinti | C<sub>Ba</sub>=4138$`\pm `$467 ppm |
We then applied the procedure of considering the Na/Mn ratio, which was found to be constant in ancient ceramics from a given archaeological settlement of the Maya period<sup>4</sup>.
The values of the means and standard deviations of the Na/Mn ratio for the three groups of analyzed Dacian ceramics are the following:

| Ceramics Strei San Giorgiu | Na/Mn=28.3$`\pm `$22.9 |
| --- | --- |
| Ceramics Popesti | Na/Mn=7.40$`\pm `$3.53 |
| Ceramics Fierbinti | Na/Mn=7.92$`\pm `$1.68 |
If we remove from the calculation of the means the values lying far from the mean, we obtain the following values:
| Ceramics Strei San Giorgiu | Na/Mn=15.04$`\pm `$0.07 |
| --- | --- |
| Ceramics Popesti | Na/Mn=5.91$`\pm `$1.30 |
| Ceramics Fierbinti | Na/Mn=7.32$`\pm `$1.18 |
For the analyzed samples of ancient Dacian potsherds we can say that the Na/Mn concentration ratio is not constant and cannot characterize a given settlement. Ba is the element that could be considered to differentiate, to some extent, the three groups of ceramics. To draw firm conclusions it would be necessary to improve the statistics of the analysis and also to pay more attention to the homogeneity of the samples, which in the case of ceramics is a very important parameter of the analysis.
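As an illustration of the trimming procedure used above for the Na/Mn means, a minimal numerical sketch is given below; the rejection threshold and the example values are placeholders, not the measured ratios of Tables 2-4.

```python
import numpy as np

# Sketch of a trimmed mean/standard deviation: values lying further than
# n_sigma from the mean are dropped before the statistics are recomputed.
# The 1.5-sigma cut and the example ratios are illustrative assumptions only.
def trimmed_mean_std(values, n_sigma=1.5):
    v = np.asarray(values, dtype=float)
    m, s = v.mean(), v.std(ddof=1)
    kept = v[np.abs(v - m) <= n_sigma * s]
    return kept.mean(), kept.std(ddof=1)

print(trimmed_mean_std([15.1, 14.9, 15.2, 15.0, 70.3]))  # the outlying 70.3 is rejected
```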
References
1. A. Aspinal, D. N. Slater, ”Neutron activation analysis of medieval ceramics”, Nature 217 (1968) 368
2. J. S. Olin and Ed. V. Sayre, ”Trace analysis of English and American pottery of the american colonial period” The 1968 Intern. Conference of Modern Trends in Activation Analysis” (1968) p. 207
3. N. Saleh, A. Hallak and C. Bennet, ”PIXE analysis of ancient Jordanian pottery”, Nuclear Instruments and Methods 181 (1981) p. 527
4. Ed. Sayre, ”Activation Analysis applications in art and archaeology”, in Advances in Activation Analysis, eds. J.M.A. Lenihan, S.J. Thomson and V.P. Guinn, Academic Press, London, p.157
5. Ch. Lahanier, F.D. Preusser and L. Van Zelst, ”Study and conservation of museum objects: use of classical analytical techniques”, Nuclear Instruments and Methods, B14 (1986) p.2
6. Zvi Goffer, Archaeological Chemistry, Chemical Analysis, Vol. 55, eds. P.J. Elving and J.D. Winefordner, John Wiley & Sons, p.108
7. I. Perlman and F. Assaro, ”Deduction of provenience of pottery from trace element analysis”, Scientific Methods in Medieval Archaeology, ed. R. Berger Univ. of California Press (1970) p.389
8. A. Millet and H. Catling, ”Composition and provenance: a challenge”, Archaeometry, Vol. 9 (1966) p.92
no-problem/9908/astro-ph9908157.html
# On the degree of scale invariance of inflationary perturbations
## I Introduction
Inflation generates adiabatic density perturbations that can seed the formation of structure in the Universe. They arise from quantum fluctuations in the field that drives inflation and are stretched to astrophysical size by the enormous growth of the scale factor during inflation . The magnitude of these perturbations was recognized early on to be important in constraining inflationary models. The nearly scale-invariant value for the scalar spectral index, $`n\simeq 1`$, is considered to be one of the three principal predictions of inflation, and the deviation of $`n`$ from unity is an important probe of the underlying dynamics of inflation .
The advantage of scale-invariant primordial density perturbations was first spelled out nearly three decades ago : any other spectrum, in the absence of a long-wavelength or short-wavelength cutoff, will have excessively large perturbations on small scales or large scales.<sup>*</sup><sup>*</sup>*Inflation provides a natural cutoff on comoving scales smaller than $``$1 km, the horizon size at the end of inflation; perturbations on scales larger than the present horizon will not be important until long into the future. Thus, for inflation exact scale invariance is not necessary to avoid problems with excessively large perturbations. Even though inflation provided the first realization of such a spectrum, long before inflation many cosmologists considered the scale-invariant spectrum to be the only sensible one. For this reason, the inflationary prediction of a deviation from scale invariance – even if small – becomes all the more important.
One of the pioneering papers on inflationary fluctuations emphasized that the fluctuations were not precisely scale-invariant; the first quantitative discussion followed a year later . The COBE DMR detection of CBR anisotropy awakened the inflationary community to the testability of the inflationary density-perturbation prediction. The connection between $`(n-1)`$ and the underlying inflationary potential was pointed out soon thereafter , and the possibility of reconstructing the inflationary potential from measurements of CBR anisotropy began being discussed . It is now quite clear that the degree of deviation from scale invariance is an important test and probe of inflation.
Particular inflationary potentials and the values of $`n`$ they predict have been widely discussed in the literature (see e.g., Refs. ). Lyth and Riotto , for example, remark that many inflationary potentials can be written in the form $`V(\varphi )=V_0(1\pm \mu \varphi ^p)`$ (in the interval relevant for inflation), and conclude that virtually all potentials of this form give $`0.84<n<0.98`$ or $`1.04<n<1.16`$ (also see Ref. ). Experimental limits on $`n`$, derived from CBR anisotropy measurements, are not yet very stringent, $`0.7<n<1.2`$ . Even the stronger bound claimed by Bond and Jaffe , $`n=0.95\pm 0.06`$, falls far short of the potential of future CBR experiments (e.g., the MAP and Planck satellites), $`\sigma _n\simeq 0.01`$ .
The purpose of our paper is to discuss the general issue of the deviation from scale invariance, and to explain why scale invariance is a generic feature of inflation. In so doing, we will take a very agnostic approach to models. In view of our lack of knowledge about physics of the scalar sector and of the inflationary-energy scale, this seems justified. As we show, the slow-roll conditions necessary for inflation are closely related to the possible deviation from scale invariance. To illustrate what must be done to achieve significant deviation from scale invariance, we discuss models based upon smooth potentials where $`n`$ is much smaller than and much larger than unity.
## II Why inflationary perturbations are nearly scale invariant
The equations governing inflation are well known
$`\ddot{\varphi }+3H\dot{\varphi }+V^{}(\varphi )`$ $`=`$ $`0`$ (1)
$`H^2\equiv \left({\displaystyle \frac{\dot{a}}{a}}\right)^2`$ $`=`$ $`{\displaystyle \frac{8\pi }{3m_{PL}^2}}\left[V(\varphi )+{\displaystyle \frac{1}{2}}\dot{\varphi }^2\right]`$ (2)
$`N\equiv \mathrm{ln}(a_f/a_i)`$ $`=`$ $`{\displaystyle \int _{\varphi _i}^{\varphi _f}}H𝑑t`$ (3)
$`\delta _H^2(k)`$ $`\propto `$ $`V^3/V^{\prime 2}\propto k^{n-1},`$ (4)
where $`a(t)`$ is the cosmic scale factor, derivatives with respect to the field $`\varphi `$ are denoted by prime, and derivatives with respect to time by overdot. The quantity $`\delta _H`$ is the post-inflation horizon-crossing amplitude of the density perturbation, which, if the perturbations are not precisely scale invariant is a function of comoving wavenumber $`k`$. (The dimensionless amplitude $`\delta _H`$ also corresponds to the dimensionless amplitude of the fluctuations in the gravitational potential.)
In computing the density perturbations, the value of the potential and its first derivative are evaluated when the scale $`k`$ crossed outside the horizon during inflation. Because both $`V`$ and $`V^{}`$ can vary, $`\delta _H^2\propto k^{n-1}`$ in general depends upon scale; exact scale-invariance corresponds to $`n=1`$. For most models, $`\delta _H^2`$ is not a true power law, but rather $`n`$ varies slowly with scale, typically $`|dn/d\mathrm{ln}k|\sim 10^{-3}`$ ; in fact, both $`n`$ and $`dn/d\mathrm{ln}k`$ are measurable cosmological parameters and can provide important information about the potential.
In the slow-roll approximation the $`\ddot{\varphi }`$ term is neglected in the equation of motion for $`\varphi `$ and the kinetic term is neglected in the Friedmann equation :
$`\dot{\varphi }`$ $`\simeq `$ $`-{\displaystyle \frac{V^{}}{3H}}`$ (5)
$`N`$ $`\simeq `$ $`{\displaystyle \frac{8\pi }{m_{PL}}}{\displaystyle \int _{\varphi _i}^{\varphi _f}}{\displaystyle \frac{d\varphi }{x(\varphi )}}.`$ (6)
The power-law index $`n`$ is given by
$$(n-1)=-\frac{x_{60}^2}{8\pi }+\frac{m_{PL}x_{60}^{}}{4\pi },$$
(8)
where $`x(\varphi )\equiv m_{PL}V^{}(\varphi )/V(\varphi )`$ measures the steepness of the potential and $`x^{}=dx/d\varphi `$ measures the change in steepness. (Higher-order corrections are discussed and the next correction is given in Ref. .) The subscript “60” indicates that these parameters are evaluated roughly 60 e-folds before the end of inflation, when the scales relevant for structure formation crossed outside the horizon.
Deviation from scale invariance is a generic prediction since the inflationary potential cannot be absolutely flat, and it is controlled by the steepness and the change in steepness of the potential. Significant deviation from scale invariance requires a steep potential or one whose steepness changes rapidly. Further, Eq. (8) immediately hints that it is easier to make models with a “red spectrum” ($`n<1`$), than with a “blue spectrum” ($`n>1`$), because the first term in Eq. (8) is manifestly negative, while the second term can be of either sign. In addition, $`x_{60}^2/8\pi `$ is usually larger in absolute value than $`m_{PL}x_{60}^{}/4\pi `$.
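As a concrete illustration of Eq. (8), the sketch below evaluates $`x`$, $`x^{}`$ and $`n-1`$ for a trial potential by finite differences; the quadratic form used is a hypothetical example (not a potential advocated here), and units with $`m_{PL}=1`$ are assumed.

```python
import numpy as np

# Sketch: evaluate Eq. (8) for a trial potential via finite differences.
# Units m_PL = 1; V(phi) = 1 + mu*phi^2 is an illustrative choice only.
m_PL = 1.0

def n_minus_1(V, phi60, h=1e-5):
    Vp  = (V(phi60 + h) - V(phi60 - h)) / (2*h)              # V'
    Vpp = (V(phi60 + h) - 2*V(phi60) + V(phi60 - h)) / h**2  # V''
    x  = m_PL * Vp / V(phi60)                   # steepness x = m_PL V'/V
    xp = m_PL * Vpp / V(phi60) - x**2 / m_PL    # x' = m_PL V''/V - x^2/m_PL
    return -x**2/(8*np.pi) + m_PL*xp/(4*np.pi)  # Eq. (8)

mu = 1.0
print(n_minus_1(lambda p: 1.0 + mu*p**2, phi60=0.05))  # ~0.16, a blue tilt
```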
The two conditions on the potential needed to ensure the validity of the slow-roll approximation are (see e.g., Refs. ):
$`m_{PL}V^{}/V=x`$ $`\lesssim `$ $`\sqrt{48\pi }`$ (17)
$`m_{PL}^2V^{\prime \prime }/V`$ $`\lesssim `$ $`24\pi .`$ (26)
Note that the first slow-roll condition constrains the first term in the expression for $`(n-1)`$, and the second slow-roll condition constrains the second term since $`m_{PL}x^{}=m_{PL}^2V^{\prime \prime }/V-x^2`$.
A model that can give $`n`$ significantly less than 1 is power-law inflation (there are other models too ). It also illustrates the tension between sufficient inflation and large deviation from scale invariance. The potential for power-law inflation is exponential,
$$V=V_0\mathrm{exp}(\beta \varphi /m_{PL}),$$
(27)
the scale factor of the Universe evolves according to a power law
$$a(t)\propto t^{16\pi /\beta ^2}\equiv t^p\text{ with }p\equiv 16\pi /\beta ^2,$$
(28)
and
$$\dot{\varphi }=\sqrt{\frac{p}{4\pi }}\frac{m_{PL}}{t}.$$
(29)
Further, $`n`$ can be calculated exactly in the case of power-law inflation
$$(n-1)=\frac{2}{1-p}\simeq -\frac{2}{p}\quad (\mathrm{slow}\mathrm{roll}\mathrm{limit}).$$
(30)
For this potential $`x=\beta `$, $`x^{}=0`$ (constant steepness), and the slow-roll constraint implies $`|\beta |\lesssim 7`$, or $`p\gtrsim 1`$. This is not very constraining as $`p>1`$ is required for the superluminal expansion necessary for inflation . The quantitative requirement of sufficient inflation to solve the horizon problem and a safe return to a radiation-dominated Universe before big-bang nucleosynthesis (reheat temperature $`T_{\mathrm{RH}}\gtrsim 1`$MeV and reheat age $`t_{\mathrm{RH}}\lesssim 1`$sec) and baryogenesis ($`T_{\mathrm{RH}}>1`$TeV and $`t_{\mathrm{RH}}<10^{-12}`$sec) restricts $`p`$ more seriously.
In particular, the amount of inflation is depends upon when inflation ends:
$$N=\frac{8\pi }{m_{PL}}\int _{\varphi _i}^{\varphi _f}\frac{d\varphi }{x(\varphi )}=p\mathrm{ln}(H_i/H_f),$$
(31)
where $`H_i=p/t_i`$ and $`H_f=p/t_f`$. The number of e-folds $`N`$ required to solve the horizon problem (i.e., expand a Hubble-sized patch at the beginning of inflation to comoving size larger than the present Hubble volume) is approximately 60, but depends upon $`H_i`$ and $`H_f`$ if $`p`$ is not $`\gg 1`$ (see e.g., Ref. ):
$$N>74+\mathrm{ln}(H_i/H_f)+\frac{1}{2}\mathrm{ln}(H_f/m_{PL}).$$
(32)
Bringing everything together, the constraint to $`p`$ is
$$p>1+\frac{74}{\mathrm{ln}(H_i/H_f)}+\frac{1}{2}\frac{\mathrm{ln}(H_f/m_{PL})}{\mathrm{ln}(H_i/H_f)}.$$
(33)
Based upon the gravity-wave contribution to CBR anisotropy $`H_i`$ must be less than about $`10^{-5}m_{PL}`$ and the baryogenesis constraint implies $`H_f\gtrsim (1\mathrm{TeV})^2/m_{PL}\sim 10^{-32}m_{PL}`$. Since reheating is not expected to be very efficient and baryogenesis may require a temperature much greater than $`1`$TeV (if it involves GUT, rather than electroweak, physics), we can safely say that $`H_f\gg 10^{-32}m_{PL}`$. Thus, sufficient inflation and safe return to a radiation-dominated Universe before baryogenesis requires:
$`p`$ $`\gtrsim `$ $`2`$ (34)
$`(1-n)`$ $`\lesssim `$ $`2.`$ (35)
Even insisting that $`H_f\gtrsim (10^{13}\mathrm{GeV})^2/m_{PL}`$, a typical inflation scale, only leads to $`p\gtrsim 5`$ and $`n\gtrsim 0.5`$, which is still a large deviation from scale invariance.
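For orientation, Eq. (33) is easy to evaluate numerically; the short sketch below assumes $`H_i\sim 10^{-5}m_{PL}`$ and the two values of $`H_f`$ discussed above, in units with $`m_{PL}=1`$.

```python
import numpy as np

# Sketch: lower bound on p from Eq. (33), in units m_PL = 1.
def p_min(H_i, H_f):
    return 1 + 74/np.log(H_i/H_f) + 0.5*np.log(H_f)/np.log(H_i/H_f)

H_i = 1e-5
print(p_min(H_i, 1e-32))  # ~1.6 for the extreme lower bound on H_f;
                          # since realistically H_f >> 1e-32, p >~ 2 follows
print(p_min(H_i, 1e-12))  # ~4.7, i.e. p >~ 5 and hence n >~ 0.5
```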
While the exponential potential allows a very large deviation from $`n=1`$, it illustrates the tension between achieving sufficient inflation and large deviation from scale invariance: because $`(1-n)=2/(p-1)`$, large deviation from scale invariance implies a slow, prolonged inflation, $`\mathrm{ln}(t_f/t_i)\simeq N(1-n)/2`$, with the change in the inflaton field being many times the Planck mass, $`\mathrm{\Delta }\varphi \simeq N\sqrt{(1-n)/(8\pi )}m_{PL}\gg m_{PL}`$. Other models also exhibit this tension: For example, for the potential $`V(\varphi )=V_0-m^2\varphi ^2/2+\lambda \varphi ^4/4`$, the lower limit to $`n`$ is set by the condition of sufficient inflation .
Achieving $`n`$ significantly greater than 1 provides a different challenge since the first term in the equation for $`(n-1)`$ is negative and the work must be done by the change-in-steepness term, $`m_{PL}x^{}/4\pi `$. To see the difficulty of doing so, let us assume that we can expand the slow-roll parameter $`x(\varphi )`$ around a point $`\varphi _{}`$ in the slow-roll region:
$$x(\varphi )\simeq x_{}+x_{}^{}(\varphi -\varphi _{}).$$
(36)
This expression holds for potentials whose steepness does not change much in the slow-roll region. $`N`$ can now be evaluated explicitly:
$`N={\displaystyle \frac{8\pi }{m_{PL}}}{\displaystyle \int _{\varphi _i}^{\varphi _f}}{\displaystyle \frac{d\varphi }{x(\varphi )}}={\displaystyle \frac{8\pi }{x_{60}^{}m_{PL}}}\mathrm{ln}\left({\displaystyle \frac{x_i}{x_f}}\right),`$ (37)
where $`x_i`$ and $`x_f`$ are understood to have been evaluated according to expression (36). Combining expressions (37) and (8), we get
$$n-1=\frac{2}{N}\mathrm{ln}\left(\frac{x_i}{x_f}\right)-\frac{x_{60}^2}{8\pi },$$
(38)
and the difficulty of obtaining large $`n-1`$ is now more transparent. For example, to get $`n\simeq 1.5`$ with $`N\simeq 60`$ we need $`\mathrm{ln}(x_i/x_f)>15`$ – more, if $`x_{60}^2/8\pi `$ is not negligible. Not only does such a large change seem unnatural, but it probably invalidates the expansion in Eq. (36).
Note, Eq. (38) (and others below) make it appear that $`(n1)`$ depends directly upon the amount of inflation. This is not really the case, because $`N`$ is the number of e-folds that occur during the time $`x`$ evolves from $`x_i`$ to $`x_f`$. In relating $`(n1)`$ to properties of the potential it is probably most useful to set $`N=60`$, and further to expand $`x(\varphi )`$ around $`\varphi _{60}`$, the era relevant to creating our present Hubble volume. Therefore, we choose $`\varphi _i=\varphi _{}=\varphi _{60}`$.
Now further specialize to the case where $`x_{60}^2/8\pi \ll |m_{PL}x_{60}^{}|/4\pi `$ and $`|x_{60}|\gg |x_{60}^{}\mathrm{\Delta }\varphi |`$, where $`\mathrm{\Delta }\varphi =\varphi _f-\varphi _i`$. Here we have explicitly assumed that the change in the steepness of the potential is small. It now follows that
$`N`$ $`\simeq `$ $`{\displaystyle \frac{8\pi }{m_{PL}}}\left|{\displaystyle \frac{\mathrm{\Delta }\varphi }{x_{60}}}\right|`$ (39)
$`(n-1)`$ $`\simeq `$ $`{\displaystyle \frac{2}{N}}\left|{\displaystyle \frac{\mathrm{\Delta }\varphi }{x_{60}}}\right|x_{60}^{}<{\displaystyle \frac{2}{N}}`$ (40)
(note that $`\mathrm{\Delta }\varphi `$ and $`x_{60}`$ are of opposite sign). Thus, we get a very strong constraint on $`n`$ in this case, $`(n-1)<0.04`$, and learn that to achieve $`n`$ significantly greater than unity, the scalar field must change by much more than $`m_{PL}`$.
One well-known class of inflationary models that gives $`n>1`$ is hybrid inflation ; in the slow-roll region, $`V(\varphi )\simeq V_0(1+\mu \varphi ^2)`$. In these models,
$`N`$ $`\simeq `$ $`{\displaystyle \frac{4\pi }{\mu m_{PL}^2}}\mathrm{ln}(\varphi _i/\varphi _f)`$ (41)
$`(n-1)`$ $`\simeq `$ $`{\displaystyle \frac{m_{PL}x^{}}{4\pi }}={\displaystyle \frac{\mu m_{PL}^2}{2\pi }}\simeq {\displaystyle \frac{2}{N}}\mathrm{ln}(\varphi _i/\varphi _f).`$ (42)
Thus, $`n`$ significantly larger than 1 can be achieved, albeit at the expense of an exponentially long roll, $`\varphi _i/\varphi _f=\mathrm{exp}[N(n-1)/2]`$. However, $`\varphi _f`$ may not be arbitrarily small here – in fact, the smallest value it can take in the semi-classical approximation is equal to the magnitude of quantum fluctuations of the field, $`H/2\pi `$ (this is further discussed in the next section). This constraint, in combination with the other constraints, limits the maximum value of $`n`$ in hybrid inflation scenarios to $`n\simeq 1.2`$ .
To end, as well as summarize, this discussion, let us rewrite Eq. (38) by expressing $`x_{60}^2/8\pi `$ in terms of $`N`$ and $`\mathrm{\Delta }\varphi `$ by assuming that $`x(\varphi )`$ doesn’t change too much:
$$(n-1)\simeq \frac{2}{N}\mathrm{ln}(x_i/x_f)-\frac{8\pi }{N^2}\left(\frac{\mathrm{\Delta }\varphi }{m_{PL}}\right)^2.$$
(43)
As this equation illustrates, unless $`\mathrm{\Delta }\varphi /m_{PL}`$ is large or the steepness changes significantly, $`|n-1|\lesssim 2/N\simeq 0.04`$. This is certainly borne out by inflationary model building: with a few notable exceptions all models predict $`|n-1|\lesssim 0.1`$ .
## III Models with very blue spectra
### A Constraints
The conditions for successful inflation were spelled out a decade ago . The règles du jeu are:
$``$ Slow-roll conditions must be satisfied.
$``$ Sufficient number of e-folds to solve the horizon problem ($`N\gtrsim 60`$).
$``$ Density perturbations of the correct amplitude
$$\delta _H\simeq V_{60}^{3/2}/V_{60}^{}\simeq 10^{-5}.$$
(44)
$``$ The distance that $`\varphi `$ rolls in a Hubble time must exceed the size of quantum fluctuations, otherwise the semi-classical approximation breaks down
$$\dot{\varphi }H^{-1}\gg H/2\pi \Rightarrow V^{}\gg V^{3/2}/m_{PL}^3,$$
(45)
which is automatically satisfied if the density perturbations are small. Additionally, no aspect of inflation should hinge upon $`\varphi _i`$ or $`\varphi _f`$ being smaller than $`H/2\pi `$, the size of the quantum fluctuations.
$``$ “Graceful exit” from inflation. The potential should have a stable minimum with zero energy around which the field oscillates at the stage of reheating. The reheat temperature must be sufficiently high to safely return the Universe to a radiation-dominated phase in time for baryogenesis and BBN.
$``$ No overproduction of undesired relics such as magnetic monopoles, gravitinos, or other nonrelativistic particles.
There are additional constraints that the potential should obey in order to give $`n1`$:
(a) $`m_{PL}x_{60}^{}/4\pi `$ has to be large and positive, while $`x_{60}^2/8\pi `$ should be negligibleOf course, $`x_{60}^2/8\pi `$ is not required to be negligible, but it seems that it is even more difficult to get large $`(n-1)`$ without this assumption.. Therefore $`|x_{60}|\lesssim O(1)`$ and $`m_{PL}x_{60}^{}\simeq 4\pi (n-1)`$. In other words, at 60 e-folds before the end of inflation the potential should be nearly flat and starting to slope upwards.
(b) To obtain 60 e-folds of inflation, the potential should be nearly flat in some region during inflation. However, the potential must not become too flat, since then density perturbations diverge ($`\delta _H\propto 1/V^{}`$). Therefore, the potential should have a point of approximate inflection where $`V^{}(\varphi )`$ is small but not zero.
### B Example 1
A potential with the characteristics just mentioned is
$$V=V_0+M^4\left[\mathrm{sinh}\left(\frac{\varphi -\varphi _1}{f}\right)+e^{-{\scriptscriptstyle \frac{\varphi }{g}}}\right],$$
(46)
where $`M`$, $`f`$, $`g`$ and $`\varphi _1`$ are constants with dimension of mass. The plot of the potential, with the parameters calculated below, is shown in the top panel of Fig. 1. The hyperbolic sine was invoked to satisfy requirements (a) and (b), while the exponential was used to produce a stable minimum.
We make the following assumptions to make the analysis simpler (later justified by our choice of parameters below):
1) $`V_0`$ dominates the potential in the slow-roll region,
$$V_0\gg M^4\mathrm{sinh}\left(\frac{\varphi -\varphi _1}{f}\right)\text{ for }\varphi _i>\varphi >\varphi _f.$$
(47)
2) $`f\gg g`$ so that the factor $`\mathrm{exp}(-\varphi /g)`$ can be completely ignored in the slow-roll region.
3) $`(\varphi -\varphi _1)/f`$ is at least of the order of a few for $`\varphi _i>\varphi >\varphi _f`$, so that $`\mathrm{sinh}[(\varphi -\varphi _1)/f]\gg 1`$.
4) For simplicity we take $`\varphi _i=\varphi _{60}`$.
In terms of the dimensionless parameter $`K\equiv {\displaystyle \frac{M^4m_{PL}}{V_0f}}`$,
$`x`$ $`\simeq `$ $`K\mathrm{cosh}\left({\displaystyle \frac{\varphi -\varphi _1}{f}}\right),`$ (48)
$`x^{}`$ $`\simeq `$ $`{\displaystyle \frac{K}{f}}\mathrm{sinh}\left({\displaystyle \frac{\varphi -\varphi _1}{f}}\right)-{\displaystyle \frac{K^2}{m_{PL}}}\mathrm{cosh}^2\left({\displaystyle \frac{\varphi -\varphi _1}{f}}\right).`$ (49)
The condition that $`x_{60}\lesssim 𝒪(1)`$ becomes
$$K\mathrm{cosh}\left(\frac{\varphi _{60}-\varphi _1}{f}\right)\lesssim 𝒪(1),$$
(50)
and the end of inflation occurs when one of the slow-roll conditions breaks down; in this case $`m_{PL}^2V^{\prime \prime }/V\simeq -24\pi `$, or
$$-\frac{m_{PL}K}{f}\mathrm{sinh}\left(\frac{\varphi _f-\varphi _1}{f}\right)\simeq 24\pi .$$
(51)
We can now write
$$(n-1)\simeq \frac{m_{PL}K}{4\pi f}\mathrm{sinh}\left(\frac{\varphi _{60}-\varphi _1}{f}\right).$$
(52)
That inflation produces density perturbations of the correct magnitude implies
$$\sqrt{V_0}\simeq 4.3\times 10^{-6}x_{60}m_{PL}^2.$$
(53)
The expression for the number of e-folds can be calculated analytically. Introducing $`\alpha =(\varphi -\varphi _1)/f`$, we have:
$`N`$ $`=`$ $`{\displaystyle \frac{8\pi }{m_{PL}}}{\displaystyle \int _{\varphi _i}^{\varphi _f}}{\displaystyle \frac{d\varphi }{x(\varphi )}}`$ (54)
$`=`$ $`{\displaystyle \frac{8\pi f}{Km_{PL}}}\mathrm{tan}^{-1}[\mathrm{sinh}(\alpha )]|_{\alpha _i}^{\alpha _f}\simeq {\displaystyle \frac{8\pi ^2f}{Km_{PL}}}.`$ (55)
In the last equality we used the fact that both $`\alpha _i`$ and $`|\alpha _f|`$ are at least of the order of a few, so that $`\mathrm{tan}^{-1}[\mathrm{sinh}(\alpha _i)]\simeq -\mathrm{tan}^{-1}[\mathrm{sinh}(\alpha _f)]\simeq \pi /2`$. This assumption will also be fully justified with our choice of parameters below.
Finally, the potential should have a stable minimum (with $`V=0`$) at some $`\varphi =\varphi _R`$. This implies that $`V(\varphi _R)=0`$ and $`V^{}(\varphi _R)=0`$.
Before proceeding, we must specify $`n`$. We choose, somewhat arbitrarily, $`n=2`$. Of course, for such a large $`n`$ we should include terms beyond the lowest order, complicating the analysis. But we are not looking for accuracy – if $`n=2`$ is obtainable to first order, then one can certainly say that $`n1`$ is obtainable. (In fact, for the two potentials chosen, the second-order correction decreases $`n1`$ only slightly.)
We now have to choose parameters $`V_0`$, $`M`$, $`f`$, $`g`$, $`\varphi _1`$, $`\varphi _{60}`$, $`\varphi _f`$ and $`\varphi _R`$ to satisfy Conditions (50 \- 55), as well as $`V(\varphi _R)=0`$ and $`V^{}(\varphi _R)=0`$. The choice of these parameters is by no means unique, however. Here is such a set:
$`V_0=1.7\times 10^{-13}m_{PL}^4`$
$`M^4=1.3\times 10^{-17}m_{PL}^4`$
$`f=7.6\times 10^{-3}m_{PL}`$
$`g=f/5`$
$`{\displaystyle \frac{\varphi _1}{f}}=8.80`$
$`{\displaystyle \frac{\varphi _f}{f}}=4.10`$
$`{\displaystyle \frac{\varphi _{60}}{f}}=11.75.`$
To verify our analytic results we integrated the equation of motion for $`\varphi `$ numerically and computed the spectrum of density perturbations. We did so both neglecting the $`\ddot{\varphi }`$ term in the equation of motion for $`\varphi `$ and the kinetic energy of the field (slow-roll approximation), and taking both these quantities into account. The result is that $`N_{\mathrm{slow}\mathrm{roll}}=57.3`$ and $`N_{\mathrm{exact}}=57.9`$. Thus, the field really rolls as predicted by analytic methods ($`N\simeq 60`$), and the slow-roll approximation holds well for this potential.
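A minimal numerical sketch of this check is given below; it assumes the reconstructed form of Eq. (46) (argument $`(\varphi -\varphi _1)/f`$ and exponential $`e^{-\varphi /g}`$), the parameter values quoted above, units with $`m_{PL}=1`$, and evaluates only the slow-roll e-fold integral rather than the full equation of motion.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: slow-roll e-fold count for the Example 1 potential, Eq. (46).
# Units m_PL = 1; parameter values as quoted in the text.
V0, M4, f = 1.7e-13, 1.3e-17, 7.6e-3
g = f/5.0
phi1, phif, phi60 = 8.80*f, 4.10*f, 11.75*f

def V(phi):
    return V0 + M4*(np.sinh((phi - phi1)/f) + np.exp(-phi/g))

def dV(phi):
    return M4*(np.cosh((phi - phi1)/f)/f - np.exp(-phi/g)/g)

# N = 8*pi * integral_{phi_f}^{phi_60} (V/V') dphi   (slow-roll approximation)
N, _ = quad(lambda p: 8*np.pi*V(p)/dV(p), phif, phi60)
print(N)  # ~57, in line with the N_slow-roll = 57.3 quoted above
```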
The numerical results for the spectrum of density perturbations did contain a surprise, shown in Fig. 2. While this potential achieved large $`n`$, slightly smaller than 2, over a few e-folds $`n`$ falls to a smaller valueStarting the roll higher on the potential will increase the highest $`n`$ achieved without violating any of the constraints. However, $`n`$ will fall to equally low values after a few e-folds as with the original $`\varphi _i`$. . Indeed, even restricting the spectrum to astrophysically interesting scales, $`\sim 1`$Mpc to $`10^4`$Mpc, the spectrum is not a good power law, $`|dn/d\mathrm{ln}k|\simeq 0.3`$, and is reminiscent of the “designer spectra” with special features constructed in Ref. . The reason is simple: in achieving $`x^{}\gg 1`$ an even larger value of $`x^{\prime \prime }`$ was attained.
### C Example 2
Is there anything special about the hyperbolic sine? Not really – for example, a potential of the form “$`\varphi +\varphi ^3`$” also works. Consider the potential
$$V=V_0+M^4\left[\left(\frac{\varphi -\varphi _1}{f}\right)+\left(\frac{\varphi -\varphi _1}{f}\right)^3+e^{-{\scriptscriptstyle \frac{\varphi }{g}}}\right].$$
(57)
Again, we assume that $`V_0`$ dominates during inflation, that $`\varphi _i=\varphi _{60}`$ and that $`\mathrm{exp}(-\varphi /g)`$ can be ignored in the inflationary region. To evaluate $`N`$, we further assume that $`|(\varphi _{60}-\varphi _1)/f|\gg 1`$ and $`|(\varphi _f-\varphi _1)/f|\gg 1`$. All of these assumptions are justified by the choice of parameters below.
The analysis of the inflationary constraints is similar. We conclude that large $`n`$ (here $`n=2`$) is possible, with the following parameters:
$`V_0=1.09\times 10^{-12}m_{PL}^4`$
$`M^4=1.46\times 10^{-16}m_{PL}^4`$
$`f=g=1.33\times 10^{-2}m_{PL}`$
$`{\displaystyle \frac{\varphi _1}{f}}=13.54`$
$`{\displaystyle \frac{\varphi _f}{f}}=1.82`$
$`{\displaystyle \frac{\varphi _{60}}{f}}=16.34.`$
This potential is shown in the bottom panel of Fig. 1. Numerical integration of the equation of motion shows that our “60 e-folds” is actually $`N_{\mathrm{slowroll}}=55.0`$ and $`N_{\mathrm{exact}}=56.0`$. Further, just as with the hyperbolic sine potential, $`n\simeq 2`$ is achieved, but the spectrum of perturbations is not a good power law. Both potentials achieve a large change in steepness by having inflation occur near an approximate inflection point; however, the derivative of the change in steepness is also large, and $`n`$ varies significantly. The change in $`n`$ can be mitigated at the expense of a smaller value of $`n`$; see Fig. 2.
## IV Conclusions
The deviation of inflationary density perturbations from exact scale invariance ($`n=1`$) is controlled by the steepness of the potential and the change in steepness, cf. Eq. (8). The steepness of the potential also controls the relationship between the amount of inflation and change in the field driving inflation, $`N\simeq 8\pi (\mathrm{\Delta }\varphi /m_{PL})/x`$. A very “red spectrum” can be achieved at the expense of a steep potential and prolonged inflation ($`t_f/t_i\gg 1`$ and $`\mathrm{\Delta }\varphi \gg m_{PL}`$); the simplest example is power-law inflation. A very “blue spectrum” can be achieved at the expense of a large change in steepness near an inflection point in the potential and a poor power law. In both cases there appears to be a degree of unnaturalness.
The robustness of the inflationary prediction that density perturbations are approximately scale-invariant is expressed by Eq. (43),
$$(n-1)\simeq \frac{2}{N}\mathrm{ln}(x_i/x_f)-\frac{8\pi }{N^2}\left(\frac{\mathrm{\Delta }\varphi }{m_{PL}}\right)^2.$$
Unless the change in steepness of the potential is large, $`|\mathrm{ln}(x_i/x_f)|\gg 1`$, or the duration of inflation is very long, $`\mathrm{\Delta }\varphi \gg m_{PL}`$, the deviation from scale invariance must be small, $`|n-1|\lesssim 𝒪(2/N)\sim 0.1`$. Even for an extreme range in $`n`$, say from $`n=0.5`$ to $`n\simeq 1.5`$, the variation of $`\delta _H`$ over astrophysically interesting scales, $`\sim `$1 Mpc to $`10^4`$Mpc, is not especially large – a factor of $`\sim 10`$ or so – but is easily measurable.
Inflation also predicts a nearly scale-invariant spectrum of gravitational waves (tensor perturbations). The deviation from scale invariance is controlled solely by the first term in $`(n-1)`$ , $`n_T=-x_{60}^2/8\pi `$. Thus, only a red spectrum is possible, with the same remarks applying as for density (scalar) perturbations with $`n<1`$. In addition, the relative amplitude of the scalar and tensor perturbations is related to the deviation of the tensor perturbations from scale invariance, $`T/S\simeq -7n_T`$ ($`S`$ and $`T`$ are respectively the scalar and tensor contributions to the variance of the quadrupole anisotropy of the CBR). Detection of the gravity-wave perturbations is an important, but very challenging, test of inflation; if, in addition, the spectral index of the tensor perturbations can be measured, it provides a consistency test of inflation .
Finally, measurements of the anisotropy of the CBR and of the power spectrum of inhomogeneity today which will be made over the next decade will probe the nature of the primeval density perturbations and determine $`n`$ precisely ($`\sigma _n\simeq 0.01`$) . By so doing they will provide a key test of inflation and provide insight into the underlying dynamics. On the basis of our work here, as well as previous studies (see e.g., Ref. ), one would expect $`(n-1)\sim 𝒪(0.1)`$ or less, but not precisely zero. The determination that $`|n-1|\gtrsim 𝒪(0.2)`$, or for that matter $`n=1`$, would point to a handful of less generic potentials. The deviation of $`n`$ from unity is a key test of inflation and provides valuable information about the underlying potential .
no-problem/9908/astro-ph9908052.html
# NICMOS and VLA Observations of the Gravitatonally Lensed Ultraluminous BAL Quasar APM 08279+5255: Detection of a Third Image
## 1 Introduction
Discovered serendipitously in a survey of carbon stars within the Galactic halo, the bright $`(\mathrm{m}_\mathrm{r}=15.2)`$ z=3.87 broad absorption line quasar APM 08279+5255 was found to be positionally coincident with a source in the IRAS Faint Source Catalog (Irwin et al. (1998)). The bolometric luminosity, inferred from these optical and far-IR fluxes, exceeds $`5\times 10^{15}\mathrm{L}_{\odot }(\mathrm{\Omega }_\mathrm{o}=1,\mathrm{H}_\mathrm{o}=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1},\text{assumed throughout})`$, making APM 08279+5255 the most luminous object currently known. Submillimeter photometry is consistent with thermal emission from a massive $`(3\times 10^8\mathrm{M}_{\odot })`$, warm dust component (Lewis et al. (1998)), with APM 08279+5255 displaying an overall spectral energy distribution which is similar to other ultraluminous systems at high redshift (e.g. H1413+117 Barvainis et al. (1995)). Recent observations with the 30m IRAM telescope have also detected a massive quantity of molecular gas; such an environment is ripe for star formation, and this process may indeed be responsible for a substantial fraction of APM 08279+5255’s phenomenal luminosity (Downes et al. (1998)). From their CO observations, Downes et al. (1998) determined a redshift of 3.911 for APM 08279+5255. This differs by $`\sim 2500\mathrm{km}\mathrm{s}^{-1}`$ from that determined from the high ionization broad emission lines (Irwin et al. (1998)). Since it is essentially impossible to measure accurately the systemic redshift of a complex BAL QSO like APM 08279+5255 from optical spectra, we have chosen to adopt the value of z=3.911 throughout this work.
Gravitational lensing can distort our view of the distant universe, enhancing the flux of galaxies and AGN, giving them the appearance of extraordinary systems. This has been the case for several ultraluminous systems discovered in recent years \[e.g. H1413+117 (Magain et al. (1988); Kneib et al. (1998)), IRAS FSC 10214+4724 (Rowan-Robinson et al. (1991); Broadhurst & Lehár (1995); Eisenhardt et al. (1996)) and the proto-galaxy candidate MS1512-cB58 (Yee et al. (1996); Williams & Lewis (1996); Seitz et al. (1998))\]. Given the extreme inferred luminosity of APM 08279+5255, the possibility that gravitational lensing is influencing the observed properties must also be considered.
Analysis of the point spread function (PSF) derived from the first ground-based observations, obtained with the Jacobus Kapteyn Telescope (JKT), revealed that APM 08279+5255 possesses a non-stellar profile. Rather, its profile was found to be better represented by a pair of point-like sources, separated by $`0\text{′′}\text{.}4`$ (Irwin et al. (1998)). This was interpreted as indicative of multiple imaging of APM 08279+5255 by a massive system along the line-of-sight, probably one or both of the strong Mg II absorbers seen at z=1.18 and z=1.81. The two components were found to possess very similar brightnesses, within 10%; with this, the magnification of the optical continuum was estimated to be $`\sim 40`$. Follow-up observations with the adaptive optics bonnette at the Canada-France-Hawaii telescope confirmed the existence of multiple components in APM 08279+5255, with a separation of $`0\text{′′}\text{.}35\pm 0\text{′′}\text{.}02`$ (Ledoux et al. (1998)). The relative brightness of the two images was found to be $`1.21\pm 0.25`$, consistent with the JKT observations. The submillimeter-infrared flux, which arises in a larger emission region, is subject to less enhancement \[for IRAS FSC 10214+4724 the optical flux is thought to be magnified a factor of 2–3 times that of the dominant infrared emission (Eisenhardt et al. (1996))\].
While the above observations revealed that APM 08279+5255 is a good candidate for gravitational lensing, the scale of the PSF in the images is comparable to the image separation, and no detection of the lensing galaxy was made. The uncertainty of the relative image/lens configuration leads to uncertainty in the lens model and hence the inferred lensing magnification, without which the intrinsic properties of APM 08279+5255 cannot be determined. In an effort to confirm the lensing hypothesis and identify the lensing galaxy, APM 08279+5255 was observed with NICMOS on the Hubble Space Telescope and with the VLA, and the results of these observations are presented here. In Section 2 the details of the observations are presented, while a lens model for APM 08279+5255 is presented in Section 3. The conclusions of this study are presented in Section 4.
## 2 Observations
### 2.1 NICMOS
On the 11th of October 1998, we observed APM 08279+5255 with the NICMOS<sup>1</sup><sup>1</sup>1 Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract No. NAS5-26555. infra-red camera on board the Hubble Space Telescope (HST). The highest spatial resolution NICMOS camera NIC1 was used to obtain 12 exposures of 14 sec each in the J-band (F110W), and 12 exposures of 40 sec each in the H-band (F160W). A further 12 exposures of 10 sec each were secured in the K-band (F205W) with the NIC2 camera (as a K-band filter is not available on NIC1). Each set of exposures were taken with 1” offset pointings in a spiral dither pattern. The main difference, for the present purposes, between the NIC1 and NIC2 cameras is the pixel scale: the pixel scale is $`0\text{′′}\text{.}043`$ on NIC1 and $`0\text{′′}\text{.}075`$ on NIC2. The orientation of the instrument was such that the position angle of the CCD y-axis on all exposures with the NIC1 camera was $`66.84^{}`$ E of N, and all exposures with the NIC2 camera have the CCD y-axis $`66.03^{}`$ E of N.
The raw frames were pre-reduced with the package CALNICA, using the appropriate calibration files from the STScI archive. This processing includes bias subtraction, flat fielding, dark correction, and a photometric calibration for converting detector counts to physical flux units.
A visual inspection of the preprocessed frames (see the top row of Figure 1) revealed two bright images separated by $`0\text{′′}\text{.}35`$–$`0\text{′′}\text{.}4`$, as previously inferred from ground-based images. The brightest image, which we hereafter refer to as component “A”, is located towards the NE; the slightly fainter component “B” is on the SW end of the system. Clearly visible on the preprocessed F110W and F160W frames is a third, previously undetected, image. This component (which we shall label “C”), is substantially fainter than components A and B, and is located between those two bright images.
The gravitational lensing models which are compared to the data in §3 below require an accurate description of the observations in terms of relative fluxes and positions of the components of this system. To this end we have analyzed the data in two alternate ways, measuring magnitudes and positions on individual preprocessed frames, and on a combined image stack in each passband.
Since the three components are at best $`\sim 3`$ pixels separated from each other, it is necessary to perform a careful point-spread function fitting analysis, a requirement of which is the determination of a good PSF model for each camera and passband. The NICMOS PSF depends on many factors. The factors that arise from spacecraft settings include: the chosen camera and filter, the focus setting, and the position of the source on the detector. The PSF is wavelength dependent, so the source spectrum also affects its PSF. It was for this last reason that we adopted PSFs simulated with the TINYTIM algorithm (which models the effects of all of the above constraints), instead of picking stars from the STScI archive of NICMOS observations to construct a PSF. We assumed that APM 08279+5255 has a flat spectrum in $`\nu F_\nu `$ over the region from $`0.8\mu m`$ to $`2.35\mu m`$, as suggested by the spectral energy distribution displayed in Lewis et al. (1998), their Figure 2. This choice is supported by the photometric results of the present study, listed in Table 1 below.
In the first pass of data-reductions, we constructed a TINYTIM PSF appropriate for the camera, filter, focus position and assumed source spectrum, and used the approximate location of component A in each frame as the PSF position. This PSF was then fit to the data on each data frame using the ALLSTAR PSF fitting program (Stetson 1987) to measure the magnitudes and positions of the three components. These position measurements are accurate to typically better than $`\sim 0.1`$ pixels (judging from the RMS scatter in $`|\stackrel{}{x_A}-\stackrel{}{x_B}|`$). This information enables us to use TINYTIM to construct better PSFs, taking into account the actual sub-pixel location of each component on each frame. Refined magnitudes and positions were subsequently obtained by re-running the ALLSTAR program; the mean and RMS values of these measurements are listed in Table 1.
A combined stacked frame was constructed, using the positions of components A and B on each frame to define the frame registration. The individual preprocessed data frames were resampled onto a finer grid at a scale of $`0\text{′′}\text{.}025/\mathrm{pixel}`$, using the STSDAS “DRIZZLE” algorithm, and then medianed to give a combined frame in each of the F110W, F160W and F205W passbands; these high resolution frames are reproduced in Figure 1, and are shown as a color-composite map in Figure 2.
In the middle-row panels of Figure 1 we have chosen image brightness cuts that emphasize a peculiarity of our dataset: the first Airy ring is not uniform, but instead appears brighter on the higher row number side of the object centers. This feature of the PSF is not predicted by the TINYTIM software<sup>2</sup><sup>2</sup>2The anonymous referee brought to our attention that other NICMOS data (e.g. the F160W NIC2 exposures of RXJ0911.4+0551) also display a non-uniform Airy ring.. Note that it cannot be due to some structure of the source, as it is coincident with the Airy ring in each passband. The deviation away from the model PSF is significant — a normal TINYTIM PSF model that fits the core of component A in the F110W filter underestimates the flux in the first Airy ring by approximately 10% of the total flux (i.e. the first Airy ring is approximately 50% brighter than expected). We implemented a simple fix of this problem by altering the model PSFs: we applied a linear ramp to the first Airy ring, while leaving the central region of the PSFs ($`0\text{′′}\text{.}1`$ pixels in F110W, $`0\text{′′}\text{.}15`$ pixels in F160W, and $`0\text{′′}\text{.}2`$ pixels in F205W) unaltered. The form of the linear ramp was chosen to be $`PSF^{}(x,y)=[1+S(y-y_c)]PSF(x,y)`$, where $`S`$ is the slope of the ramp, and $`y_c`$ is the PSF center. As can be seen from the middle-row panels of Figure 1, the region above (towards high row number) component A is free of contamination from other components, as is the region below component B; it was to the data in these two regions that we fit the multiplicative slope $`S`$, which gave $`S=200\%/\mathrm{arcsec}`$ in F110W, $`S=120\%/\mathrm{arcsec}`$ in F160W, and $`S=66\%/\mathrm{arcsec}`$ in F205W.
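A minimal sketch of this type of ramp correction is given below; it assumes the multiplicative factor takes the form $`1+S(y-y_c)`$, with $`S`$ expressed per pixel (converted from %/arcsec via the plate scale), and that the core region is left untouched. It is an illustration rather than the exact procedure used.

```python
import numpy as np

# Sketch: apply a linear multiplicative ramp to a model PSF outside its core.
# S is assumed to be in fractional units per pixel (convert from %/arcsec
# using the plate scale); r_core is the unaltered central radius in pixels.
def ramp_correct(psf, x_c, y_c, S, r_core):
    y, x = np.indices(psf.shape).astype(float)
    ramped = psf * (1.0 + S*(y - y_c))
    core = (x - x_c)**2 + (y - y_c)**2 < r_core**2
    ramped[core] = psf[core]   # keep the central region of the PSF unchanged
    return ramped
```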
Using these PSF models with an improved estimate of the brightness distribution around the first Airy ring, we obtained ALLSTAR measurements of the magnitudes and positions of the three components A, B and C on the median-combined image stacks in each color. These measurements are listed in Table 1.
We now summarize the observational results.
* The reduced chi-squared value of the PSF fits to the three components in the median-combined stacked frame is $`\chi ^2<2`$, indicating that to good approximation, the three components are point sources. The residuals of these PSF fits are displayed in the bottom-row panels of Figure 1.
* The colors of components A and B are identical to within the uncertainties. If we accept that the measurements from the stacked frames are more reliable than the mean value of the measurements from individual frames, then the colors of component C are also consistent with those of A and B. The simplest hypothesis is therefore that the components A, B and C have identical colors.
* Given that the colors of the components are identical, we can average their relative brightnesses over the three passbands. We find then that the relative brightness of components A and B is $`0.773\pm 0.007`$ ($`0.772\pm 0.012`$), where the value in brackets is the mean of the measurements on unstacked frames, and the uncertainties are the RMS scatter in the three measurements. The relative brightness of components A and C is $`0.175\pm 0.008`$, ($`0.202\pm 0.013`$). Thus, we find good consistency between the measurement methods, which indicates that the relative brightnesses of components A, B and C are well constrained by the data.
* The fluxes of the components in $`\nu F_\nu `$ are approximately constant as a function of wavelength (slightly brighter in H).
* Averaging over the position measurements in each passband, the distance from A to B is $`0.377\pm 0.002`$ ($`0.379\pm 0.001`$), and the distance from A to C is $`0.150\pm 0.006`$ ($`0.146\pm 0.007`$). The location of component C is not on the straight line connecting A and B. This can be seen by measuring the position of component C in a new coordinate system $`(X,Y)`$, where $`X`$ points from A to B, and $`Y`$ is orthogonal to $`X`$ (and points in a north-westerly direction). In these coordinates, $`X_C=0.146\pm 0.007`$ ($`X_C=0.143\pm 0.007`$) and $`Y_C=0.031\pm 0.001`$ ($`Y_C=0.030\pm 0.0005`$). While these RMS uncertainties in the $`Y_C`$ positions likely underestimate the true uncertainty, it is clear that to a high confidence level, the three components of APM 08279+5255 are not co-linear.
### 2.2 VLA
We obtained 3.6 cm radio observations of APM 08279+5255, using the hybrid BnA configuration of the NRAO<sup>3</sup><sup>3</sup>3The National Radio Astronomy Observatory is a Facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. Very Large Array (VLA), on 18<sup>th</sup> June, 1998. We integrated on the target for about 2.5 hours, divided into half-hour scans, interleaved with a nearby phase calibrator (B0820+560). The interferometer data were calibrated and mapped using standard AIPS<sup>4</sup><sup>4</sup>4 AIPS (Astronomical Image Processing System) is distributed by NRAO. procedures, and the flux densities were calibrated to 3C 147 (Baars et al. 1977). The target proved to be very faint in the radio, and no significant polarized emission was detected. Thus only a single iteration of CLEAN mapping was performed, without self-calibration.
The resulting radio map (see Figure 3) shows a very faint source, with a peak flux density of only 0.26 mJy/beam, where the off-source map RMS was 0.013 mJy/beam, close to the expected thermal noise. The peak of the radio emission is at $`\alpha `$=08:27:58.00, $`\delta `$=+52:55:26.9 (B1950), displaced by $`0\stackrel{}{\mathrm{.}}6`$ from the HST A-image position, probably due to a combination of HST and VLA astrometric errors. The total VLA flux of the source is $`0.45\pm 0.03`$ mJy, integrated over a $`1arcsec`$ square aperture. There is also a marginally detected source (peak SNR$`5`$), roughly $`3arcsec`$ to the southwest, with a total flux density of $`0.09\pm 0.03`$ mJy. To best display the radio structure, given the elliptical natural beam, we used a $`0\stackrel{}{\mathrm{.}}4`$ FWHM circular restoring beam.
The radio source is clearly resolved along the A–B image axis. We fitted two Gaussian components to a $`1arcsec`$ square region enclosing the radio source, with a relative offset fixed to the HST A–B offset, and widths matched to the $`0\stackrel{}{\mathrm{.}}4`$ convolving beam. The two components had an A/B flux density ratio of 0.6, consistent with the HST image brightness ratio of 0.7, but with a noticeable residual between them. Adding a third component at the C-image offset decreased the fit $`\chi ^2`$ by $`30\%`$, and yielded B/A and C/A flux ratios of 0.9 and 0.5 respectively. However, we could not definitively confirm the C-image: the fit $`\chi ^2`$ remained less than twice the best value provided that the C image contained less than twice the A image flux density.
## 3 Gravitational Lensing
### 3.1 Image Configuration
Using ground-based data, both Irwin et al. (1998) and Ledoux et al. (1998) determined that APM 08279+5255 comprised a pair of point-like images separated by $`0\text{′′}\text{.}4`$. With the limited information available, namely the image separation and relative brightnesses, both modeled the lensing configuration as a singular isothermal sphere and this suggested that the quasar source in APM 08279+5255 is magnified by a factor of 20–40. Taking this into account, intrinsically APM 08279+5255 still ranks amongst the brightest systems known.
The data presented here greatly enhances our view of APM 08279+5255, clearly resolving the system into a pair of bright point-like images either side of a fainter third image. But before the degree of gravitational lensing in this system can be fully explored, the nature of the third image must be investigated. Two possibilities present themselves; either it is the lensing galaxy which is responsible for splitting the bright image pair, or it represents a true ‘third’ image of the quasar source.
Several lines of argument point towards the latter possibility. First, morphologically image C is point-like and it possesses identical colors to the brighter components. The spectra of APM 08279+5255 with both the 2.5m Isaac Newton Telescope (Irwin et al. (1998)) and the 10m Keck I Telescope (Ellison et al. 1999a ; Ellison et al. (1999)) reveal the presence of two Mg II absorption systems at z=1.18 and z=1.81. If we adopt the lower of these redshifts as the potential redshift of the lensing galaxy, then its distance modulus is $`\approx 44.6`$ ($`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`q_0=0.5`$). Thus if image C is the lens, its absolute magnitude would have to be prodigious, $`\mathrm{M}_\mathrm{J}\approx -32`$ (note that the K-correction is approximately zero, as the SED of component C is flat in $`\nu f_\nu `$). This is inconsistent with the expected Faber-Jackson luminosity of the lens, given the small $`\sim 130\mathrm{km}\mathrm{s}^{-1}`$ velocity dispersion predicted by a simple isothermal sphere lensing model (Irwin et al. 1998). Finally, examination of the Keck I HIRES spectra (Ellison et al. 1999a ; Ellison et al. (1999)), which have a resolution of $`\sim 6\mathrm{km}\mathrm{s}^{-1}`$ (0.04Å/pixel), reveals a number of absorption systems both along the line-of-sight to the quasar, the Ly<sub>α</sub> forest, and associated with the quasar source, the broad absorption lines. In large spectral intervals, through regions of significant optical depth, Ly<sub>α</sub> absorption is saturated, and the spectrum is effectively black with a signal that is consistent with zero. Since flux from the (foreground) lensing galaxy must fill in these troughs, their darkness can be used to place an upper limit to the brightness of the lens. Indeed, between 5500Å and 5900Å, the darkest 200 pixel-wide region is centered at 5769.2Å, and the $`3\sigma `$ upper limit to the mean flux in this 200 pixel region is 600 times fainter than the mean flux over the interval 5500Å to 5900Å (we have adopted the Ellison et al. 1999a error spectrum as a reasonable estimate of all sources of noise in the spectrum, including the uncertainty arising from scattered light in the HIRES spectrograph). This implies that the – as yet undetected – lensing galaxy must be at least $`\sim 7`$ magnitudes fainter than the quasar, that is, $`\mathrm{V}\gtrsim 22`$. Thus, the hypothesis that image C is the lensing galaxy leads to an unrealistically red color: $`\mathrm{V}-\mathrm{K}\gtrsim 8`$. These facts provide compelling evidence that C is a third image of the high redshift quasar source. This, however, cannot be conclusively demonstrated without further observations of APM 08279+5255, and in the following sections, which deal with gravitational lens modeling, both the third image and lensing galaxy possibilities as a source for C will be considered.
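The quoted distance modulus follows directly from the $`q_0=0.5`$ luminosity distance; as a quick check, assuming the standard Mattig form for an $`\mathrm{\Omega }_\mathrm{o}=1`$ cosmology,

$$d_L=\frac{2c}{H_0}\left[(1+z)-\sqrt{1+z}\right]\approx 8.4\times 10^3\mathrm{Mpc}\text{ at }z=1.18,\qquad \mu =5\mathrm{log}_{10}(d_L/10\mathrm{pc})\approx 44.6.$$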
### 3.2 Modeling
Given APM 08279+5255’s apparent position as the most luminous object currently known, it is important to determine how much the action of gravitational lensing is enhancing our view of this distant source. The excellent resolution of the NICMOS images reveals a third point-like source between the brighter quasar images, and while available evidence points to this being a third image of the quasar source, the possibility that this emission arises in the lensing galaxy is not ruled out. In terms of modeling the gravitational lensing in APM 08279+5255 both possibilities are considered.
#### 3.2.1 C as a Third Quasar Image
If image C represents a third image of the quasar source, then extant observations have failed to detect the lensing galaxy and hence its characteristics and position relative to the quasar images remains unknown to us. However, due to the fact that the image positions are not co-linear, we know that this lens must possess some degree of asymmetry, either due to some intrinsic ellipticity or shearing from a nearby companion. Similarly, to produce the relatively bright third image, a finite core radius is required (Wallington & Narayan (1993)). We would wish, therefore, to employ a finite core, and an elliptical mass distribution in modeling the gravitational lensing in this system. Any such model, however, requires a minimum of 7 parameters (two source positions, an ellipticity, core radius, mass orientation, slope and normalization), while the data only offer 5 constraints (three relative positions and two relative image brightnesses). To this end, we constructed a simple model with a general search of parameter space.
To reduce parameter space, the mass distribution is taken as being isothermal at large radii, turning over in the inner regions to give a core. Sampling the image characteristics for a range of source positions, ellipticities and core radii resulted in the model presented in Figure 4. The left-hand panel presents the position of the source relative to the caustic distribution in the source plane. The right-hand panel presents the image plane with the corresponding critical lines. The small circles represent the resultant positions of the images; these are superimposed on the NICMOS image. Both the relative image positions and magnitudes match those outlined in Table 1. The lensing mass is oriented 76<sup>o</sup> east of north, with a small deviation from circular symmetry $`(ϵ\approx 0.01)`$. The core radius of the lens is $`0\text{′′}\text{.}21`$, and assuming that this galaxy is at a redshift of z=1.18, its mass interior to the Einstein ring is $`\sim 2\times 10^{10}\mathrm{M}_{\odot }`$. While this recovers the observed image plane characteristics it is, of course, highly degenerate, but it does demonstrate that APM 08279+5255 is consistent with being a gravitationally lensed system.
We do note, however, that more complex lens models are possible and APM 08279+5255 may represent lensing by a ‘naked cusp’ (Wallington & Narayan (1993)). Similarly, the location of at least one other lensing system along the line-of-sight to APM 08279+5255 can lead to a more complex caustic network in the source plane. With either of these, the observed image configuration can be reproduced with quite different degrees of magnification. The true nature of the gravitational lensing configuration in this system will remain uncertain until the lensing galaxy has been detected.
#### 3.2.2 C as the Lensing Galaxy
The observed three-image configuration is consistent with a fundamental theorem of gravitational lensing which dictates that an odd number of images is always formed by non-singular mass distributions (Burke (1981)). It is, however, at odds with observations of most gravitationally lensed systems, as these are seen to possess an even number of images (c.f. http://cfa-www.harvard.edu/castles/); this seeming incompatibility between gravitational lens theory and observations is usually resolved by invoking a (near-)singular core in the lensing galaxy, which can drive the magnification of one of the images to zero (Narasimha, Subramanian & Chitre (1986)). Conversely, this suggests that the lensing mass distribution must contain a finite core radius if the image at C is to be appreciably magnified; this is exactly the situation in the previous model (Section 3.2.1).
Unless this is the first true bona-fide three-image lensed system, however, we must also consider the case that C represents the lensing galaxy and that the ‘true’ third image has been demagnified by an almost singular core. With this, the lensing potential of C was modeled as an elliptical, isothermal potential (c.f. Kochanek, Blandford, Lawrence & Narayan (1989)). The lensing configuration offers 5 constraints: two positions, relative to the lens, for each image, and the relative image brightnesses. The model has 3 free parameters: the ellipticity, orientation and normalization of the mass distribution. The position of C is not used as a constraint on the lens position; rather, the lens position is treated as a further pair of free parameters. With 5 free parameters and 5 constraints, a search for an exact model fit can be made.
The resulting best-fit lensing configuration for this model is presented in Figure 5; here the grey represents the position of the source relative to the lensing caustics, while the observed images and critical lines are solid. The best position for the lens is $`0\text{′′}\text{.}0989`$ West and $`0\text{′′}\text{.}1026`$ South of image A, consistent with the position of image C (Table 1). The mass has a 1-D velocity dispersion of 126$`\,\mathrm{km}\,\mathrm{s}^{-1}`$ and is oriented at 104° east of north, an offset of $`28^{\circ }`$ from the model presented in Section 3.2.1, although the ellipticity of the mass profile is substantial, with $`ϵ=0.2`$; the differences between the models are understandable given the different interpretations of the nature of C. The source has an impact parameter, relative to the lens, of $`0\text{′′}\text{.}01`$ (SSW). Again, the location of the source is similar to that found in Section 3.2.1. This analysis illustrates, from a gravitational lensing point of view, that C could represent the lensing galaxy.
But what of the nature of C? As discussed previously, if it is the lensing system then its brightness indicates that it is not a ‘normal’ galactic system. One possibility is that C is itself a luminous quasar, explaining both its apparent brightness and point-like appearance. Such quasar-quasar lensing has been addressed (Wampler (1997)), although only high resolution imaging and spectroscopy will uncover C’s true nature.
### 3.3 The Intrinsic Luminosity of APM 08279+5255
Using the models described above, the degree to which the quasar source is magnified can be determined. For the first model (Section 3.2.1), which assumes image C is a third image of the quasar, the total magnification of the quasar source is $`\sim 90`$. For the model described in Section 3.2.2, in which image C represents the lensing galaxy responsible for splitting the quasar light into the A and B images, the corresponding magnification of the quasar continuum source is $`\sim 7.5`$. We expect the magnification of the far-IR continuum source to be a factor of 2–3 times lower than the magnification of the quasar continuum source, due to the larger size of the emission region. Taking these factors into account, the intrinsic luminosity of APM 08279+5255 is $`10^{14}-10^{15}\mathrm{L}_{\odot }`$ and it retains its place amongst the most luminous systems currently known.
## 4 Conclusions
We have presented new observations of the ultraluminous BAL quasar APM 08279+5255, using both NICMOS on the Hubble Space Telescope and the VLA at 3.5cm. These clearly demonstrate the composite nature of the system, separating APM 08279+5255 into a pair of images, A and B, of comparable brightness. The NICMOS images also reveal the presence of a third point source, C, between the brighter two. The VLA image shows structure corresponding to the background quasar being a faint radio source, and we conclude that APM 08279+5255 is intrinsically a radio-quiet quasar with ordinary radio properties. Although the radio source is resolved along A–B, we cannot determine from these radio data whether it comprises the A+B images or the A+B+C images.
There are two possible interpretations of the source of C: either this central image is a detection of the foreground lensing galaxy, or it represents a third image of the high redshift quasar. Several arguments indicate that the latter proposition is the more likely description of the available data. A lensing model in which the three observed images are true images of the quasar source was presented. This recovers both the image configuration and the relative brightnesses, with a total lensing magnification of $`\sim 90`$.
An alternative model, in which image C is the lensing galaxy, was also explored. The position of C was found to be coincident with the position of the lensing galaxy predicted from a simple lensing model, indicating that this scenario may not be implausible. However, to account for its apparent brightness, C must also be an intrinsically luminous source. If this latter model is correct, the simple lensing configuration it offers, with a corresponding time delay of 5 days, provides an ideal example of a “golden lens” (Williams & Schechter 1997) with which $`\mathrm{H}_0`$ can be determined from photometric monitoring.
Considering the magnification of the above models, the intrinsic luminosity of APM 08279+5255 is $`10^{14}-10^{15}\mathrm{L}_{\odot }`$, placing it amongst the most luminous systems currently known, and its apparent brightness makes it ideal for studying aspects of the high redshift Universe (Hines et al. (1999); Ellison et al. (1999); Ellison et al. 1999a).
Recent near- and mid-IR Keck images were brought to our attention just prior to the submission of this article (Egami et al. 1999). Taken in excellent seeing, these also clearly reveal the presence of the third image in the same location and with a similar brightness to that found in the NICMOS images. Given its apparent brightness, Egami et al. also conclude that this source represents a third image of the quasar, rather than the lensing galaxy. They construct a very similar lensing model to the one presented in this paper, although ours possesses a slightly higher total magnification (90 compared to their 71). Overall, our conclusions are in excellent agreement with theirs.
We thank the anonymous referee for constructive comments, and Dr. E. Egami for pointing out an error in the orientation of one of our lensing models.
# Mound formation in nonequilibrium surface growth morphology does not necessarily imply a Schwoebel instability
## Abstract
We demonstrate, using well-established nonequilibrium growth models, that mound formation in the dynamical surface growth morphology does not necessarily imply a surface edge diffusion bias (“the Schwoebel barrier”) as has been almost universally accepted in the literature. We find mounded morphologies in several nonequilibrium growth models which incorporate no Schwoebel barrier. Our work should lead to a critical re-evaluation of recent experimental observations of mounded morphologies which have been theoretically interpreted in terms of Schwoebel barrier effects.
In vacuum deposition growth of thin films or epitaxial layers (e.g. MBE) it is common to find mound formation in the evolving dynamical surface growth morphology. Although the details of the mounded morphology could differ considerably depending on the systems and growth conditions, the basic mounding phenomenon in surface growth has been reported in a large number of recent experimental publications. The typical experiment monitors vacuum deposition growth on substrates using STM and/or AFM spectroscopies. Growth mounds are observed under typical MBE-type growth conditions, and the resultant mounded morphology is statistically analyzed by studying the dynamical surface height $`h(𝐫,t)`$ as a function of the position $`𝐫`$ on the surface and growth time $`t`$. Much attention has focused on this ubiquitous phenomenon of mounding and the associated pattern formation during nonequilibrium surface growth for reasons of possible technological interest (e.g. the possibility of producing controlled nanoscale thin film or interface patterns) and fundamental interest (e.g. understanding nonequilibrium growth and pattern formation).
The theoretical interpretation of the mounding phenomenon has been almost exclusively based on the step-edge diffusion bias or the so-called Schwoebel barrier effect (also known as the Ehrlich-Schwoebel, ES, barrier). The basic idea of ES barrier-induced mounding (often referred to as an instability) is simple: the ES effect produces an additional energy barrier that hinders adatoms diffusing on terraces from coming “down” toward the substrate, thus probabilistically inhibiting attachment of atoms to lower or down-steps and enhancing their attachment to upper or up-steps; mound formation therefore results because deposited atoms cannot come down from upper to lower terraces, and three-dimensional mounds or pyramids build up as atoms are deposited on top of already existing terraces.
The physical picture underlying mounded growth under an ES barrier is manifestly obvious, and clearly the existence of an ES barrier is a sufficient condition for mound formation in nonequilibrium surface growth. Our interest in this Letter is to discuss the necessary condition for mound formation in nonequilibrium surface growth morphology — more precisely, we want to ask the inverse question, namely, whether the observation of mound formation requires the existence of an ES barrier as has been almost exclusively (and uncritically in our opinion) accepted in the recent literature. Through concrete examples we demonstrate rather compellingly that the mound formation in nonequilibrium surface growth morphology does not necessarily imply the existence of an ES barrier, and we contend (and results presented in this Letter establish) that the recent experimental observations of mound formation in nonequilibrium surface growth morphology should not be taken as definitive evidence in favor of an ES barrier-induced universal mechanism for pattern formation in surface growth. Mound formation in nonequilibrium surface growth is a non-universal phenomenon, and could have very different underlying causes in different systems and situations.
Before presenting our results we point out that the possible nonuniversality in surface growth mound formation (i.e. that mounds do not necessarily imply an ES barrier) has recently been mentioned in at least two experimental publications, where it was emphasized that the mounded patterns seen on Si, GaAs, and InP surfaces during MBE growth were not consistent with the phenomenology of a Schwoebel instability. These papers have, however, been essentially ignored in the literature, and the ES barrier-Schwoebel instability paradigm is by now so well-entrenched that experimental observations of mound formation during nonequilibrium growth are often forced to conform to the ES barrier scenario even when the resultant data analyses lead to the inconsistent conclusion that no ES barrier exists in the systems under study. There have been only two proposed mechanisms in the literature which lead to mounding without any explicit ES barrier: one of them invokes a preferential attachment to up-steps compared with down-steps (the so-called “step-adatom” attraction), which, in effect, is equivalent to having an ES barrier because the attachment probability to down-steps is lower than that to up-steps exactly as in the regular ES barrier case — we therefore do not distinguish it from the ES barrier mechanism, and in fact, within the growth models we study, these two mechanisms are physically and mathematically indistinguishable. The second mounding alternative is the so-called edge diffusion induced mounding, where diffusion of adatoms around cluster edges is shown to lead to mound formation during nonequilibrium surface growth even in the absence of any finite ES barrier. One of the concrete examples we discuss below, the spectacular pyramidal pattern formation (Fig. 3(c)) in the 2+1 dimensional (d) noise reduced Wolf-Villain (WV) model, arises from such a nonequilibrium edge diffusion effect (perhaps in a somewhat unexpected context). We also demonstrate, using the WV model and the Das Sarma-Tamborenea (DT) model, that mound formation during nonequilibrium surface growth is, in fact, almost a generic feature of limited mobility growth models, which typically have comparatively large values of the roughness exponent ($`\alpha `$) characterizing the growth morphology. Below we demonstrate that mound formation in surface morphology arising from this generic “large $`\alpha `$” effect (without any explicit ES barrier) is qualitatively virtually indistinguishable from that in growth under an ES barrier. Mound formation in the presence of strong edge diffusion (as in the d=2+1 WV model in Fig. 3) is, on the other hand, morphologically quite distinct from the ES barrier- or large $`\alpha `$-induced mound formation.
Our results are based on the extensively studied limited mobility nonequilibrium WV and DT growth models. Both models have been widely studied in the context of kinetic surface roughening in nonequilibrium solid-on-solid (SOS) epitaxial growth — the interest in and the importance of these models lie in the fact that they were the first concrete, physically motivated growth models falling outside the well-known Edwards-Wilkinson-Kardar-Parisi-Zhang generic universality class in kinetic surface roughening. Both models involve random deposition of atoms on a square lattice singular substrate (with a growth rate of 1 layer/sec, the growth rate defining the unit of time) under the SOS constraint with no evaporation or desorption. An incident atom can diffuse instantaneously before incorporation if it satisfies certain diffusion rules which differ slightly between the two models. In the WV model the incident atom can diffuse within a diffusion length $`l`$ (which is taken to be one, with the lattice constant chosen as the length unit, i.e. only nearest-neighbor diffusion, in all the results shown in this paper — larger values of $`l`$ do not change our conclusions) in order to maximize its local coordination number, or equivalently the number of nearest neighbor bonds it forms with other atoms (if several possible final sites satisfy the maximum coordination condition equally, the incident atom chooses one of them with equal random probability; if no site increases the local coordination relative to the incident site, the atom stays at the incident site). The DT model is similar to the WV model except for two crucial differences: (1) only incident atoms with no lateral bonds (i.e. with a local coordination number of one — a nearest-neighbor bond to the atom below is necessary to satisfy the SOS constraint) are allowed to diffuse (all other deposited atoms, with one or more lateral bonds, are incorporated into the growing film at their incident sites); (2) the incident atoms move only to increase their local coordination number (and not to maximize it as in the WV model) — all possible incorporation sites with finite lateral local coordination numbers are accepted with equal random probability. Although these two differences between the DT and the WV model have turned out to be crucial in distinguishing their asymptotic universality classes, the two models exhibit very similar growth behavior over a long transient pre-asymptotic regime. It is easy to incorporate an ES barrier in the DT (or WV) model by introducing differential probabilities $`P_u`$ and $`P_l`$ for adatom attachment to an upper and a lower step respectively — the original DT model has $`P_u=P_l`$, and an ES barrier is explicitly incorporated in the model by having $`P_l<P_u\le 1`$. We call this situation the DT-ES model (we use $`P_u=1`$ throughout with no loss of generality). We also note, as mentioned above, that within the DT-ES model the ES barrier ($`P_l<P_u`$) and the step-adatom attraction ($`P_u>P_l`$) are manifestly equivalent, and we therefore do not consider them as separate mechanisms. We note also that in some of our simulations below we have used the noise reduction technique, which has earlier been successful in limited mobility growth models in reducing the strong stochastic noise effect through an effective coarse-graining procedure.
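For concreteness, a schematic d=1+1 implementation of the DT deposition and diffusion rule described above is given below; it is a minimal sketch, not the production code used to generate the figures, and the DT-ES variant would additionally suppress any hop towards a lower step with probability $`P_l`$.

```python
import numpy as np

rng = np.random.default_rng(0)

def dt_growth(L=200, monolayers=200):
    """Schematic 1+1 d Das Sarma-Tamborenea growth: a deposited atom with no
    lateral bond may hop to a nearest-neighbour column giving it at least one
    lateral bond; all other atoms stick where they land (SOS, no desorption)."""
    h = np.zeros(L, dtype=int)
    for _ in range(monolayers * L):
        i = rng.integers(L)
        left, right = (i - 1) % L, (i + 1) % L          # periodic boundaries
        # lateral bonds at the incidence site (the atom would sit at height h[i]+1)
        bonds_here = int(h[left] >= h[i] + 1) + int(h[right] >= h[i] + 1)
        if bonds_here == 0:
            # candidate final sites: nearest-neighbour columns giving >= 1 lateral bond
            candidates = []
            for j in (left, right):
                jl, jr = (j - 1) % L, (j + 1) % L
                if (h[jl] >= h[j] + 1) or (h[jr] >= h[j] + 1):
                    candidates.append(j)
            i = candidates[rng.integers(len(candidates))] if candidates else i
        h[i] += 1
    return h

surface = dt_growth()
```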
In Figs. 1 and 2 we present our d=1+1 growth simulations, which demonstrate the point we want to make in this Letter. We show in Fig. 1 the simulated growth morphologies at three different times for four different situations, two of which (Fig. 1(a),(b)) have finite ES barriers and the other two (Fig. 1(c),(d)) do not. The important point we wish to emphasize is that, while the four morphologies and their dynamical evolutions shown in Fig. 1 are quite distinct in their details, they all share one crucial common feature: they all indicate mound formation, although the details of the mounds and the controlling length scales are obviously quite different in the different cases. The mere observation of a mounded morphology, which is clearly present in Figs. 1(c),(d), thus does not necessarily imply the existence of an ES barrier. To further quantify the mounding apparent in the simulated morphologies of Fig. 1, we show in Fig. 2 the calculated height-height correlation function, $`H(r)\equiv \langle h(𝐱)h(𝐫+𝐱)\rangle _𝐱^{1/2}`$, along the surface for two different times.
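The mound diagnostic can be computed directly from a height array; the sketch below evaluates the correlation $`\langle h(𝐱)h(𝐫+𝐱)\rangle _𝐱`$ (whose square root is the $`H(r)`$ defined above) and reads off a characteristic mound size from its first zero crossing. This is an illustrative implementation, not the analysis code behind Fig. 2.

```python
import numpy as np

def height_height_correlation(h):
    """G(r) = < h(x) h(x+r) >_x with heights measured relative to the mean and
    periodic boundaries; H(r) as defined in the text is G(r)**0.5 where G > 0.
    Oscillations (zero crossings) of G(r) signal a mounded morphology."""
    dh = h - h.mean()
    return np.array([np.mean(dh * np.roll(dh, r)) for r in range(len(dh) // 2)])

# Example on a synthetic mounded profile with mound separation ~40 lattice sites:
rng = np.random.default_rng(1)
h_example = np.sin(2 * np.pi * np.arange(400) / 40.0) + 0.1 * rng.standard_normal(400)
G = height_height_correlation(h_example)
mound_scale = np.argmax(G < 0)     # first zero crossing ~ quarter of the mound period
```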
All the calculated $`H(r)`$ show clear oscillations as a function of $`r`$, which by definition implies mound formation. It is indeed true that the considerable stochastic noise associated with the deposition process in the DT and WV models makes the $`H(r)`$-oscillations quite noisy, but there is no questioning the fact that oscillations are present in $`H(r)`$ even when there is no explicit ES barrier present in the growth model (Figs. 2(c),(d)). We have explicitly verified that such growth mounds (or equivalently $`H(r)`$ oscillations) are absent in the growth models which correspond to the generic Edwards-Wilkinson-Kardar-Parisi-Zhang universality class, and arise only in the DT and WV limited mobility growth models, which exhibit non-generic behavior with a large value of the roughness exponent $`\alpha `$. In fact, the effective $`\alpha `$ in the DT and WV models is essentially unity, which is the same as what one expects in a naive theoretical description of growth under the ES barrier (although the underlying growth mechanisms are completely different in the two situations). We believe that any surface growth involving a “large” roughness exponent ($`0.5<\alpha \le 1`$) will invariably show a “mounded” morphology independent of whether there is an ES barrier in the system or not. We contend that this effectively large $`\alpha `$ is the physical origin of the mounded morphology in semiconductor MBE growth, where one expects the surface diffusion driven linear or nonlinear conserved fourth order (in contrast to the generic second order) dynamical growth universality class to apply, which has the asymptotic exponents: $`\alpha `$ (d=1+1) $`\approx `$ 1; $`\alpha `$ (d=2+1) $`\approx `$ 0.67 (nonlinear), 1 (linear). One recent experimental paper, which reports the observation of mounded GaAs and InP growth with $`\alpha \approx 0.5-0.6`$, has explicitly made this case, and all the reported mound formations in semiconductor MBE growth are consistent with our contention that the mounds arise \[as in our Fig. 1(c),(d)\] from a large effective roughness exponent rather than a Schwoebel instability. The calculated ES barriers on semiconductor surfaces are invariably small, providing further support to our contention that mounding in semiconductor surface growth is not an ES barrier effect, but arises instead from the fourth order growth equations which have large roughness exponents. Two very recent experimental publications have reached the same conclusion in non-semiconductor MBE growth studies — in these publications spectacular mounded surface growth morphologies have been interpreted on the basis of the fourth order conserved growth equations.
Finally, in Figs. 3 and 4 we present our results for the physically more relevant d=2+1 nonequilibrium surface growth. In Fig. 3(a)-(c) we show the growth morphologies for the DT-ES, DT, and noise-reduced WV models, respectively, whereas in the main Fig. 3 we show the scaled height-height correlation function. It is apparent that all three models (one with an ES barrier and the other two without) have qualitatively similar oscillations in $`H(r)`$ indicating mounded growth, and the differences between the growth models are purely quantitative. Thus we come to the same conclusion: mound formation, by itself, does not imply the existence of an ES barrier; the details of the morphology obviously will depend on the existence (or not) of an ES barrier. We note that the effective values of the roughness exponent are very similar in Fig. 3(a) and (b), both being approximately $`\alpha \approx 0.5`$ (far below the asymptotic value $`\alpha \approx 1`$ expected in ES barrier growth — we have verified that this asymptotic $`\alpha \approx 1`$ is achieved in our simulations at an astronomically long time of $`10^9`$ layers).
The most astonishing result we show in Fig. 3 is the spectacular pyramidal mound formation in the d=2+1 noise reduced WV model (without any ES barrier). The strikingly regular pyramidal pattern (Fig. 3(c)) in our noise reduced WV model in fact exhibits a magic slope and strong coarsening behavior. The pattern is very reminiscent of the theoretical growth model studied earlier in ref. in the context of nonequilibrium growth under an ES barrier, where very similar patterns with slope selection were proposed as a generic scenario for growth under a Schwoebel instability. In our case of the noise reduced d=2+1 WV model of Fig. 3(c), there is no ES barrier, but there is strong cluster-edge diffusion, as explained schematically in Fig. 4. This strong edge diffusion (which obviously cannot happen in 1+1 dimensional growth) arises in the WV model (but not in the DT model) from the hopping of adatoms which have finite lateral nearest neighbor bonds (and are therefore the edge atoms in a cluster). This edge diffusion leads to an “uphill” surface current (discussed in entirely different contexts in ), which leads to the formation of the slope-selected pyramidal patterned growth morphology. While noise reduction enhances the edge current, strengthening the pattern formation (the uphill current is extremely weak in the ordinary WV model due to the strong suppression by the deposition shot noise), our results of Fig. 3 establish compellingly that the WV model in d=2+1 is, in fact, unstable (uphill current), in contrast to the situation in d=1+1. Thus, the WV model belongs to totally different universality classes in d=1+1 and 2+1 dimensions! We mention that in (unphysical) higher (e.g. d=3+1, 4+1, etc.) dimensions, the WV model would be even more unstable, forming even stronger mounds, since the edge diffusion effects will increase substantially in higher dimensions due to the possibility of many more configurations of nearest-neighbor bonding. We have therefore provided the explanation for the long-standing puzzle of an instability in high-dimensional (d $`>`$ 2+1) WV model simulations reported in the literature some years ago. More details on this phenomenon will be published elsewhere.
In conclusion, we have shown through concrete examples that, while a Schwoebel instability is certainly sufficient to cause a mounded surface growth morphology, the reverse (which has been almost universally assumed in the literature) is simply not true: an ES barrier is by no means necessary to produce mounds, and mound formation in nonequilibrium surface growth morphology does not necessarily imply the existence of a Schwoebel instability. In particular, we show that a large roughness exponent (without any ES barrier), as in the fourth order conserved growth universality class, produces mounded growth morphologies which are indistinguishable from those due to the ES barrier effect.
# The effect of radiative cooling on the X-ray properties of galaxy clusters.
## 1 Introduction
Clusters of galaxies are the largest virialised structures in the Universe, evolving rapidly at recent times in many popular cosmological models. Even at moderate redshifts the number of large dark matter halos in a cold dark matter Universe with a significant, positive cosmological constant is higher than in a standard cold dark matter Universe. It is precisely because both the number density and the size of large dark matter halos evolve at different rates in popular cosmological models that observations of galaxy clusters provide an important discriminator between rival cosmologies.
The advent of X-ray satellites opened up a whole new area in observational astronomy. Hot gas, typically at temperatures of $`10^{7-8}K`$, sitting in the deep potential wells of galaxy clusters emits radiation via thermal bremsstrahlung. This emission is heavily biased towards the central regions of the cluster because the flux is weighted as the gas density squared. Given the gas temperature and the X-ray surface brightness, the gas column density and a spherically symmetric gas density profile can be estimated. If the hot gas is assumed to reside close to hydrostatic equilibrium within the dark matter halo, the underlying dark matter density distribution can be derived.
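The step from measured gas density and temperature profiles to an enclosed mass uses the standard hydrostatic equilibrium relation; the sketch below is the textbook estimator rather than code from this paper, and the mean molecular weight $`\mu =0.6`$ is an assumed value for a fully ionised plasma.

```python
import numpy as np

k_B = 1.381e-16      # Boltzmann constant [erg/K]
G = 6.674e-8         # gravitational constant [cm^3 g^-1 s^-2]
m_p = 1.673e-24      # proton mass [g]
mu = 0.6             # mean molecular weight (assumed)

def hydrostatic_mass(r, rho_gas, T):
    """Enclosed mass from hydrostatic equilibrium of the hot gas:
    M(<r) = -(k_B T r)/(G mu m_p) * (dln rho_gas/dln r + dln T/dln r).
    r in cm, rho_gas in g/cm^3, T in K; returns mass in grams."""
    lnr = np.log(r)
    dlnrho = np.gradient(np.log(rho_gas), lnr)
    dlnT = np.gradient(np.log(T), lnr)
    return -(k_B * T * r) / (G * mu * m_p) * (dlnrho + dlnT)
```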
Since the early work of Abramopoulos & Ku (1983) and Jones & Forman (1984) on the radial density profiles of large galaxy clusters, debate has raged about the presence or absence of large, $`250h^{-1}kpc`$, constant density cores in the X-ray emitting gas. Unfortunately X-ray imaging is notoriously difficult because of the inherently large beam size. In addition, the centre of the emission region is not easy to determine and any centering error also acts to smooth out any central increase when the results are azimuthally averaged and plotted as a radial profile (Beers & Tonry 1986). More recent high resolution images of several clusters have helped to resolve this controversy. Some galaxy clusters do indeed appear to exhibit a large, resolved core but others have a much smaller core, close to the resolution threshold of the instrument. Both White, Jones & Forman (1997) (who studied 207 clusters imaged by the Einstein observatory) and Peres et al. (1998) (who looked at ROSAT observations of the flux-limited sample of clusters provided by Edge et al. 1990) suggest that this dichotomy relies on the presence or absence of a cooling flow. Clusters with a cooling flow appear to have small cores (around $`50h^{-1}kpc`$) whilst clusters without cooling flows have much larger cores. The work of Allen (1998) on the discrepancy between the large X-ray core radii and the small core radii deduced from strong lensing observations (e.g. Kneib et al. 1996) also reached the conclusion that cooling flow type clusters have small core radii in their matter distributions. Because the X-ray flux rises as the local density squared, the total X-ray emission from a cluster is very sensitive to the central density.
High resolution N-body simulations of galaxy clusters (Moore et al. 1998) produce a radial dark matter profile that has no core. The radial profile continues to rise until the resolution threshold is reached, well within the required radius if the gas is to trace the dark matter and still reproduce the X-ray observations. The production of a central constant density dark matter core has been a long standing problem for collisionless dark matter simulations (Pearce, Thomas & Couchman 1993, Navarro & White 1994), although in previous work the resolution threshold was still close to the observed core sizes and so until the latest high resolution studies this was still a tentative result.
Recently several groups have tried to reconstruct the radial temperature profile of galaxy clusters. Markevitch et al. (1998) used ASCA data from 30 clusters and concluded that the temperature falls steeply at large radii. However, Irwin, Bregman & Evrard (1999) analysed ROSAT PSPC images of 26 clusters and concluded that the radial temperature profiles were generally flat out to the virial radius.
The inclusion of a gaseous component into simulations allows the previously assumed relationship between the gas and the dark matter to be derived directly. The first study of this type (Evrard 1990) was carried out without the effects of radiative cooling but reproduced well many theoretical predictions such as a bias between the dark matter and the baryonic material. Similar non-cooling simulations have proved popular (Cen & Ostriker 1994, Kang et al. 1994, Bryan et al. 1994, Navarro, Frenk & White 1995, Bartelmann & Steinmetz 1996, Eke, Navarro & Frenk 1998, Bryan & Norman 1998) and such a simulation formed the basis of the Santa Barbara project in which a dozen groups simulated the formation of the same galaxy cluster (Frenk et al. 1999).
Extending this work to include a dissipative component has proved difficult because the formation of galaxies introduces many additional physical effects. Metzler & Evrard (1994) and Evrard, Metzler & Navarro (1996) circumvented this by introducing a galactic component by hand into a simulation that did not follow radiative cooling of the gas, in order to study the effects of feedback on the cluster profile. Fully consistent attempts to follow the radiative cooling of the hot intra-cluster gas have only recently become achievable because of the extra computational overhead involved. Early attempts to include the effects of radiative cooling (Thomas & Couchman 1992, Katz & White 1993, Evrard, Summers & Davis 1994, Frenk et al. 1996) either suffered from poor resolution, focussed on the galactic population, or suffered from overmerging effects.
In a now classic paper Katz & White (1993) examined the effect of radiative cooling on the X-ray profile of a single galaxy cluster, a study repeated recently by Lewis et al. (1999). Their simulated cluster has properties that are not observed: a Virgo-sized cluster with a super-massive central galaxy and an enormous associated cooling flow of $`400\mathrm{M}_{\odot }/\mathrm{yr}`$. As they point out, massive brightest cluster galaxies of this size are observed within the Universe, as are massive cooling flows; however, they fail to stress their rarity, particularly in objects of similar size to the Virgo cluster. More recently Suginohara & Ostriker (1998) simulated a different cluster and also produced an object which has properties they themselves admit are unobserved – “The high resolution simulation resulted in a gas density profile steeply rising toward the center, with consequent very high X-ray luminosity; however, these properties are not observed”. They suggest that feedback of energy from supernovae might account for the discrepancy. In this paper, we obtain quite different results because, unlike previous studies, a realistic fraction of the baryonic material has cooled to form galaxies. Large, bright central cluster galaxies can have a dramatic effect on the cluster potential and consequently on the X-ray properties. A reasonable treatment of these objects is extremely important in studies of this type. To prevent too much material cooling to form the central object within each halo we employ a modified form of smoothed-particle hydrodynamics — this will be described further in Section 2, below.
If the effects of cooling are included in the models then one might expect the entropy of the intracluster medium to decrease. Paradoxically, several mechanisms have been suggested that may produce large, constant-density gas cores by *raising* the entropy of the gas at the centre of the cluster:
* Radiative cooling is very efficient in small dark matter halos because the cooling time is less than the dynamical time (White & Rees 1978). These knots of cold, dense gas can be equated with proto-galaxies and, as they are dense and collapsing, they may be reasonably expected to produce stars (Katz 1992). Star formation leads to energy feedback into the interstellar medium via supernovae explosions (Katz 1992, Mihos & Hernquist 1994, Navarro & White 1994, Gerritsen & Icke 1997). Unless the gas immediately recools, this heating acts to increase the entropy of the surrounding material, pushing it onto a higher adiabat and preventing it settling to the very high densities and temperatures required for it to trace the underlying dark matter (Wu, Fabian & Nulsen 1998).
* The presence of galaxies orbiting within the cluster potential acts to stir up the gas, heating it as friction and turbulence dissipate the galaxies’ velocity, simultaneously producing velocity and spatial bias in the galaxy distribution (Frenk et al. 1996). This effect is most pronounced in the centre of the cluster.
* A third mechanism for producing a core is that radiative cooling of the gas at the centre of each potential well acts as a drain on the low entropy material (which cools preferentially). If the remaining gas cannot cool rapidly enough, a core would develop because only high entropy material remains, an effect postulated in the first paper to include radiative cooling (Thomas & Couchman 1992) and later reiterated by Waxman & Miralda-Escude (1995) and Bower (1997). It is this mechanism that we investigate in this paper. We show that the entropy of the intracluster medium is indeed increased, that this leads to a greatly reduced X-ray luminosity, but that it does *not* give a large, constant-density core.
The remainder of this paper is laid out as follows: in Section 2 we present the large hydrodynamical simulations, both with and without the effects of radiative cooling, that we have performed; in Section 3 we extract radial density and temperature profiles for the 20 largest galaxy clusters within each simulation and contrast the profiles with the underlying dark matter distribution; this is followed in Section 4 by a discussion of our findings.
## 2 The simulations
The simulations that we have carried out use the adaptive particle-particle, particle-mesh (AP<sup>3</sup>M) method (Couchman 1991) coupled to the smoothed particle hydrodynamics (SPH) technique (Gingold & Monaghan 1977, Lucy 1977) to follow 2 million gas and 2 million dark matter particles in a box of side $`100Mpc`$ (Couchman, Thomas & Pearce 1995, Pearce & Couchman 1997).
We have performed simulations in two types of flat cold dark matter cosmology, one standard (SCDM) and one with a cosmological constant ($`\mathrm{\Lambda }`$CDM), with the same parameters assumed by Jenkins et al. (1998) ($`\mathrm{\Omega }=1.0`$, $`\mathrm{\Lambda }=0.0`$, h=0.5, $`\sigma _8=0.6`$ for the former and $`\mathrm{\Omega }=0.3`$, $`\mathrm{\Lambda }=0.7`$, h=0.7, $`\sigma _8=0.9`$ for the latter). The baryon fraction was set from Big Bang nucleosynthesis constraints, $`\mathrm{\Omega }_bh^2=0.015`$ (Copi, Schramm & Turner 1995), and we have assumed an unevolving gas metallicity of 0.3 times the solar value. These parameters produce a gas mass per particle of $`2\times 10^9\mathrm{M}_{\odot }`$ in each case and are summarised in Table 1. The dark matter particle mass is only slightly lower than that given by Steinmetz & White (1997—their Equation 9, but note that there is a typographical error) at which artificial 2-body heating balances radiative cooling. Thus we expect there to be some numerical heating in our simulations. However, as we have chosen to neglect real heat sources, such as supernovae in galaxies, this is of little importance and does not affect our conclusions, which concern the differences between runs with and without radiative cooling.
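As a consistency check on these numbers, the gas particle mass follows directly from the box size, the nucleosynthesis baryon density and the particle number; the $`h^2`$ factors cancel, so the value is the same in both cosmologies. The snippet below is illustrative arithmetic, not taken from the simulation code.

```python
# Gas particle mass implied by the quoted parameters (a consistency check).
rho_crit_h2 = 2.775e11        # critical density in units of h^2 Msun / Mpc^3
omega_b_h2 = 0.015            # baryon density from nucleosynthesis constraints
box = 100.0                   # box side in Mpc
n_gas = 2.0e6                 # number of gas particles

m_gas = omega_b_h2 * rho_crit_h2 * box**3 / n_gas   # the h^2 factors cancel
print(f"gas particle mass ~ {m_gas:.1e} Msun")      # ~2e9 Msun, as quoted
```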
Since we smooth over 32 SPH particles, the smallest gaseous object that can be effectively resolved has a mass of $`6.4\times 10^{10}\mathrm{M}_{\odot }`$. We employ a comoving $`\beta `$-spline gravitational softening equivalent to a Plummer softening of $`10h^{-1}kpc`$ for redshifts $`0<z<1.5`$ (2.5 for $`\mathrm{\Lambda }`$CDM)—at earlier times the softening has a fixed physical size, with the minimum SPH resolution set to match this. We note that a spatial resolution of $`10h^{-1}kpc`$ is over twice the typical scale length of elliptical galaxies and this may lead to enhanced tidal disruption, drag and merging within the largest clusters of objects. However, the force softening cannot be reduced further without introducing 2-body effects. A smaller softening would also lead to a further increase in the number of timesteps required; we already require around 10000 for each cooling run.
In addition to these two simulations, which both included the effects of radiative cooling, we repeated the $`\mathrm{\Lambda }`$CDM model without cooling. This simulation has input parameters close to those used by Eke, Navarro & Frenk (1998), who adopted the same cosmology; the resultant clusters look very similar and exhibit similar X-ray properties (see Fig. 7 for a comparison).
The properties of the galaxies in the two simulations with radiative cooling have been described in Pearce et al. (1999, 2000). They clearly demonstrate an acceptable match to both the spatial and the luminosity distribution of observed galaxies. This was achieved by employing three numerical approximations: a mass resolution of $`6.4\times 10^{10}\mathrm{M}_{\odot }`$ below which objects cannot cool efficiently, a length resolution (i.e. softening) of $`10h^{-1}kpc`$, and decoupling of the hot halo gas from the cold galactic gas. Improving the mass and/or length resolution would increase the fraction of cold gas in our simulations, producing galaxies that were too luminous to match the observations. This would then necessitate the introduction of feedback mechanisms that would over-complicate our model.
Decoupling the hot halo gas is an innovation that we feel vastly improves the ability of SPH to model fluids in which there are large density contrasts. Without it, hot gas particles have their density overestimated in the vicinity of cold gas and too much material cools to form the central galaxy. Normal cooling of the intracluster medium at temperatures above $`10^5`$K is still handled correctly, and galaxy-galaxy mergers and viscous drag on the galaxies as they orbit within the halo are retained. As Pearce et al. (1999) show, such a procedure produces a set of galaxies that fits the local K-band number counts of Gardner et al. (1997). The brightest cluster galaxies contained within the largest halos are not excessively luminous for a volume of this size, unlike those found in previous work (Katz & White 1993, Lewis et al. 1999). The fraction of the baryonic material that cools into galaxies within the virial radius of the large halos in our simulation is listed in Tables 3 & 4 and is typically around 20 percent. This is much less than the unphysically high value of 40 percent reported by Katz & White (1993). Decoupling of the hot phase thus produces a galactic population and a cold gas fraction that are well matched to the observations.
## 3 Results
### 3.1 Extracting objects
For the purposes of this paper we are interested in only the largest objects within each simulation, as only these contain sufficient mass to produce the deep potential wells required to retain hot, X-ray emitting gas. We centre each cluster on the peak of the hot gas density, a position that coincides with the centre of the X-ray emission. This prevents the introduction of an artificial constant-density core which may arise with any other choice of centre — for clusters with significant substructure the centre-of-mass can lie a long way from the centre of the X-ray emission.
The virial radius of each of our clusters was defined as the radius of the sphere, centred on the cluster, enclosing a mean overdensity of 178 for SCDM and 324 for $`\mathrm{\Lambda }`$CDM (Eke, Cole & Frenk 1996). Each catalogue was then cleaned by ordering it in size and deleting the smaller of any overlapping clusters. The 20 most massive clusters in each of the catalogues were then used for the work presented here. The properties of the clusters are presented in Tables 2–4. The 16 largest clusters found in the non-cooling simulation also appear in the list of the 20 largest clusters in the other two models. The index of the matching cluster from the non-cooling run is given in Tables 3 & 4.
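The spherical-overdensity definition used here can be implemented with a cumulative sum over particles sorted by distance from the chosen centre. The function below is a schematic version of that procedure; the array names are ours, and the overdensity is taken with respect to the mean matter density (our reading of the quoted thresholds), not a description of the actual cluster-extraction pipeline.

```python
import numpy as np

def virial_radius(radii, masses, overdensity, mean_density):
    """Radius of the sphere, centred on the cluster, enclosing the given mean
    overdensity with respect to `mean_density` (178 for SCDM, 324 for LCDM here).
    `radii` are particle distances from the chosen centre; `masses` match them."""
    order = np.argsort(radii)
    r = radii[order]
    m_enc = np.cumsum(masses[order])
    rho_enc = m_enc / (4.0 / 3.0 * np.pi * r**3)
    inside = np.where(rho_enc >= overdensity * mean_density)[0]
    return r[inside[-1]] if inside.size else 0.0
```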
Each of the extracted clusters was checked for substructure by comparing the centre of the X-ray emission to the median position of the particles within the virial radius, a statistic that has been shown to be a useful indicator of the presence of substructure by Thomas et al. (1998). The results of this test are shown in Tables 2–4. All the clusters with an offset of more than 7 percent of the virial radius (6 in each case) were noted and are shown as dotted lines on Figures 1, 3, 4, 5 & 6 and as open symbols on Figure 7.
### 3.2 Dark matter density profiles
The radial dark matter density profiles for the 20 largest objects in each cosmology are shown in Figure 1. Shown is the mean density within spherical shells, the innermost shell plotted in the Figure containing at least 64 particles. The dark matter profiles of those clusters without significant substructure are similar within each cosmology. In the non-cooling run, the innermost bin of each density profile shows a flattening. This is a resolution effect: the radial extent of the bin has to be very large in order to accommodate 64 particles.
Both gas and dark matter density profiles for the largest object in each of the runs are shown in Figure 2. The dark matter profile for the $`\mathrm{\Lambda }`$CDM run without cooling is reasonably well fit by an NFW profile (Navarro, Frenk & White 1997). With cooling the dark matter density profile of the clusters is not well fit by the NFW formula, because the asymptotic slope in the central regions is steeper than -1. This is because, once cooling has been implemented, the large galaxy that forms at the centre of each cluster acts to draw in more dark matter and steepen the profile significantly in the inner regions. A similar effect is seen for most of the other clusters, although in the cooling run several show a drop in density in the innermost bin. This indicates that the peak X-ray emissivity sometimes comes from a galaxy that is not located at the centre of the cluster.
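For reference, the NFW form against which these profiles are compared is given below, together with a minimal least-squares fit in log space; this is a sketch assuming binned profile arrays `r_bins` and `rho_bins`, with an arbitrary initial guess, rather than the fitting procedure actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def nfw(r, rho_s, r_s):
    """Navarro, Frenk & White (1997) profile: rho(r) = rho_s / [(r/r_s)(1+r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def fit_nfw(r_bins, rho_bins, p0=(1e7, 0.2)):
    """Fit the NFW form to a binned density profile in log space so that all
    radial bins carry comparable weight; returns (rho_s, r_s)."""
    log_model = lambda r, rho_s, r_s: np.log10(nfw(r, rho_s, r_s))
    popt, _ = curve_fit(log_model, r_bins, np.log10(rho_bins), p0=p0)
    return popt
```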
### 3.3 Gas entropy profiles
Gas entropy profiles (and also density and temperature profiles, below) were obtained using only those particles with a temperature exceeding $`\mathrm{12\hspace{0.17em}000}K`$. Typical gas temperatures exceed $`10^7K`$ for halos in this mass range and we wish to exclude cold gas which lies within galaxies or recently tidally disrupted objects. If the cold material were included, there would be a large density spike at the centre of each of the clusters (which all have a central galaxy). This object does not contribute to the X-ray emission because the material it contains is very cold compared to the surrounding hot halo (although the increased depth of the local potential can help to confine dense, hot gas which can affect the bolometric X-ray emission). The specific entropy profile is shown in Figure 3. We plot the quantity $`(T/K)/(\rho /\overline{\rho })^{2/3}`$, where $`T`$ is the temperature and $`\rho `$ the density, measured in units of the mean gas density, $`\overline{\rho }`$.
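A schematic of how the plotted quantity can be built from particle data is given below; the hot-gas cut follows the 12 000 K threshold quoted above, but the binning choices (20 logarithmic bins between 0.01 and 1 virial radii, simple particle averages) are our own assumptions rather than a description of the actual analysis.

```python
import numpy as np

def entropy_profile(r, T, rho, rho_mean_gas, r_vir, nbins=20, T_cut=1.2e4):
    """Radial profile of the entropy proxy (T/K)/(rho/rho_mean)^(2/3), using only
    hot gas particles (T > 12 000 K); radii are expressed in units of r_vir."""
    hot = T > T_cut
    s = T[hot] / (rho[hot] / rho_mean_gas) ** (2.0 / 3.0)
    x = r[hot] / r_vir
    bins = np.logspace(-2, 0, nbins + 1)
    idx = np.digitize(x, bins) - 1
    prof = np.array([s[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(nbins)])
    return 0.5 * (bins[:-1] + bins[1:]), prof
```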
Let us contrast the results for the $`\mathrm{\Lambda }`$CDM runs with and without cooling. Firstly, note that the entropy at the virial radii is very similar in each case—this is because cooling has had little effect at these large radii. Between the virial radius and about 0.2 times the virial radius (less for the largest cluster), the entropy profiles for the cooling run are shallower than in the non-cooling run. This confirms the hypothesis of Thomas & Couchman (1992) that cooling is able to *raise* the entropy of the intracluster medium by dragging in high-entropy material from the outer regions of the cluster.
Within about 0.2 virial radii, the entropy profiles again steepen—it is within this “cooling radius” that the cooling time is short enough to allow significant cooling of the gas within the lifetime of the cluster. By the time we get to the innermost bins in the Figure, there seems to be a spread in the entropy of the clusters in the cooling run, with some having higher entropy and some lower entropy than the corresponding clusters in the non-cooling run.
The SCDM run with cooling exhibits similar entropy profiles to the $`\mathrm{\Lambda }`$CDM run with cooling.
### 3.4 Gas density profiles
The radial gas density profiles are displayed in Figure 4. Without cooling these profiles look very similar to those obtained by Eke et al. (1998). Within about 0.1 virial radii, the gas profiles are significantly shallower than the corresponding dark matter profiles of Figure 1. This is indicative of the fact that the hot gas has a higher specific energy than the dark matter and, as previous authors have found (e.g. Navarro et al. 1995, Eke et al. 1998), there is more dark matter than gas (relative to the cosmic mean) within the virial radius. There is a general tendency for all the profiles to flatten considerably in the innermost bin. Once again, this is due to inadequate resolution and we do not attach any significance to it.
The effect of cooling is to *lower* the gas density at all radii within the virial radius. The suppression is greatest, a factor of three, at about 0.1 times the virial radius, roughly corresponding to the kink in the entropy profiles seen in Figure 3. Although the density gradients are shallower, they do not roll over into constant-density inner core regions. In fact, for the larger clusters, the density continues to rise further into the centre of the cluster than before, so that the central density is close to that in the non-cooling case.
The profile of the largest object in both gas and dark matter for each of the runs is shown in Figure 2. The arrow indicates a radius of 100 $`h^{-1}`$kpc. Without cooling, the gas density profile is shallower than that of the dark matter within 0.1 times the virial radius, but the inner, resolved slope of the density profile is still $`\rho \propto r^{-1}`$ with no sign of a constant-density core. As the temperature is approximately constant within this region (see Figure 5), the X-ray luminosity is convergent and dominated by emission from around 200 $`h^{-1}`$kpc (0.1 times the virial radius).
With cooling, the largest cluster exhibits a central density spike due to the presence of a massive central galaxy. This hot gas has a very steep radial density profile, $`\rho \propto r^{-3}`$, and would be classified observationally as a cooling flow of $`60h^{-2}\mathrm{M}_{\odot }`$/yr onto the central cluster galaxy. Between radii of about 40 $`h^{-1}`$kpc and 1 $`h^{-1}`$Mpc, the density profile is a power law, $`\rho \propto r^{-1.4}`$, steepening at larger radii. Thus the X-ray luminosity (excluding the cooling flow) comes from a much more extended region than in the non-cooling case.
In conclusion, the gas density has been reduced by the influx of high-entropy material, as expected. However, this has not given rise to constant-density inner cores. In fact, if anything, the density profiles now continue as a power-law closer into the centre of the clusters.
### 3.5 Radial temperature profiles
Radial temperature profiles are shown in Figure 5. The temperature profiles for the relaxed clusters from the $`\mathrm{\Lambda }`$CDM run without cooling are typical of those found in previous work (see for example Eke et al. 1998 and references therein). They rise inwards from the virial radius by about a factor of two, peaking at about 0.1 times the virial radius and then declining again, very gradually, in the cluster centre.
Cooling makes little difference to the temperature profiles, except that corresponding clusters in the $`\mathrm{\Lambda }`$CDM runs reach a *higher* peak temperature when cooling is implemented, due to the inflow of higher entropy gas. The temperatures are very similar at the virial radius, but are about 1.5 times higher at their peak than before. Two clusters show a precipitous decline in temperature in the cluster centre, one of these being the largest cluster—this is evidence for a cooling flow.
The SCDM results are very similar to those for $`\mathrm{\Lambda }`$CDM.
### 3.6 X-ray luminosity profiles
We follow Navarro et al. (1995) in using the following estimator for the bolometric X-ray luminosity of a cluster,
$$L_X=4\times 10^{32}\sum _i\left(\frac{\rho _i}{\overline{\rho }}\right)\left(\frac{T_i}{K}\right)^{\frac{1}{2}}\mathrm{erg}\,\mathrm{s}^{-1}$$
(1)
where the density is in units of overdensity (relative to the mean gas density in the box, 2.86$`\times 10^{-31}`$g/cm<sup>3</sup>) and the sum extends over all the gas particles with temperatures above $`12000K`$.
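In code, Eq. (1) amounts to a masked sum over the hot gas particles; the minimal sketch below uses array names of our own choosing and is not taken from the simulation code.

```python
import numpy as np

RHO_MEAN_GAS = 2.86e-31      # mean gas density in the box [g/cm^3], as quoted

def bolometric_lx(rho, T, T_cut=1.2e4):
    """Bolometric X-ray luminosity estimator of Eq. (1):
    L_X = 4e32 * sum_i (rho_i / rho_mean) * (T_i / K)^(1/2) erg/s,
    summed over hot gas particles (T > 12 000 K)."""
    hot = T > T_cut
    return 4.0e32 * np.sum((rho[hot] / RHO_MEAN_GAS) * np.sqrt(T[hot]))
```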
The total X-ray luminosity within the virial radius of each of the clusters is listed in Tables 2–4. For the simulation with cooling the clusters are several times less luminous than those from the corresponding non-cooling run. This contradicts the previous results of Katz & White (1993), Suginohara & Ostriker (1998) and Lewis et al. (1999) who all found the X-ray luminosity increased if cooling was turned on. The reason for the discrepancy is, once again, the fact that we have decoupled the hot and cold gas, thus greatly suppressing the cooling of the inflowing, high-entropy gas in our simulations compared to previous ones. This causes a large reduction in the mass of the brightest cluster galaxy compared to those produced by previous work. Our galaxies have reasonable luminosities, mass-to-light ratios and number counts for a volume of this size.
Note that estimates of X-ray luminosity from the non-cooling run are not really meaningful. A radiation rate of this magnitude can only be sustained for a short time before depleting the intracluster medium of gas. The cooling runs produce a more physically self-consistent X-ray luminosity because the radiative effects are taken into account. The X-ray luminosity within the cooling radius is approximately equal to the enthalpy of the gas divided by the age of the cluster.
We plot these bolometric luminosities as a function of radius for each of our clusters in Figure 6. Clearly the relative contribution to the total X-ray emission from different radii is very different for the cooling and non-cooling simulations. Without cooling all the relaxed clusters show very similar emission profiles, with only a small contribution to the total emission coming from the very centre. These profiles are mostly well resolved, as claimed by Eke et al. (1998) for simulations of clusters with this particle number.
Once radiative cooling is turned on the radial emission profiles span a much broader range. For two of the clusters, a central cooling flow type emission is clearly visible — contributing 50 percent and 80 percent of the total X-ray flux. Although the cooling flow region is not well-resolved, it cannot be much larger without depleting the intracluster medium of even more gas. For each of the other clusters, the radius enclosing half of the total emission is much larger than that for the simulation without cooling.
### 3.7 $`L_X`$–$`T_X`$ relation
There has been much debate in the literature centering on the X-ray cluster $`L_X`$ versus $`T_X`$ correlation. The emission weighted mean temperature is plotted against the bolometric luminosity within the virial radius for all our clusters in Figure 7. The filled symbols represent the relaxed clusters and the open symbols denote those clusters that show significant substructure.
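The temperatures plotted here are emission weighted; a simple way to form such a weighted mean from particle data, using a bolometric bremsstrahlung weight $`w_i\propto \rho _iT_i^{1/2}`$ consistent with Eq. (1), is sketched below. The weighting choice is our assumption, not a statement of the exact estimator used.

```python
import numpy as np

def emission_weighted_temperature(rho, T, T_cut=1.2e4):
    """Mean cluster temperature weighted by each hot particle's contribution to
    the bolometric bremsstrahlung emission, w_i ~ rho_i * T_i^(1/2)."""
    hot = T > T_cut
    w = rho[hot] * np.sqrt(T[hot])
    return np.sum(w * T[hot]) / np.sum(w)
```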
The effect of cooling is, in general, to slightly raise the temperature but to greatly reduce the X-ray luminosity. Exceptions are the cooling flow clusters where the large amount of emission from gas cooling onto the central galaxy gives rise to a lower temperature than in the non-cooling case. There are two of these, easily visible on the plot, in the $`\mathrm{\Lambda }`$CDM run; we have plotted their new locations, when the central cooling flow is omitted, using crosses linked to the old location via arrows. In both cases less than 2 percent of the hot, X-ray emitting particles were excised to make this calculation (606 for cluster $`\mathrm{\Lambda }1`$ and 20 for cluster $`\mathrm{\Lambda }5`$). In the case of $`\mathrm{\Lambda }5`$ we have caught a transient event - a small amount of gas has been reheated by a merger between a satellite and the brightest cluster galaxy. This gas is in the process of rapidly cooling back onto the central object, emitting large amounts of X-rays.
All 3 sets of clusters display a positive correlation between $`L_X`$ and $`T_X`$, although there are insufficient numbers to tie the trend down very tightly. It is clear from the comments in the preceding paragraph that the nature of the correlation depends critically upon whether one removes the cooling flow emission or not. We believe that a clearer picture arises if this is done.
The regression line in Figure 7 is from Eke et al. (1998) and corresponds to $`L_X\propto T_X^2`$. Our non-cooling clusters fit reasonably well with this relation. The cooling clusters lie below this line. Given that we expect cooling to be less important in the most massive clusters (the absolute value of the cooling time and the ratio of the cooling time to the dynamical time both increase with cluster mass), we would expect the clusters with radiative cooling to lie closer to the regression line at higher $`T_X`$. Thus the effect of cooling should be to steepen the $`L_X`$–$`T_X`$ relation. We hope to test this with more simulations of higher mass clusters.
Also plotted in Figure 7 are the observed data from David et al. (1995). Our clusters are smaller and cooler because they are not very massive (due to our relatively small computational volume); thus it is hard to assess whether the cooling or the non-cooling clusters give a better fit to the data. SCDM clusters, at least with the simulation parameters we have chosen in this paper, do not seem to provide such a good match to the data.
## 4 Conclusions and discussion
We have performed two N-body plus hydrodynamics simulations of structure formation within a volume of side $`100Mpc`$, including the effects of radiative cooling but neglecting star formation and feedback. By repeating one of the simulations without radiative cooling of the gas, we can both compare to previous work and study the changes caused by the cooling. A summary of our conclusions follows.
(a) Without cooling our clusters closely resemble those found by previous authors (Eke et al. 1998 and references therein), with dark matter density profiles that closely follow the universal formula proposed by Navarro et al. (1995).
(b) With cooling, the formation of a central galaxy within each halo acts to steepen the dark matter profile, supporting the conclusion of the lensing studies (e.g. Kneib et al. 1996) that the underlying potential that forms the lens has a small core. This galaxy may not be located exactly at the centre of the X-ray emission, sometimes being offset by up to $`50h^{-1}kpc`$. The inner slope of the density profile is then steeper than that suggested by NFW, closer to that found by Moore et al. (1998) from high resolution N-body simulations (but note that these did not include cooling gas).
(c) We confirm the results of Eke et al. (1998) (and previous studies quoted therein) that without radiative cooling the gas density profile turns over at small radii, that the radial temperature profile drops by a factor of two between its peak value and that obtained at the virial radius and that the baryon fraction within the virial radius is lower than the cosmic mean.
(d) Cooling acts to remove low-entropy gas from near the cluster centre, triggering the inflow of higher entropy material. The entropy excess compared with the non-cooling run is greatest at about 0.2 times the virial radius, because radiative cooling lowers the entropy of the gas near the centre of the cluster.
(e) We stress the importance of correctly modelling the central cluster galaxy. The resultant X-ray properties of the cluster are very dependent upon the centre, and if too much material cools into the base of the potential well, large amounts of hot, dense gas can be confined, producing enormous X-ray fluxes. We have specifically tailored our models to both globally cool a reasonable fraction of material and to circumvent problems encountered by SPH when faced with large density jumps, which can lead to high rates of gas cooling onto the central objects.
(f) In cooling clusters, the gas density is reduced, by a maximum of about a factor of three at 0.1–0.2 times the virial radius. The density profile more closely resembles a power-law than in the non-cooling run. A few clusters show a central density spike, indicative of a cooling flow onto the central cluster galaxy (e.g. Fabian, Nulsen & Canizares 1991).
(g) The temperatures of the cooling clusters show a significant fall between the point where the peak values are obtained (around 0.1 virial radii) and the virial radius. The observational evidence is somewhat divided here. Our results are in agreement with Markevitch et al. (1998), who find that the temperature profiles of galaxy clusters fall steeply. However, Irwin et al. (1999) recover isothermal temperature profiles out to the virial radius from an averaged sample of 26 ROSAT clusters. It is important to try to clear up this observational controversy, as isothermal temperature profiles are not seen in our models but are often used for X-ray mass estimates, measurements of $`\mathrm{\Omega }_b`$ (and hence $`\mathrm{\Omega }`$) and theoretical arguments for the $`L_X`$–$`T_X`$ relation.
(h) Cooling acts so as to *increase* both the mass-weighted and observed X-ray temperatures of clusters. The peak temperatures are raised by a factor of about 1.5 and the temperature gradient between the peak and the virial radius is correspondingly increased.
(i) The bolometric luminosity for the clusters with radiative cooling is around 3–5 times *lower* than for matching clusters without it. Except for the cooling flow clusters, the X-ray luminosity profile is less centrally concentrated than in the non-cooling case with a greater contribution coming from larger radii.
(j) Cooling flow clusters are easily distinguished from non-cooling flow clusters in the $`L_X`$$`T_X`$ plane. The former are more luminous and cooler than the latter (Fabian et al 1994, Allen & Fabian 1998). Some of this difference results from X-ray analysis methods (Allen & Fabian 1998) but may also be caused by actual physical differences in the mass distribution of clusters. We suggest that, while interpretation of this relation would be made simpler if the cooling flow were excluded before determining the X-ray properties of clusters, this should be done with caution.
(k) The clusters from the non-cooling simulation lie on the $`L_X\propto T_X^2`$ regression line of Eke et al. (1998), whereas those from the cooling run lie below it. We suggest that, as cooling is likely to be less important in more massive clusters, the effect of cooling will be to steepen the relation. This remains to be tested with simulations of more massive clusters.
(l) The very large core radii ($`250h^{-1}\mathrm{kpc}`$) observed in some clusters are not seen in our simulations. It is possible that such events are rare, or occur preferentially in massive clusters, in which case our cluster sample may simply be too small to contain such objects. Smaller core radii of around $`50h^{-1}\mathrm{kpc}`$ are close to our resolution limit and so are not ruled out by our results, although we find no evidence for a constant-density region in the centre of any of our clusters.
At this point, we should remind the reader that our simulations are designed merely to investigate the effect of radiative cooling on the X-ray properties of the intracluster medium and do not set out to explore the complete range of physical processes going on in clusters. In particular, we ignore the possibility of energy injection into the intracluster medium. Early star formation at high redshift and the subsequent energy feedback from supernovae could act to preheat the gas which falls into the potential wells of galaxy clusters, effectively raising its entropy to such a level that it cannot reside at the high densities required to trace the dark matter into the base of the potential well, and giving rise to the constant-density cores that our simulated clusters lack. Ponman, Cannon & Navarro (1999) argue for such preheating by examining ROSAT observations of 25 clusters.
The inclusion of cooling into a cosmological hydrodynamics simulation has proved highly successful. We have highlighted the necessity of cooling reasonable amounts of gas onto the centre of each galaxy cluster if sensible X-ray luminosity estimates are to be obtained. We have achieved this by a judicious choice of mass resolution and decoupling the hot and cold gas phases. A rigorous method of doing this in SPH is under development. We next intend to simulate more massive clusters in the same way, in order to extend our predictions into regions more accessible to observation.
## Acknowledgments
The work presented in this paper was carried out as part of the programme of the Virgo Supercomputing Consortium using computers based at the Computing Centre of the Max-Planck Society in Garching and at the Edinburgh Parallel Computing Centre. The authors thank NATO for providing NATO Collaborative Research Grant CRG 970081 which facilitated their interaction and the anonymous referee for suggesting significant improvements. PAT is a PPARC Lecturer Fellow.
# The Mass Function of an X-Ray Flux-Limited Sample of Galaxy Clusters
## 1 Introduction
Distribution functions of physical parameters of galaxy clusters can place important constraints on cosmological scenarios. Comparison of the mass function with analytical or numerical calculations can yield, e. g., the amplitude of the initial density fluctuations. Comparison of the X-ray luminosity, gas temperature or mass function in different redshift bins can give information about the cluster evolution.
Several authors have published an X-ray luminosity function, and a cluster temperature function has also been determined. A cluster gas mass function has been given by Burns et al. (1996) for an optically selected cluster sample. A gravitational mass function has previously been determined by Bahcall & Cen (1993), Biviano et al. (1993), and Girardi et al. Bahcall & Cen used the galaxy richness and velocity dispersion to relate to cluster masses from the optical side, and a temperature–mass relation to convert the temperature function of Henry & Arnaud (1991) to a mass function from the X-ray side. Biviano et al. and Girardi et al. used velocity dispersions for an optically selected sample to determine the mass function.
Since there is now high quality X-ray data available for the sample selection – using the ROSAT All-Sky Survey (RASS) – and detailed cluster analysis – using ROSAT and ASCA pointed observations – we have derived for the first time the galaxy cluster gravitational mass function using individually determined X-ray masses.
*This article will be published in the Proceedings of the 19<sup>th</sup> Texas Symposium on Relativistic Astrophysics, held in Paris (1998), and is also available at: http://www.xray.mpe.mpg.de/$``$reiprich/act/publi.html*
## 2 The Sample
Completeness of a cluster sample is essential for the construction of the mass function. We compiled the clusters from RASS-based cluster surveys of high completeness (REFLEX, NORAS) and also compared with other published catalogs. To avoid the high absorption and the crowded stellar field of the galactic plane, in which clusters are hardly recognized, only clusters with a galactic latitude $`|b|\ge 20.0\mathrm{°}`$ have been included. For the same reasons the area around the Magellanic Clouds has been excluded. In addition the Virgo cluster region has been excluded here. The sky coverage is 26,720 deg<sup>2</sup>.
We reanalysed the clusters using mainly ROSAT PSPC pointed observations and determined the X-ray flux $`f_\mathrm{X}`$(0.1–2.4 keV). 63 clusters have a flux greater than or equal to our adopted flux limit $`f_{\mathrm{X}_{\mathrm{lim}}}`$(0.1–2.4 keV)$`=2.0\times 10^{-11}\mathrm{erg}/\mathrm{s}/\mathrm{cm}^2`$. We call this cluster sample HiFluGCS (the Highest X-ray Flux Galaxy Cluster Sample). The distribution of HiFluGCS in galactic coordinates can be seen in Fig. 1.
## 3 Data Reduction and Analysis
We used mainly high exposure ROSAT PSPC pointed observations to determine the surface brightness profiles of the clusters, excluding obvious point sources. If no pointed PSPC observations were available in the archive or if clusters were too large for the field of view of the PSPC, we used RASS data. To calculate the gas density profile the standard $`\beta `$-model (equ. 1) has been used.
$$\rho _{\mathrm{gas}}(r)=\rho _{\mathrm{gas}}(0)\left(1+\frac{r^2}{r_\mathrm{c}^2}\right)^{-\frac{3}{2}\beta }$$
(1)
$$S_\mathrm{X}(R)=S_\mathrm{X}(0)\left(1+\frac{R^2}{r_\mathrm{c}^2}\right)^{-3\beta +\frac{1}{2}}$$
(2)
Fitting the corresponding surface brightness formula (equ. 2) to the observed surface brightness profiles gives the parameters needed to derive the gas density profile. To check if the often detected central excess emission (central surface brightness of a cluster exceeding the fit value) biases the mass determination we also fitted a double $`\beta `$-model of the form $`S_\mathrm{X}=S_{\mathrm{X}_1}+S_{\mathrm{X}_2}`$ and calculated the gas mass profile by $`\rho _{\mathrm{gas}}=\sqrt{\rho _{\mathrm{gas}_1}^2+\rho _{\mathrm{gas}_2}^2}`$. Comparison of the single and double $`\beta `$-model gas masses shows good agreement.
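To make this step concrete, the short sketch below fits the surface-brightness form of equ. (2) to a radial profile with `scipy.optimize.curve_fit`. The profile, its errors and the starting values are synthetic placeholders, not data from any HiFluGCS cluster.

```python
import numpy as np
from scipy.optimize import curve_fit

def beta_model(R, S0, rc, beta):
    """Single beta-model surface brightness, equ. (2)."""
    return S0 * (1.0 + (R / rc) ** 2) ** (-3.0 * beta + 0.5)

# Synthetic radial profile (placeholder values, illustration only)
R = np.linspace(0.02, 1.5, 60)                       # projected radius in Mpc
rng = np.random.default_rng(1)
S_true = beta_model(R, 1.0, 0.25, 0.67)
S_obs = S_true * (1.0 + 0.05 * rng.standard_normal(R.size))
S_err = 0.05 * S_true

popt, pcov = curve_fit(beta_model, R, S_obs, sigma=S_err,
                       p0=[1.0, 0.2, 0.7], absolute_sigma=True)
print("S_X(0) = %.3f, r_c = %.3f Mpc, beta = %.3f" % tuple(popt))
```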
We compiled the values for the gas temperature from the literature, giving preference to temperatures measured by the ASCA satellite. For clusters for which we did not find a published temperature we used the X-ray luminosity–temperature relation given by Markevitch (1998).
Assuming hydrostatic equilibrium the gravitational masses for the clusters can be determined. Plugging equ. 1 into the hydrostatic equation and assuming the intracluster gas to be isothermal yields the gravitational mass profile
$$M_{\mathrm{tot}}(r)=\frac{3kT_{\mathrm{gas}}r^3\beta }{\mu m_\mathrm{p}G}\left(\frac{1}{r^2+r_\mathrm{c}^2}\right).$$
(3)
Having acquired the gravitational mass profiles for the clusters, it is now important to determine the radius at which to measure the cluster mass. Simulations by Evrard et al. (1996) have shown that the assumption of hydrostatic equilibrium is generally valid within a radius where the mean gravitational mass density is greater than or equal to 500 times the critical density $`\rho _\mathrm{c}=4.7\times 10^{-30}\mathrm{g}\,\mathrm{cm}^{-3}`$, as long as clusters undergoing strong merger events are excluded. This radius we call $`r_{500}`$. We calculated the gravitational mass at $`r_{500}`$ and also at $`r_{200}`$, which is usually referred to as the virial radius. Using these definitions of the outer radius instead of a fixed length also allows the uniform treatment of clusters of different size. Using $`r_{500}`$ in general also saves us from extrapolating much beyond the significantly detected cluster emission.
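As an illustration of how equ. (3) is used in practice, the following sketch evaluates the isothermal mass profile and solves for $`r_{500}`$, the radius inside which the mean enclosed density equals 500 times the critical density. The temperature and β-model parameters are placeholder values chosen only for illustration.

```python
import numpy as np
from scipy.optimize import brentq

# Constants (cgs)
k_B = 1.380649e-16          # erg / K
G   = 6.674e-8              # cm^3 g^-1 s^-2
m_p = 1.6726e-24            # g
mu  = 0.61                  # mean molecular weight
keV = 1.1605e7              # K per keV
Mpc = 3.0857e24             # cm
Msun = 1.989e33             # g
rho_c = 4.7e-30             # g cm^-3 (h = 0.5)

def M_tot(r_mpc, T_keV, beta, rc_mpc):
    """Isothermal beta-model mass profile, equ. (3); r in Mpc, M in Msun."""
    r, rc = r_mpc * Mpc, rc_mpc * Mpc
    T = T_keV * keV
    return 3.0 * k_B * T * beta * r**3 / (mu * m_p * G * (r**2 + rc**2)) / Msun

def r_delta(delta, T_keV, beta, rc_mpc):
    """Radius (Mpc) where the mean enclosed density equals delta * rho_c."""
    def f(r_mpc):
        M = M_tot(r_mpc, T_keV, beta, rc_mpc) * Msun
        vol = 4.0 / 3.0 * np.pi * (r_mpc * Mpc) ** 3
        return M / vol - delta * rho_c
    return brentq(f, 0.05, 20.0)

# Placeholder cluster: T = 6 keV, beta = 0.67, r_c = 0.25 Mpc
r500 = r_delta(500.0, 6.0, 0.67, 0.25)
print("r_500 = %.2f Mpc, M(<r_500) = %.2e Msun"
      % (r500, M_tot(r500, 6.0, 0.67, 0.25)))
```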
## 4 Results
In Fig. 2 we show the X-ray luminosity function of HiFluGCS compared to luminosity functions of other cluster samples. There is good agreement between these determinations; if anything, HiFluGCS shows a marginally higher density.
In Fig. 3 we show the gravitational mass function for HiFluGCS for different definitions of the outer radius. Also shown are the mass functions obtained by Bahcall & Cen (1993) and Girardi et al. (1998) for an outer radius of $`3h_{50}^{-1}\mathrm{Mpc}`$. Comparing our $`r_{500}`$ mass function with that of Bahcall & Cen we find an increasing discrepancy towards lower mass clusters, up to a factor of 7–8. For the $`r_{200}`$ mass function this discrepancy becomes smaller for the lower mass clusters, but a discrepancy arises towards the high mass end. In order to be able to directly compare the mass function for the clusters in HiFluGCS with the two others shown in Fig. 3, we also determined the gravitational mass at a fixed radius of $`3h_{50}^{-1}\mathrm{Mpc}`$. Apart from the highest mass bin we almost exactly reproduce the mass function determined by Bahcall & Cen in this way. The value given by Girardi et al. lies a factor of 3–4 higher.
## 5 Discussion
Two major points are of concern when deriving the mass function:
1) The sample completeness and 2) the reliability of the mass estimates.
1) We compiled the clusters from RASS-based cluster surveys. These surveys are complete at the 90 % level. The incompleteness of these surveys is likely to be highest at fluxes close to their adopted flux limit, which is much lower (a factor of $`5`$) than the flux limit adopted for HiFluGCS. Additionally we checked further published X-ray cluster catalogs. Although some clusters still need to be checked in more detail, we conclude that HiFluGCS is essentially complete.
2) For the first time we have determined a mass function with cluster masses determined individually and in a homogeneous way for each cluster using high quality X-ray data. However, galaxy clusters are generally not spherically symmetric and hydrostatic equilibrium may not always be reached. Simulations by Schindler (1996) and Evrard et al. (1996) have shown independently that the determined and true mass do not differ dramatically ($`20\%`$) if extreme merger clusters are excluded. Clusters also may not always be isothermal. For instance Markevitch et al. (1998) find a general trend that the temperature decreases with increasing radius, with the effect that the assumption of isothermality leads to an overestimation of the gravitational mass in the outer parts ($`6`$ core radii) of clusters by $`30\%`$. However, average relative temperature profiles of cluster samples have also been found which are consistent with being isothermal, e. g., by Irwin et al. (1999).
## 6 Conclusions
By reanalysing the brightest clusters of RASS-based galaxy cluster surveys we constructed a complete X-ray flux-limited sample of galaxy clusters (HiFluGCS), the sky coverage being roughly 2/3 of the entire sky. We determined global physical parameters for the clusters using mainly high exposure ROSAT PSPC pointed observations. The luminosity function for HiFluGCS agrees well with previous determinations. We determined the mass function for the first time by individually determining the gravitational mass of each cluster in a homogeneous way, assuming hydrostatic equilibrium and isothermality. Comparison with previous determinations shows a strong discrepancy, especially towards lower mass clusters, which is mainly due to the definition of the outer radius. For comparison we also determined the cluster masses at a fixed radius of $`3h_{50}^{-1}\mathrm{Mpc}`$, and apart from the highest mass bin we almost exactly reproduce the mass function determined by Bahcall & Cen. However, we suggest using as the outer boundary a radius which depends on the mean gravitational mass density, e. g. $`r_{500}`$, $`r_{200}`$, in order to treat clusters of different size in a comparable way.
Another consequence of the definition of the outer radius becomes visible when one integrates the mass function to determine the mass density bound in galaxy clusters $`\rho _{\mathrm{bound}}`$. We find $`\rho _{\mathrm{bound}}`$ relative to the critical density $`\rho _\mathrm{c}`$ to be 1.0 % for clusters of masses $`2.5\times 10^{13}h_{50}^{-1}\mathrm{M}_{\odot }`$ and higher using $`r_{500}`$. The fraction increases slightly to 1.6 % for clusters of masses $`4.1\times 10^{13}h_{50}^{-1}\mathrm{M}_{\odot }`$ and higher when we use $`r_{200}`$. We find a larger increase if we formally calculate the fraction for a fixed radius of $`3h_{50}^{-1}\mathrm{Mpc}`$, which is 3.8 % for clusters of masses $`1.3\times 10^{14}h_{50}^{-1}\mathrm{M}_{\odot }`$ and higher. Despite these different results depending on the outer boundary, however, it is clear that only a small portion of the total mass in the Universe is bound in galaxy clusters as the largest collapsed entities, implying that most of the mass must be somewhere else.
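The bookkeeping behind such an integration can be sketched as follows, assuming a simple $`1/V_{\mathrm{max}}`$-type sum as the estimator; the masses and volumes below are placeholders, not the HiFluGCS values.

```python
import numpy as np

# Placeholder sample: cluster masses (Msun) and the maximum survey volumes
# (Mpc^3) within which each cluster would still exceed the flux limit
M = np.array([2e14, 5e14, 8e14, 1.2e15])
V_max = np.array([1e7, 5e7, 1.5e8, 4e8])

rho_bound = np.sum(M / V_max)                  # Msun / Mpc^3
rho_crit = 2.78e11 * 0.5 ** 2                  # Msun / Mpc^3 for h = 0.5
print("rho_bound / rho_c = %.3f %%" % (100.0 * rho_bound / rho_crit))
```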
# Dynamic and geometric alignment of CS2 in intense laser fields of picosecond and femtosecond duration
## Abstract
CS<sub>2</sub> is identified as a molecule for which distinction can be made between dynamic and geometric alignment induced by intense laser fields. Measured anisotropic angular distributions of fragment ions arise from (i) dynamic alignment of the S-C-S axes along the laser polarization vector for 35-ps laser pulses and (ii) geometric alignment due to an angle-dependent ionization rate in the case of 100-fs pulses. Results of classical calculations of the alignment dynamics support our observations. By comparing mass spectra obtained with linearly- and circularly-polarized light it is not possible to distinguish between dynamic and geometric alignment.
Prevailing wisdom on the spatial alignment of molecules in intense laser fields has been challenged in two recent reports. The possibility of spatially aligning molecules using strong light has attracted much attention since the pioneering double-pulse experiments of Normand et al. and Dietrich et al. appeared to firmly establish that the field associated with intense, linearly-polarized laser light of picosecond duration induces sufficiently strong torques on an initially randomly oriented ensemble of linear diatomic molecules for reorientation of the internuclear axis to occur. The experimental manifestation of such alignment is the anisotropic angular distribution of fragments produced upon subsequent dissociative ionization of molecules: ion intensities are maximum in the direction of the laser polarization vector and minimum (frequently zero) in the orthogonal direction. Earlier work on diatomic molecules has been extended to triatomics and polyatomics, and recently to neutrals; much of the interest in the alignment of the internuclear axis (referred to in the current literature as dynamic alignment) in molecules has been generated by the tantalizing possibilities of pendular-state spectroscopy and coherent control experiments. Now, results of experiments conducted by Posthumus et al. and by Ellert and Corkum offer indications that when a linearly-polarized light field acts on molecules whose constituent atoms are heavy (such as iodine-containing diatomics and polyatomics), laser-induced dynamic alignment may not occur. The angular distributions of the products of dissociative ionization in such cases might be determined essentially by the dependence of the ionization rate on the angle made by the laser polarization vector with the molecule’s symmetry axis (conventionally referred to as photoionization anisotropy). For a given value of laser intensity, the rate of ionization is largest for those molecules whose internuclear axis lies parallel to the direction of the laser polarization vector. The observed anisotropy of the fragment ion angular distribution is therefore determined by a purely geometric effect, namely the angle made by the molecule with the light field direction. Moreover, this increase in ionization rate maximizes at a critical internuclear distance at which the least bound electron localizes on one atomic core and the field of the other core adds to the laser field, causing the elongated molecule to field ionize. Posthumus et al. have presented a classical model for such enhanced ionization in which it is not necessary to invoke dynamic alignment in order to account for anisotropic angular distributions of the products of laser-field-induced dissociative ionization of I<sub>2</sub>. In the light of these developments, it is clearly important to reassess the contribution of dynamic and geometric alignment to the observed anisotropy of angular distributions of fragment ions, especially when femtosecond-duration light pulses are used to field ionize molecules.
Proper theoretical insight into the extent of alignment obtained when molecules are irradiated by intense laser fields is difficult to attain due, essentially, to unknown values of polarizabilities and hyperpolarizabilities of the gamut of electronic states that might be accessed in the course of dissociative ionization. As noted by Ellert and Corkum, it is of much importance to experimentally assess the extent to which the angular anisotropies measured in earlier picosecond and femtosecond experiments are actually due to dynamic alignment. To this end we report here the results of experiments on the linear triatomic, CS<sub>2</sub>, using picosecond laser beams (35 ps, 532 nm) in the intensity range 10<sup>13</sup> W cm<sup>-2</sup> and femtosecond beams (100 fs, 806 nm) in the range 10<sup>13</sup> W cm<sup>-2</sup>–10<sup>15</sup> W cm<sup>-2</sup>. CS<sub>2</sub>, along with its valence isoelectronic companion, CO<sub>2</sub>, is the archetypal triatomic system that has been subjected to many experimental studies. In the context of the present work, it also represents a species on the boundary between ‘heavy’ molecules (such as I<sub>2</sub> and its derivatives) on the one hand, and lighter species (like H<sub>2</sub>, N<sub>2</sub>) on the other. On the basis of our experiments, we identify CS<sub>2</sub> as a molecule that undergoes dynamic alignment when irradiated by long (35-ps) pulses; on the other hand, 100-fs pulses give rise to anisotropic fragment ion distributions that can be accounted for in terms of geometric alignment.
In our femtosecond experiments, light pulses (of wavelength 806 nm) were obtained from a high-intensity, Ti:S, chirped pulse amplified, 100-fs laser operating at 10 Hz repetition rate. The laser light was focused using a biconvex lens, of focal length 10 cm, in an ultrahigh vacuum chamber capable of being pumped down to a base pressure of 3$`\times `$10<sup>-11</sup> Torr. We used operating pressures of $``$8$`\times `$10<sup>-8</sup> Torr (i.e. well below the pressure at which space charge effects will alter the results). Ions produced in the laser-molecule interaction zone were analyzed by a two-field, linear time-of-flight (TOF) spectrometer. To study the spatial distribution of ions produced in the focal zone, apertures of different sizes were inserted before the detector in order to spatially limit the interaction volume being sampled. In the present series of measurements we used circular apertures, of 2 mm and 15 mm diameter, centered about the focal point. In the former case only the Rayleigh range (2.4 mm) was sampled while in the latter instance the lowest intensity accessed was 5$`\times `$10<sup>12</sup> W cm<sup>-2</sup> for a peak laser intensity of 2$`\times `$10<sup>15</sup> W cm<sup>-2</sup>. Details of our apparatus and methodology are presented elsewhere . Our picosecond experiments used the second harmonic from an Nd:YAG laser producing 35-ps long light pulses. Here, the ions formed were analyzed by either a quadrupole mass spectrometer or a TOF device. This apparatus has also been described in a number of earlier publications .
Typical angular distributions measured for S<sup>+</sup> and S<sup>2+</sup> fragments are shown in Fig. 1. The polarization angle was varied by means of a halfwave plate, with on-line monitoring of the laser intensity to ensure a constant value in the course of measurements with different polarizations. The angular distributions for S<sup>+</sup> and S<sup>2+</sup> (and for higher charge states of S-ions that are not shown in the figure) are clearly very anisotropic, with many more ions being produced in the direction of the laser polarization vector than in an orthogonal direction. This holds for both 35-ps and 100-fs duration laser pulses. A priori, it is not possible to deduce whether the observed anisotropy is due to dynamic or geometric effects. Following the prescription articulated by Posthumus et al., we distinguish between dynamic alignment on the one hand and the effects of angle-dependent ionization rates (geometric alignment) on the other by probing the ratio of fragment ion yields obtained with orthogonal laser polarizations over a range of laser intensities. Fig. 2 depicts the variation with laser intensity of the ratio $`S_{\parallel }^{+}/S_{\perp }^{+}`$ (and the corresponding ratio for S<sup>2+</sup> and S<sup>3+</sup> ions), where the subscripts $`\parallel `$ and $`\perp `$ denote, respectively, the S<sup>+</sup> yield at angles of 0° and 90° between the laser polarization vector and the axis of the TOF spectrometer. In the case of geometric alignment, it would be expected that the $`\perp `$-component becomes enhanced as the laser intensity is increased. Consequently, the $`S_{\parallel }^{+}/S_{\perp }^{+}`$ ratio would fall with increasing laser intensity. Our 100-fs results indeed indicate this: significant falls occur in the $`S_{\parallel }^{+}/S_{\perp }^{+}`$, $`S_{\parallel }^{2+}/S_{\perp }^{2+}`$ and $`S_{\parallel }^{3+}/S_{\perp }^{3+}`$ ratios as the laser intensity is increased from 10<sup>13</sup> to 10<sup>15</sup> W cm<sup>-2</sup>. Geometric alignment clearly dominates the spatial alignment process in this case. However, Fig. 2 also shows that when 35-ps duration laser pulses are used, of intensity in the 10<sup>13</sup> W cm<sup>-2</sup> range, the opposite effect is observed. The $`S_{\parallel }^{+}/S_{\perp }^{+}`$ ratio now increases with laser intensity. Similar observations were also made for S<sup>2+</sup> ions. Dynamic alignment of CS<sub>2</sub> clearly occurs when we use longer-duration (35-ps) laser pulses.
In order to gain some intuitive insight into the different behavior obtained with short and long pulses, we have carried out calculations of the alignment dynamics by solving the classical equation of motion for a rigid rotor in an electric field (see (1)), for different laser intensities and pulse durations. These calculations provide information on the nature of the torques that are experienced by the molecule in the time evolution of the laser pulse. The interaction of the radiation field with CS<sub>2</sub> is, in the first approximation, governed by the molecular polarizability anisotropy, $`\alpha =\alpha _{\parallel }-\alpha _{\perp }`$, where the first and second terms refer, respectively, to polarizability components parallel to and perpendicular to the molecular bond. Following Landau and Lifshitz we express the angular acceleration of the internuclear axes as
$$\ddot{\theta }=-\frac{\alpha ϵ^2}{2I}\mathrm{sin}2\theta ,$$
(1)
where $`\theta `$ is the polar angle between the S-C-S axis and the light field, $`ϵ`$ is the field strength, and $`I`$ is the moment of inertia of the molecule about its centre of mass. We assume cylindrical symmetry and ignore higher-order terms involving $`\alpha ^2`$. There is no permanent dipole contribution since we take CS<sub>2</sub> to remain linear even in a strong external field. The alignment dynamics calculated by us are depicted in Fig. 3 for a range of laser intensities. The time-dependent light pulse is taken to be a gaussian multiplied by a cosine function (the intensity envelopes of our 100-fs and 35-ps pulses are shown as the solid lines in Fig. 3). As the light intensity increases, a torque is exerted on the molecule, causing reorientation along the polarization vector. Further increases in intensity lead to ionization, bond stretching and multiple electron ejection (and consequent dissociation). Since eqn. (1) only accounts for the first of these steps, the reorientation time that is obtained is to be regarded as a lower limit. Using 100-fs pulses at intensities $`<`$10<sup>14</sup> W cm<sup>-2</sup>, our results indicate that no significant reorientation of the S-C-S molecule occurs. For peak intensities in the 10<sup>14</sup> W cm<sup>-2</sup> range, there is significant reorientation; the angle changes from 0.75 to 0.45 radians in the time taken for the laser pulse to reach an intensity of $``$5$`\times `$10<sup>13</sup> W cm<sup>-2</sup>. Since this intensity is well above the S<sup>+</sup> appearance threshold (Fig. 2), the extent of reorientation is clearly overestimated in our calculations. For higher peak intensities ($``$10<sup>15</sup> W cm<sup>-2</sup>), our results indicate that although the torque generated by the laser field is sufficient to bring the molecule in line with the polarization vector, the angular velocity imparted in the process is large enough to cause oscillations about the polarization axis such that there is no overall alignment. Fig. 3b shows the corresponding calculations for 35-ps laser pulses. For peak intensities above 10<sup>12</sup> W cm<sup>-2</sup>, the molecular axis becomes coincident with the polarization vector very early in the time evolution of the laser pulse. Thereafter, small amplitude oscillations are seen. Such a system would be expected to show dynamic alignment, as is indeed seen in our experiments.
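A minimal numerical version of this calculation is sketched below: eq. (1) is integrated with `scipy.integrate.solve_ivp` for a Gaussian intensity envelope, using a cycle-averaged field. The polarizability anisotropy, moment of inertia and peak field are rough order-of-magnitude placeholders for CS<sub>2</sub> at 10<sup>13</sup> W cm<sup>-2</sup>, not the exact values used for Fig. 3.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rough placeholder parameters (order of magnitude only)
d_alpha = 1.1e-39      # polarizability anisotropy, C m^2 / V
I_mom   = 2.6e-45      # moment of inertia of CS2, kg m^2
E_peak  = 8.7e9        # peak field, V/m (~1e13 W/cm^2)
tau     = 100e-15      # pulse duration, s

def field2(t):
    """Cycle-averaged squared field of a Gaussian pulse envelope."""
    return 0.5 * (E_peak * np.exp(-(t / tau) ** 2)) ** 2

def rhs(t, y):
    theta, omega = y
    return [omega, -d_alpha * field2(t) / (2.0 * I_mom) * np.sin(2.0 * theta)]

# start at 0.75 rad, initially at rest
sol = solve_ivp(rhs, (-3 * tau, 3 * tau), [0.75, 0.0],
                max_step=tau / 200, rtol=1e-8)
print("final angle: %.3f rad (started at 0.75 rad)" % sol.y[0, -1])
```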
An obvious, but presently unavoidable, shortcoming in our calculations is in our use of $`\alpha `$-values that pertain to CS<sub>2</sub> in its ground electronic state. The value of $`\alpha _{}`$ will increase substantially with applied field, as the bond elongates and the electron density distribution is distorted from its ground-state morphology. No information exists on how, and to what extent, such enhancement occurs. In addition, contributions from the hyperpolarizabilities (which are expected to be substantial at these laser intensities) must also be considered in a proper description . Nevertheless, the qualitative insight that these model calculations yield is encouraging in that there is consistency with our experimental observations.
We note that the prescription used by Ellert and Corkum in order to distinguish between dynamic and geometric alignment in iodine and iodine-containing molecules was to measure the fragmentation pattern using linearly- and circularly-polarized light of intensities such that the same field strengths were obtained in both cases. Identical fragmentation patterns were taken as evidence against dynamic alignment. An assumption that is implicit in such an approach is that circularly-polarized light can be treated as a combination of two perpendicular, linearly-polarized components. However, on the basis of recent experiments, it is our contention that the dynamics resulting from irradiation of molecules by circularly-polarized light cannot, a priori, be expected to be a linear combination of the dynamical effects due to linearly polarized light aligned parallel and perpendicular to the molecular symmetry axis. Circularly-polarized light imparts angular momentum to the molecule that is being irradiated, whereas linearly-polarized light does not. How this might affect molecular dynamics in intense laser fields is an issue that has not been properly addressed. Experiments that we have recently conducted on intense-field-induced multiple ionization of N<sub>2</sub> reveal that the polarization state of the incident light affects the ionization spectrum in the following fashion: when using circularly-polarized light, we observe a distinct suppression of ionization channels compared to the situation pertaining to linearly-polarized light of the same field strength. Moreover, an enhancement of lower-energy pathways to dissociation is apparent in the case of circular polarization. We believe that this may reflect the importance of high angular momentum intermediate states that may be accessed when circularly-polarized light is used. Such states present “wider” centrifugal barriers to dissociative ionization; this manifests itself in the increasing importance of the longer-tunneling-time pathways that our data on $`\mathrm{N}_2^{q+}`$ ions indicate.
In the case of CS<sub>2</sub> molecules also, we observe significant differences in the pattern of dissociative ionization between circularly- and linearly-polarized light. By way of example, we show in Fig. 4 the fragmentation pattern obtained, at two laser intensities, using linearly-polarized light that is aligned parallel and perpendicular to the TOF axis, as well as with circularly-polarized light. The ion yield obtained with circularly-polarized light is uniformly lower than that obtained with linear polarization, parallel as well as perpendicular (the $`\mathrm{CS}_2^{+}`$ peak at the higher laser intensity is not taken into account in this comparison as the ion signal was saturated). Note that fragment ion yields obtained with parallel polarization are higher than corresponding yields obtained with circular polarization, even when the magnitudes of the electric field components in the latter are a factor of three larger than in the former. Hence, comparison of ion yields obtained with linear and circular polarization cannot give unambiguous evidence for or against dynamic alignment, and it is for this reason that we opt to rely on the data shown in Fig. 2 to make deductions about geometric alignment being responsible for the anisotropic angular distributions that are obtained for fragment ions when CS<sub>2</sub> molecules are immersed in 100-fs-long laser pulses.
Hitherto, discussions of polarization effects in molecules have tended to focus only on classical aspects of spatial alignment resulting from induced dipole moments in intense light fields. The results shown in Fig. 4 indicate that the polarization state of light is also of fundamental importance in a quantum mechanical sense in that it affects molecular ionization yields and dissociation pathways. It is clear that such considerations need to be incorporated in development of adequate descriptions of molecular dynamics in intense light fields.
We thank the Department of Science and Technology for substantial financial support for our femtosecond laser system and Vinod Kumarappan for many useful discussions.
# On the Origin of the Multiplicity Fluctuations in High Energy Heavy Ion Collisions
## I Introduction
Recently the subject of event-by-event fluctuations has attracted significant interest. On the theory side, it was motivated by a possible relation to thermodynamical observables, or as a background to the critical fluctuations expected at the so-called tricritical point . Experimentally, it was obviously stimulated by the near-perfect Gaussian shapes of the distributions observed by the NA49 experiment at CERN .
As emphasized in ref., all observables can be divided into two broad classes: “intensive” (e.g. mean energy or $`p_t`$ per particle) and “extensive” (e.g. total particle multiplicity) ones. The latter are sensitive not only to final-state interaction effects like resonance production, which has been discussed in detail elsewhere, but also to initial-state effects: on general grounds their contributions can be comparable. The simplest of the non-statistical effects is generated by the pure geometry of the collision: the distribution over impact parameter b in a range between 0 and some $`b_{max}`$ (depending on trigger conditions). This particular effect was recently discussed by Baym and Heiselberg, with the conclusion that it can account for the observed multiplicity fluctuations. The aim of this brief paper is to re-consider this calculation, and also to include other non-statistical fluctuations originating at the initial stage.
The central quantity to be discussed is the following ratio
$`{\displaystyle \frac{<\mathrm{\Delta }N_{ch}^2>}{<N_{ch}>}}\approx 2.0`$–$`2.2`$ (1)
where the r.h.s. is the NA49 value for the 5% centrality data to be used in this work. If secondary particle production were a purely independent process governed by a Poisson distribution, the r.h.s. would be 1. Correlations coming from resonance decays have been estimated to lead to a value of about 1.5. The main question addressed in this work is whether fluctuations in the initial collision do or do not explain the remaining part of the dispersion.
## II The model
We discuss three sources of the initial-state fluctuations: (i) the range of impact parameters b, as already mentioned; (ii) fluctuations in the number of participants due to punched-through “spectators”; (iii) fluctuations of the individual nucleons, or of the NN cross section.
Nuclear distributions are parameterized as usual
$`n(r)={\displaystyle \frac{n_0}{\mathrm{exp}[(r-R)/a]+1}}`$ (2)
with the usual parameters for Pb, $`n_0=0.17\mathrm{fm}^{-3}`$, R=6.52 fm, a=0.53 fm. As we see below, the non-zero a is important for the (ii) component. The probability for a nucleon to go through and become a spectator is the usual eikonal-type $`\mathrm{exp}(-\sigma _{NN}{\int }_{path}dz\,n(z))`$ formula. We used either the mean $`\sigma _{NN}`$ or a fluctuating one<sup>*</sup><sup>*</sup>*We remind the reader that at the high energies we consider, the nucleon has no time to reconfigure, and so all subsequent interactions of one nucleon take place with the same cross section. In the latter case we use a normal distribution, with the value of the dispersion taken from the literature. For collision energy $`E\approx 100\,\mathrm{GeV}`$ it is $`\mathrm{\Delta }\sigma _{NN}\approx 0.5<\sigma _{NN}>`$. We attribute a $`1/\sqrt{2}`$ fraction of this dispersion to each nucleon, as they obviously fluctuate independently before the collision.
We have simulated PbPb collisions in a b interval corresponding to 5% of the total cross section: the resulting distribution of the number of participants $`N_{part}`$ is shown in Fig. 1. Our main result (all effects included) is shown by the closed points; for comparison we have also plotted 3 other variants. If the fluctuations of the cross section are switched off, we get the distribution shown by the open points. Although its shape is similar, there is an overall shift toward the maximal $`N_{part}`$: if nucleons do not fluctuate, it is more difficult to punch through. A similar thing happens if the surface thickness a is put to zero (the solid line). However, the purely geometric “triangular” distribution over b, from 0 to its maximum, has a different shape and width: it is this naive distribution which was used in the calculation of Baym and Heiselberg. (We have found that for a larger centrality cut our results converge toward the geometric one, but for the small 5% cut it is clearly inadequate.)
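The simulation just described can be sketched in a few lines of Python: nucleon positions are sampled from the Woods-Saxon profile of eq. (2), each nucleon is assigned the eikonal punch-through probability evaluated from the thickness of the oncoming nucleus, and the non-punching nucleons are counted as participants. The cross section, the impact-parameter cut for 5% centrality and the neglect of the σ_NN fluctuations are simplifications made only for illustration.

```python
import numpy as np

n0, R, a = 0.17, 6.52, 0.53        # Woods-Saxon parameters for Pb (fm^-3, fm, fm)
sigma_nn = 4.0                      # NN cross section in fm^2 (~40 mb, placeholder)
rng = np.random.default_rng(0)

def sample_nucleons(n=208):
    """Sample nucleon positions (fm) from the Woods-Saxon density by rejection."""
    pts = []
    while len(pts) < n:
        x = rng.uniform(-12.0, 12.0, size=3)
        r = np.linalg.norm(x)
        if rng.uniform() < 1.0 / (np.exp((r - R) / a) + 1.0):
            pts.append(x)
    return np.array(pts)

def thickness(b):
    """Thickness integral int dz n(sqrt(b^2 + z^2)), in fm^-2."""
    z = np.linspace(-20.0, 20.0, 801)
    r = np.sqrt(b * b + z * z)
    return np.trapz(n0 / (np.exp((r - R) / a) + 1.0), z)

def n_participants(b_imp):
    """Participants in one Pb+Pb event at impact parameter b_imp (fm)."""
    npart = 0
    for _ in range(2):                               # the two (identical) nuclei
        nuc = sample_nucleons()
        d = np.hypot(nuc[:, 0] - b_imp, nuc[:, 1])   # distance to the other centre
        p_through = np.exp(-sigma_nn * np.array([thickness(x) for x in d]))
        npart += int(np.sum(rng.uniform(size=len(d)) > p_through))
    return npart

# crude "5% most central" selection: b^2 uniform up to (3.5 fm)^2 (placeholder)
b_vals = np.sqrt(rng.uniform(0.0, 3.5 ** 2, size=100))
npart = np.array([n_participants(b) for b in b_vals])
print("<N_part> = %.1f, sigma(N_part) = %.1f" % (npart.mean(), npart.std()))
```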
## III Results
Convoluting the distribution of the number of participants obtained above with the contribution of the $`finalstate`$ interaction effects, we get the final distribution over the observed number of charged particles $`N_{ch}`$:
$`{\displaystyle \frac{dN}{dN_{ch}}}={\displaystyle \int \frac{dN_{part}P(N_{part})}{(2\pi )^{1/2}\mathrm{\Delta }N_{ch}(N_{part})}e^{-\frac{[N_{ch}-<N_{ch}(N_{part})>]^2}{2\mathrm{\Delta }N_{ch}(N_{part})^2}}}`$ (3)
where the mean and dispersion are assumed to depend linearly on the number of participants $`N_{part}`$. In particular, we use
$`<N_{ch}(N_{part})>=CN_{part}`$ (4)
(with C determined from the mean observed multiplicity to be C=.75). For the dispersion we use the estimated effect of final state interaction in resonance gas , namely
$`\mathrm{\Delta }N_{ch}(N_{part})^2=1.5<N_{ch}(N_{part})>`$ (5)
The results are shown in Fig. 2. One can see that the distribution we obtain reproduces the data rather well, although it is somewhat narrower. Let us also note that, because only the width of the $`N_{part}`$ distribution matters, the other model distributions shown in Fig. 1 do equally well. The exception is the triangular one: due to its larger width it is closer to the data than ours. (This explains why Baym and Heiselberg obtained good agreement in their total width, but of course this does not justify it.)
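The convolution of eq. (3) and the resulting ratio (1) can be evaluated directly; in the sketch below the distribution P(N_part) is a toy Gaussian standing in for the simulated histogram of Fig. 1.

```python
import numpy as np

C = 0.75                                         # eq. (4)
npart = np.arange(300, 417)                      # toy support for N_part
P = np.exp(-0.5 * ((npart - 385) / 18.0) ** 2)   # placeholder P(N_part)
P /= P.sum()

nch = np.arange(0, 500)
mean = C * npart                                 # <N_ch(N_part)>, eq. (4)
var = 1.5 * mean                                 # Delta N_ch^2,   eq. (5)

# eq. (3): sum of Gaussian kernels weighted by P(N_part)
kern = np.exp(-0.5 * (nch[None, :] - mean[:, None]) ** 2 / var[:, None])
kern /= np.sqrt(2.0 * np.pi * var)[:, None]
dN_dNch = (P[:, None] * kern).sum(axis=0)
dN_dNch /= dN_dNch.sum()

m1 = (nch * dN_dNch).sum()
m2 = (nch ** 2 * dN_dNch).sum()
print("<Delta N^2>/<N> = %.2f" % ((m2 - m1 ** 2) / m1))
```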
## IV Summary, Discussion and Outlook
In summary, we have found that initial-state fluctuations contribute about 20% to the ratio (1), to be compared to 50% from random statistics and another 25% from final-state (resonance) correlations. In sum, they do indeed explain the data at about the 10% level, which we think is their accuracy level. Further progress would need much more work, including studies of the detector acceptance, etc.
The main physics conclusion is that, of the three effects listed at the beginning of section 2, the dominant one is clearly (ii), namely the fluctuations in the number of punched-through spectators. In contrast to Baym and Heiselberg, we do not find that the purely geometrical effect (i) is important for these particular data. We also found the fluctuations in the NN cross section (iii) to be relatively unimportant for the width of the final distribution, adding only a few percent to it.
As a discussion item, one may consider the remaining discrepancy between the data and our calculation. Even more than the width, one may address the origin of the asymmetry of the multiplicity distribution, well seen in Fig. 2: the right tail is larger than the left. However, before ascribing any physical significance to this small difference in width and/or the small asymmetry, the issue of acceptance should be better addressed.
For outlook, let us show that with the increasing multiplicity expected at RHIC/LHC (both because of larger multiplicity and larger detector coverage) the role of the initial-state fluctuations $`increases`$. The ratio (1) is constructed in such a way that statistical fluctuations always produce the same r.h.s at any multiplicity. However, non-statistical ones we discuss do not obey it. For example, if we assume the same collision but (quite arbitrarily) that the mean observed number of particles is 1000 (which means C=2.7 in (4)) we get from the exactly same calculation
$`{\displaystyle \frac{<\mathrm{\Delta }N_{ch}^2>}{<N_{ch}>}}\approx 2.57`$ (6)
to be compared with the value of about 1.9 above.
## V Acknowledgments
We thank Gunter Roland for useful discussions and for the multiplicity data obtained by the NA49 collaboration that we used. This work is supported by the US DOE under grant No. DE-FG02-88ER40388.
# Flowing sand - a physical realization of Directed Percolation
## 1 The Douady-Daerr experiment.
Glass beads (“sand”) of diameter $`250`$–$`425\mu \mathrm{m}`$ are poured uniformly at the top of an inclined plane (size $`\sim 1\mathrm{m}`$), covered by a rough velvet cloth; the angle of inclination $`\phi _0`$ can be varied. As the beads flow down, a thin layer of thickness $`h=h_d(\phi _0)`$, consisting of several monolayers, settles and remains immobile. At this thickness the sand is dynamically stable; the value of $`h_d`$ decreases with increasing angle of inclination.
For each $`\phi _0`$ there exists another thickness $`h_s`$ with $`h_s(\phi _0)>h_d(\phi _0)`$, beyond which a static layer becomes unstable. Hence there exists a region in the $`(\phi ,h)`$ plane in which a static layer is stable but a flowing one is unstable. We can now take the system that settled at $`h_d(\phi _0)`$ and increase its angle of inclination to $`\phi =\phi _0+\mathrm{\Delta }\phi `$, staying within this region of mixed stability. The layer will not flow spontaneously, but if we disturb it, generating a flow at the top, an avalanche will propagate, leaving behind a layer of thickness $`h_d(\phi )`$. These avalanches had the shape of a fairly regular triangle with opening angle $`\theta `$. As the increment $`\mathrm{\Delta }\phi `$ decreases, the value of $`\theta `$ decreases as well, vanishing as $`\mathrm{\Delta }\phi \to 0`$. This calls for testing a power law behavior of the form
$$\theta \sim (\mathrm{\Delta }\phi )^x.$$
(1)
If instead of increasing $`\phi `$ we lower the plane, i.e., go to $`\mathrm{\Delta }\phi <0`$, the thickness of our system, $`h_d(\phi _0)`$, is less than the present thickness of dynamic stability, $`h_d(\phi )`$. In this case an initial perturbation should not propagate; rather, it will die out after a certain time (or beyond a certain size $`\xi _{\parallel }`$ of the transient avalanche). As $`|\mathrm{\Delta }\phi |\to 0`$, we expect this decay length to grow with a power law:
$$\xi _{\parallel }\sim |\mathrm{\Delta }\phi |^{-\nu _{\parallel }}.$$
(2)
Hence by pouring sand at inclination $`\phi _0`$, DD produced a critical system, precisely at the borderline (with respect to changing the angle) between a stable regime $`\phi <\phi _0`$, in which perturbations die out, and an unstable one $`\phi >\phi _0`$, where perturbations persist and spread. The preparation procedure can be considered as a special kind of self-organized criticality (SOC) which differs from standard SOC models, in which a slow driving force (acting on a time scale much longer than that of the system’s dynamic response) causes evolution to a critical state. Here avalanches are started by hand one by one.
To associate this threshold phenomenon with DP, denote by $`p`$ the percolation probability and by $`p_c`$ its critical value. We associate the change in tilt with $`p-p_c`$, i.e., assume that near the angle of preparation $`\phi _0`$ the behavior of the sand system is related to a DP problem with $`\mathrm{\Delta }\phi \propto p-p_c`$. The exponent $`\nu _{\parallel }`$ should be compared with the known values for DP and CDP. The exponent $`x`$ in Eq. (1) can also be measured and compared with
$$\mathrm{tan}\theta \sim \xi _{\perp }/\xi _{\parallel }\sim (\mathrm{\Delta }\phi )^{\nu _{\parallel }-\nu _{\perp }}.$$
(3)
## 2 Definition of the model.
To write down a simple model based on the physics of the flowing sand, we adopt an observation made by DD, that in the regime of interest ($`\phi \approx \phi _0`$) grains of the top layer rest on grains of the layers below (rather than on other grains of the top layer). Hence the lower layers provide for the top one a washboard potential, as shown in Fig. 1.
We place the grains of the top layer on the sites of a regular square lattice with row index $`t`$ and columns $`i`$. At any given time a grain $`G`$ may become active if at least one of its neighbors from the row above has been active at the previous time step. If $`\mathrm{\Delta }E(G)`$, the total energy transferred from these neighbors, exceeds the barrier $`E_b`$ of the washboard, $`G`$ becomes active, “rolls down” and collides with the grains of the next row. The energy it brings to these collisions is $`1+\mathrm{\Delta }E(G)`$, where 1 is our unit of energy, representing the potential energy due to the height difference between two consecutive rows (see Fig. 1). A fraction $`f`$ of its total energy is dissipated; the rest is divided stochastically among its three neighbors from the lower row.
The model is defined in terms of two variables: an activation variable $`S_i^t=0,1`$ and an energy $`E_i^t`$. The index $`t`$ denotes rows of our square lattice and time; at time $`t`$ we update the states of the grains that belong to row $`t`$. The model is controlled by two parameters: $`E_b`$, the barrier height, and $`f`$, the fraction of dissipated energy.
The dynamic rules of our model are as follows. For given activities $`S_i^t`$ and energies $`E_i^t`$ we first calculate the energy transferred to the grains of the next row $`t+1`$. To this end we generate for each active site three random numbers, $`z_i^t(\delta )`$ (with $`\delta =\pm 1,0`$) that add up to 1. The energy transferred to grain $`(t+1,i)`$, given by
$$\mathrm{\Delta }E_i^{t+1}=(1-f)\underset{\delta =\pm 1,0}{\sum }S_{i-\delta }^tE_{i-\delta }^tz_{i-\delta }^t(\delta ),$$
(4)
determines its activation:
$$S_i^{t+1}=\{\begin{array}{cc}1& \text{active \hspace{0.33em}\hspace{0.33em} if }\mathrm{\Delta }E_i^{t+1}>E_b,\\ 0& \text{inactive if }\mathrm{\Delta }E_i^{t+1}\le E_b.\end{array}$$
(5)
Then the energies of the next row of grains are set:
$$E_i^{t+1}=S_i^{t+1}(1+\mathrm{\Delta }E_i^{t+1}).$$
(6)
The three random numbers $`z_i^t(\delta )`$ represent the fraction of energy transferred from the grain at site $`(t,i)`$ to the one at $`(t+1,i+\delta )`$. We add up the energy contributions from these active sites; the fraction $`1-f`$ is not dissipated; if the acquired energy $`\mathrm{\Delta }E_i^{t+1}`$ exceeds $`E_b`$, site $`(t+1,i)`$ becomes active, rolls over the barrier and brings to the collisions (at time $`t+2`$) the acquired energy calculated above and its excess potential energy (of value 1).
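A direct transcription of the update rules (4)-(6) into Python is given below; it propagates a single active seed down the lattice. The lattice width, the periodic transverse boundary and the use of a uniform Dirichlet distribution for the three fractions are our own illustrative choices; the model itself only requires that the three random numbers add up to 1.

```python
import numpy as np

def run_avalanche(E_b, f, width=2001, height=1000, E0=100.0, seed=0):
    """One avalanche of the model defined by eqs. (4)-(6)."""
    rng = np.random.default_rng(seed)
    S = np.zeros(width, dtype=bool)
    E = np.zeros(width)
    S[width // 2], E[width // 2] = True, E0
    activity = []
    for t in range(height):
        dE = np.zeros(width)
        active = np.flatnonzero(S)
        if active.size == 0:
            break                                    # avalanche has died out
        for i in active:
            z = rng.dirichlet(np.ones(3))            # three fractions adding up to 1
            for zk, d in zip(z, (-1, 0, 1)):
                dE[(i + d) % width] += (1.0 - f) * E[i] * zk   # periodic boundary
        S = dE > E_b                                 # eq. (5)
        E = np.where(S, 1.0 + dE, 0.0)               # eq. (6)
        activity.append(int(S.sum()))
    return activity

act = run_avalanche(E_b=0.30, f=0.5)
print("rows survived: %d, total activations: %d" % (len(act), sum(act)))
```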
## 3 Qualitative discussion of the transition and connection to the experiment.
Let us vary $`E_b`$ at a fixed value of the dissipation. For small values of $`E_b`$ an active grain will activate the grains below it with high probability; avalanches will propagate downhill and also spread sideways. For a strongly localized initial activation we should, therefore, see triangular shaped activated regions. As $`E_b`$ increases, the rate of activation decreases and the opening angle $`\theta `$ of these triangles should decrease, until $`E_b`$ reaches a critical value $`E_b^c`$, beyond which initial activations die out in a finite number of time steps (or rows). These expectations are indeed borne out by simulations of the model: the dependence of $`E_b^c`$ on the dissipation $`f`$ is shown in Fig. 2.
The physics of the process is captured by a simple mean-field type approximation, in which all stochastic variables are replaced by their average values. Consider an edge separating an active region from an inactive one. At time $`t`$ sites to the left of $`i`$ and $`i`$ itself are wet, whereas $`i+1,i+2,\mathrm{\dots }`$ are dry. Will site $`i`$ be wet or dry at the next time step? In our mean-field estimate of the answer, assuming that all wet sites at time $`t`$ have the same energy $`E^t`$, the energy delivered to site $`i`$ at time $`t+1`$ is $`\mathrm{\Delta }E_i^{t+1}=\frac{2}{3}(1-f)(1+\mathrm{\Delta }E^t)`$, where we set in Eq. (4) all $`z(\delta )=1/3`$. At the critical point we expect all energies to just suffice to go over the barrier; hence set $`\mathrm{\Delta }E_i^{t+1}=\mathrm{\Delta }E^t=E_b^c`$. Solving the resulting equation yields a rough estimate of the transition line,
$$E_b^c=2(1-f)/(1+2f),$$
(7)
as shown in Fig. 2. It is easy to produce better mean-field type estimates of the transition and to compute the corresponding energy profile of the wet region .
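The fixed-point argument can be checked in a few lines of Python: the snippet below iterates the mean-field map ΔE → (2/3)(1-f)(1+ΔE) for several values of f and compares the resulting fixed point with eq. (7).

```python
def ebc_mean_field(f):
    """Critical barrier from eq. (7)."""
    return 2.0 * (1.0 - f) / (1.0 + 2.0 * f)

for f in (0.1, 0.3, 0.5, 0.7, 0.9):
    dE = 0.0
    for _ in range(200):                     # iterate dE -> (2/3)(1-f)(1+dE)
        dE = (2.0 / 3.0) * (1.0 - f) * (1.0 + dE)
    print("f = %.1f: fixed point %.4f vs eq. (7): %.4f"
          % (f, dE, ebc_mean_field(f)))
```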
To connect our model to the DD experiments note that the tilt angle $`\phi `$ tunes the ratio between the barrier height and the difference of potential energies between two rows. When the system is prepared at $`\phi _0`$, this ratio is precisely $`E_b^c`$. When one increases the tilt angle to $`\phi >\phi _0`$, $`E_b`$ (measured in units of the potential difference) decreases and we have $`E_b<E_b^c`$. As the tilt angle is now reduced, the size of $`E_b`$ increases, until it reaches its critical value precisely at $`\phi _0`$. Thus increasing $`E_b`$ in the model corresponds to lowering the tilt angle towards $`\phi _0`$ where the system is precisely at its boundary of dynamic stability.
Hence to reproduce the experiment we were looking for (a) fairly compact triangular regions of activation for $`E_b<E_b^c`$, and (b) an opening angle of these triangles which should go to zero as $`E_b`$ approaches $`E_b^c`$ from below.
We simulated the model defined in Eqs. (4)-(6) and found that it indeed reproduces these qualitative features of the experiment (see Fig. 3). The avalanches shown were produced for dissipation $`f=0.5`$, activating a single site at $`t=0`$, to which an initial energy of $`E_0=100`$ was assigned. As long as $`E_b`$ was not too close to $`E_b^c`$ the observed avalanches were compact, triangular and with fairly straight edges. The edges became rough only very close to $`E_b^c`$, such as the one shown on the right hand side of Fig. 3. The opening angle of the active regions $`\theta `$ decreased as $`E_b`$ increased towards $`E_b^c`$, as indicated in the inset of Fig. 2. From these simulations we estimated $`E_b^c`$ and the exponent (see Eq. (3))
$$x=\nu _{\parallel }-\nu _{\perp }=0.98(5)\approx 1.$$
(8)
The linear variation of $`\mathrm{tan}(\theta )`$ with $`\mathrm{\Delta }\phi `$ is in agreement with experimental measurements . Our findings have to be compared with the mean-field theory suggested in Ref. which predicts a square root behavior.
## 4 Crossover to directed percolation.
The linear law (8) is consistent with the critical exponents of CDP
$$\nu _{\parallel }=2,\nu _{\perp }=1,\beta =0.$$
(9)
These observations pose, however, a puzzle: since one believes that DP is the generic situation, one would expect to find non-compact active regions and DP exponents. In fact, according to the DP conjecture any continuous spreading transition from a fluctuating active phase into a single frozen state should belong to the universality class of directed percolation (DP), provided that the model is defined by short range interactions without exceptional properties such as higher symmetries or quenched randomness. The present model has neither special symmetries nor randomness ; it has a fluctuating active phase and exhibits a transition, characterized by a positive one-component order parameter, into a single absorbing state. Hence the phase transition of our model should belong to the DP universality class.
In order to understand this apparent paradox we performed high-precision Monte-Carlo simulations for dissipation $`f=0.5`$ (see for further details). We performed time-dependent simulations , i.e., we toppled a single grain in the center of the top row and measured the survival probability $`P(t)`$ and the number of active sites $`N(t)`$. At criticality, these quantities exhibit an asymptotic power law behavior
$$P(t)\sim t^{-\delta },N(t)\sim t^\eta .$$
(10)
In the case of CDP these exponents are given by $`\delta =1/2`$ and $`\eta =0`$, whereas DP is characterized by the exponents $`\delta =0.1595`$ and $`\eta =0.3137`$. Detecting deviations from power-law behavior in the long-time limit we estimated $`E_b^c=0.385997(5)`$.
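In practice such crossovers are conveniently exhibited through effective exponents, e.g. the local slope -d ln P/d ln t. The helper below implements this estimator; the input curve is a synthetic crossover form interpolating between the CDP and DP values, not the actual simulation output.

```python
import numpy as np

def effective_exponent(t, y, window=10):
    """Local log-log slope d ln y / d ln t over a sliding window."""
    lt, ly = np.log(t), np.log(y)
    return np.array([np.polyfit(lt[i:i + window], ly[i:i + window], 1)[0]
                     for i in range(len(t) - window)])

# synthetic survival probability crossing over from CDP-like to DP-like decay
t = np.logspace(0.5, 4.5, 200)
P = 0.6 * t ** -0.5 * (1.0 + t / 3000.0) ** (0.5 - 0.1595)
slopes = effective_exponent(t, P)
print("early delta_eff = %.3f, late delta_eff = %.3f" % (-slopes[0], -slopes[-1]))
```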
Numerical results, obtained from simulations at $`E_b^c`$, are shown in Fig. 4. After a short transient the system enters an intermediate regime, which extends up to several hundred time steps. Here the active sites form a single cluster and we observe power-law behavior with CDP exponents (dotted lines in Fig. 4). This intermediate regime is followed by a long crossover from CDP to DP, extending over almost two decades up to more than $`10^4`$ time steps, after which the system enters an asymptotic DP regime (indicated by dashed lines in Fig. 4).
Compared with ordinary DP lattice models, this crossover regime is extremely long. We observed that by increasing $`f`$ the crossover time can be reduced by more than one decade. Hence, for an experimental verification of DP, systems with high dissipation are more appropriate. The present experiments correspond to about 3000 time steps (rows of beads); increasing this to about $`10^4`$ by using a longer inclined plane and smaller beads should yield DP behavior, provided that deviations of the experiment from the model do not increase with system size.
The crossover from CDP to DP is illustrated in Fig. 5. Two avalanches are plotted on different scales. The left one represents a typical avalanche within the first few thousand time steps. As can be seen, the cluster appears to be compact on a lateral scale up to 100 lattice sites. However, as can be seen in the right panel of Fig. 5, after very long time the cluster breaks up into several branches, displaying the typical patterns of critical DP clusters. Thus, before measuring critical exponents, this feature has to be tested experimentally. To this end the DD experiment should be performed repeatedly at the critical tilt $`\phi =\phi _0`$. In most cases the avalanches will be small and compact. However, sometimes large avalanches will be generated which reach the bottom of the plate. If these avalanches are non-compact, we expect DP-type asymptotic critical behavior. Only then is it worthwhile to optimize the experimental setup and to measure the critical exponents quantitatively.
AJD thanks the CICPB and the UNLP, Argentina and the Weizmann Institute for financial support. HH thanks the Weizmann Institute and the Einstein Center for hospitality and financial support. ED thanks the Germany-Israel Science Foundation for support, B. Derrida for some most helpful initial insights and A. Daerr for communicating his results at an early stage.
# Standard Model vs New Physics in Rare Kaon Decays

Work supported in part by the EEC-TMR Program, Contract N. CT98-0169.
## 1 Lepton-flavour violating modes
Decays like $`K_L\mu e`$ and $`K\pi \mu e`$ are completely forbidden within the SM, where lepton flavour is conserved, but they remain absolutely negligible even if we simply extend the model by including only Dirac-type neutrino masses. Positive evidence for any of these processes would therefore unambiguously signal new physics (NP), calling for non-minimal extensions of the SM. Moreover, as long as the final state contains at most one pion in addition to the lepton pair, the experimental information on the decay rate can easily be translated into precise information on the short-distance amplitude $`sd\mu e`$. In this respect we stress that $`K_L\mu e`$ and $`K\pi \mu e`$ provide complementary information: the first mode is sensitive to pseudoscalar and axial-vector $`sd`$ couplings, whereas the second one is sensitive to scalar, vector and tensor structures.
In exotic scenarios, like $`R`$-parity violating SUSY or models with leptoquarks, the $`sd\mu e`$ amplitude can be generated already at tree level. In this case naive power counting suggests that limits on $`B(K_L\mu e)`$ or $`B(K\pi \mu e)`$ at the level of $`10^{-11}`$ probe NP scales of the order of 100 TeV . On the other hand, in more “conservative” scenarios where the $`sd\mu e`$ transition can occur only at the one-loop level, it is more appropriate to say that the scale probed is around the (still remarkable!) value of 1 TeV. An interesting example of the second type of scenario is provided by left-right models with heavy Majorana neutrinos .
## 2 $`K\pi \nu \overline{\nu }`$
These decays are particularly fascinating since on one side, within the SM, their small but non negligible rates are calculable with high accuracy in terms of the less known Cabibbo-Kobayashi-Maskawa (CKM) angles . On the other side, the flavour-changing neutral-current (FCNC) nature implies a strong sensitivity to possible NP contributions, even at very high energy scales.
Within the SM the $`sd\nu \overline{\nu }`$ amplitude is generated only at the quantum level, through $`Z`$–penguin and $`W`$–box diagrams. Separating the contributions to the amplitude according to the intermediate up-type quark running inside the loop, one can write
$$𝒜(sd\nu \overline{\nu })=\underset{q=u,c,t}{\sum }V_{qs}^{*}V_{qd}𝒜_q\sim \{\begin{array}{cc}𝒪(\lambda ^5m_t^2)+i𝒪(\lambda ^5m_t^2)\hfill & (q=t)\hfill \\ 𝒪(\lambda m_c^2)+i𝒪(\lambda ^5m_c^2)\hfill & (q=c)\hfill \\ 𝒪(\lambda \mathrm{\Lambda }_{QCD}^2)\hfill & (q=u)\hfill \end{array}$$
(1)
where $`V_{ij}`$ denote the elements of the CKM matrix. The hierarchy of these elements would favor up- and charm-quark contributions; however, the hard GIM mechanism of the parton-level calculation implies $`𝒜_q\sim m_q^2/M_W^2`$, leading to a completely different scenario. As shown on the r.h.s. of (1), where we have employed the standard phase convention ($`\mathrm{}V_{us}=\mathrm{}V_{ud}=0`$) and expanded the CKM matrix in powers of the Cabibbo angle ($`\lambda =0.22`$) , the top-quark contribution dominates both real and imaginary parts. (The $`\mathrm{\Lambda }_{QCD}^2`$ factor in the last line of (1) follows from a naive estimate of long-distance effects.) This structure implies several interesting consequences for $`𝒜(sd\nu \overline{\nu })`$: it is dominated by short-distance dynamics and therefore calculable with high precision in perturbation theory; it is very sensitive to $`V_{td}`$, which is one of the least constrained CKM matrix elements; it is likely to have a large $`CP`$-violating phase; and it is very suppressed within the SM and thus very sensitive to possible NP effects.
The short-distance contributions to $`𝒜(sd\nu \overline{\nu })`$, within the SM, can be efficiently described by means of a single effective dimension-6 operator: $`O_{LL}^\nu =(\overline{s}_L\gamma ^\mu d_L)(\overline{\nu }_L\gamma _\mu \nu _L)`$. The Wilson coefficient of this operator has been calculated by Buchalla and Buras including next-to-leading-order QCD corrections (see also ), leading to a very precise description of the partonic amplitude. Moreover, the simple structure of $`O_{LL}^\nu `$ has two major advantages:
* the relation between partonic and hadronic amplitudes is quite accurate, since the hadronic matrix elements of the $`\overline{s}\gamma ^\mu d`$ current between a kaon and a pion are related by isospin symmetry to those entering $`K_{l3}`$ decays, which are experimentally well known;
* the lepton pair is produced in a state of definite $`CP`$ and angular momentum, implying that the leading SM contribution to $`K_L\pi ^0\nu \overline{\nu }`$ is $`CP`$ violating.
### 2.1 SM uncertainties
The dominant theoretical error in estimating $`B(K^+\pi ^+\nu \overline{\nu })`$ is due to the uncertainty of the QCD corrections to the charm contribution (see for an updated discussion), which can be translated into a $`5\%`$ error in the determination of $`|V_{td}|`$ from $`B(K^+\pi ^+\nu \overline{\nu })`$. This uncertainty can be considered as generated by ‘intermediate-distance’ dynamics; genuine long-distance effects associated to the up quark have been shown to be substantially smaller .
The case of $`K_L\pi ^0\nu \overline{\nu }`$ is even cleaner from the theoretical point of view . Indeed, because of the $`CP`$ structure, only the imaginary parts in (1), where the charm contribution is absolutely negligible, contribute to $`𝒜(K_2\pi ^0\nu \overline{\nu })`$. Thus the dominant direct-$`CP`$-violating component of $`𝒜(K_L\pi ^0\nu \overline{\nu })`$ is completely saturated by the top contribution, where the QCD uncertainties are very small (around 1%). Intermediate- and long-distance effects in this process are confined only to the indirect-$`CP`$-violating contribution and to the $`CP`$-conserving one, which are both extremely small. Taking into account also the isospin-breaking corrections to the hadronic matrix element , one can therefore write a very accurate expression (with a theoretical error around $`1\%`$) for $`B(K_L\pi ^0\nu \overline{\nu })`$ in terms of short-distance parameters:
$$B(K_L\pi ^0\nu \overline{\nu })_{\mathrm{SM}}=4.25\times 10^{-10}\left[\frac{\overline{m}_t(m_t)}{170\mathrm{GeV}}\right]^{2.3}\left[\frac{\mathrm{}\lambda _t}{\lambda ^5}\right]^2.$$
(2)
The high accuracy of the theoretical predictions of $`B(K^+\pi ^+\nu \overline{\nu })`$ and $`B(K_L\pi ^0\nu \overline{\nu })`$ in terms of the modulus and the imaginary part of $`\lambda _t=V_{ts}^{*}V_{td}`$ could clearly offer the possibility of very interesting tests of the CKM mechanism. Indeed, a measurement of both channels would provide two independent constraints on the unitarity triangle, which can also be probed by $`B`$-physics observables. In particular, as emphasized in , the ratio of the two branching ratios could be translated into a clean and complementary determination of $`\mathrm{sin}(2\beta )`$.
Taking into account all the indirect constraints on $`V_{ts}`$ and $`V_{td}`$ obtained within the SM, the present range of the SM predictions for the two branching ratios reads :
$`B(K^+\pi ^+\nu \overline{\nu })_{\mathrm{SM}}`$ $`=`$ $`(0.82\pm 0.32)\times 10^{-10},`$ (3)
$`B(K_L\pi ^0\nu \overline{\nu })_{\mathrm{SM}}`$ $`=`$ $`(3.1\pm 1.3)\times 10^{-11}.`$ (4)
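As a rough illustration of how Eq. (2) feeds into the number quoted in Eq. (4), the snippet below evaluates it for one assumed set of inputs; the Wolfenstein parameters and the top mass used here are illustrative guesses, not values taken from this paper, and the relation Im λ<sub>t</sub> ≈ A²λ⁵η is only the leading term of the Wolfenstein expansion.

```python
# Illustrative evaluation of Eq. (2); all inputs below are assumed values.
lam = 0.22                 # Cabibbo angle, as quoted in the text
A, eta_bar = 0.82, 0.38    # assumed Wolfenstein parameters
mt_bar = 166.0             # assumed MS-bar top mass in GeV

# Leading-order Wolfenstein expansion: Im(lambda_t) = Im(V_ts* V_td) ~ A^2 lam^5 eta
im_lambda_t = A**2 * lam**5 * eta_bar

BR_KL_pi0_nunu = 4.25e-10 * (mt_bar / 170.0)**2.3 * (im_lambda_t / lam**5)**2
print(f"{BR_KL_pi0_nunu:.2e}")   # ~2.6e-11, inside the range quoted in Eq. (4)
```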
Moreover, as pointed out recently in , a stringent and theoretically clean upper bound on $`B(K^+\pi ^+\nu \overline{\nu })_{\mathrm{SM}}`$ can be obtained using only the experimental information on $`\mathrm{\Delta }M_{B_d}/\mathrm{\Delta }M_{B_s}`$ to constrain $`|V_{td}/V_{ts}|`$. In particular, using $`(\mathrm{\Delta }M_{B_d}/\mathrm{\Delta }M_{B_s})^{1/2}<0.2`$ it is found that
$$B(K^+\pi ^+\nu \overline{\nu })_{\mathrm{SM}}<1.67\times 10^{-10},$$
(5)
which represents a very interesting challenge for the BNL-E787 experiment .
### 2.2 Beyond the SM: general considerations
As far as we are interested only in $`K\pi \nu \overline{\nu }`$ decays, we can roughly distinguish the extensions of the SM into two big groups: those involving new sources of quark-flavour mixing (like generic SUSY extensions of the SM, models with new generations of quarks, etc…) and those where the quark mixing is still ruled by the CKM matrix (like the 2-Higgs-doublet model of type II, constrained SUSY models, etc…). In the second case NP contributions are typically smaller than SM ones at the amplitude level (see e.g. for some recent discussions). On the other hand, in the first case it is possible to overcome the $`𝒪(\lambda ^5)`$ suppression of the dominant SM amplitude. If this is the case, it is then easy to generate sizable enhancements of $`K\pi \nu \overline{\nu }`$ rates (see e.g. and ).
Concerning $`K_L\pi ^0\nu \overline{\nu }`$, it is worthwhile to emphasize that if lepton-flavor is not conserved or right-handed neutrinos are involved , then new $`CP`$-conserving contributions could in principle arise.
Interestingly, despite the variety of NP models, it is possible to derive a model-independent relation among the widths of the three neutrino modes . Indeed, the isospin structure of any $`sd`$ operator bilinear in the quark fields implies
$$\mathrm{\Gamma }(K^+\pi ^+\nu \overline{\nu })=\mathrm{\Gamma }(K_L\pi ^0\nu \overline{\nu })+\mathrm{\Gamma }(K_S\pi ^0\nu \overline{\nu }),$$
(6)
up to small isospin-breaking corrections, which then leads to
$$B(K_L\pi ^0\nu \overline{\nu })<\frac{\tau _{_{K_L}}}{\tau _{_{K^+}}}B(K^+\pi ^+\nu \overline{\nu })\simeq 4.2B(K^+\pi ^+\nu \overline{\nu }).$$
(7)
Any experimental limit on $`B(K_L\pi ^0\nu \overline{\nu })`$ below this bound can be translated into a non-trivial dynamical information on the structure of the $`sd\nu \overline{\nu }`$ amplitude.
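The numerical factor in Eq. (7) is just the $`K_L`$ to $`K^+`$ lifetime ratio; a one-line check with assumed (approximate, PDG-style) lifetimes reproduces it:

```python
# Assumed approximate kaon lifetimes, not numbers taken from this paper.
tau_KL, tau_Kplus = 51.2e-9, 12.4e-9      # seconds
print(tau_KL / tau_Kplus)                 # ~4.1, close to the factor 4.2 of Eq. (7)
```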
### 2.3 SUSY contributions and the $`Z\overline{s}d`$ vertex
We will now discuss in more detail the possible modifications of $`K\pi \nu \overline{\nu }`$ decays in the framework of a generic low-energy supersymmetric extension of the SM, which represents a very attractive possibility from the theoretical point of view . Similarly to the SM, also in this case FCNC amplitudes are generated only at the quantum level, provided we assume unbroken $`R`$ parity and minimal particle content. However, in addition to the standard penguin and box diagrams, also their corresponding superpartners, generated by gaugino-squarks loops, play an important role. In particular, the chargino-up-squarks diagrams provide the potentially dominant non-SM effect to the $`sd\nu \overline{\nu }`$ amplitude . Moreover, in the limit where the average mass of SUSY particles is substantially larger than $`M_W`$, the penguin diagrams tend to dominate over the box ones and the dominant SUSY effect can be encoded through an effective $`Z\overline{s}d`$ coupling .
The flavour structure of a generic SUSY model is quite complicated and a convenient model-independent parameterization of the various flavour-mixing terms is provided by the so-called mass-insertion approximation . This consists of choosing a simple basis for the gauge interactions and, in that basis, to perform a perturbative expansion of the squark mass matrices around their diagonal. Employing a squark basis where all quark-squark-gaugino vertices involving down-type quarks are flavor diagonal, it is found that the potentially dominant SUSY contribution to the $`Z\overline{s}d`$ vertex arises from the double mixing $`(\stackrel{~}{u}_L^d\stackrel{~}{t}_R)\times (\stackrel{~}{t}_R\stackrel{~}{u}_L^s)`$ . Indirect bounds on these mixing terms dictated by vacuum-stability, neutral-meson mixing and $`bs\gamma `$ leave open the possibility of large effects . More stringent constraints can be obtained employing stronger theoretical assumptions on the flavour structure of the SUSY model . However, the possibility of sizable modifications of $`K\pi \nu \overline{\nu }`$ widths (including enhancements of more than one order of magnitude in the case of $`K_L\pi ^0\nu \overline{\nu }`$) cannot be excluded a priori.
Interestingly a non-standard $`Z\overline{s}d`$ vertex can be generated also in non-SUSY extensions of the SM (see e.g. ). It is therefore useful trying to constraint this scenario in a model-independent way. At present the best direct limits on the $`Z\overline{s}d`$ vertex are dictated by $`K_L\mu ^+\mu ^{}`$ , bounding the real part of the coupling, and $`\mathrm{}(ϵ^{}/ϵ)`$ , constraining the imaginary one. Unfortunately in both cases the bounds are not very accurate, being affected by sizable hadronic uncertainties. Concerning $`ϵ^{}/ϵ`$, it is worthwhile to mention that the non-standard $`Z\overline{s}d`$ vertex could provide an explanation for the apparent discrepancy between $`(ϵ^{}/ϵ)_{\mathrm{exp}}`$ and $`(ϵ^{}/ϵ)_{\mathrm{SM}}`$ , even if it is certainly too early to make definite statement in this respect . In the future the situation could become much more clear with precise determinations of both real and imaginary part of the $`Z_{\overline{s}d}`$ coupling by means of $`\mathrm{\Gamma }(K^+\pi ^+\nu \overline{\nu })`$ and $`\mathrm{\Gamma }(K_L\pi ^0\nu \overline{\nu })`$. Note that if we only use the present constraints from $`K_L\mu ^+\mu ^{}`$ and $`\mathrm{}(ϵ^{}/ϵ)`$, we cannot exclude enhancements up to one order of magnitude for $`\mathrm{\Gamma }(K_L\pi ^0\nu \overline{\nu })`$ and up to a factor $`3`$ for $`\mathrm{\Gamma }(K^+\pi ^+\nu \overline{\nu })`$ .
## 3 $`K\pi \mathrm{}^+\mathrm{}^{}`$ and $`K\mathrm{}^+\mathrm{}^{}`$
Similarly to $`K\pi \nu \overline{\nu }`$ decays, the short-distance contributions to $`K\pi \mathrm{}^+\mathrm{}^{}`$ and $`K\mathrm{}^+\mathrm{}^{}`$ are calculable with high accuracy and are potentially sensitive to NP effects. However, in these processes the size of long-distance contributions is usually much larger due to the presence of electromagnetic interactions. Only in few cases (mainly in $`CP`$-violating observables) long-distance contributions are suppressed and it is possible to extract the interesting short-distance information.
### 3.1 $`K\pi \mathrm{}^+\mathrm{}^{}`$
The single-photon exchange amplitude, dominated by long-distance dynamics, provides the largest contribution to the $`CP`$-allowed transitions $`K^+\pi ^+\mathrm{}^+\mathrm{}^{}`$ and $`K_S\pi ^0\mathrm{}^+\mathrm{}^{}`$. The former has been observed, both in the electron and in the muon mode, whereas only an upper bound of about $`10^{-6}`$ exists on $`B(K_S\pi ^0e^+e^{})`$ . This amplitude can be described in a model-independent way in terms of two form factors, $`W_+(z)`$ and $`W_S(z)`$, defined by
$`i{\displaystyle \int d^4xe^{iqx}⟨\pi (p)|T\left\{J_{\mathrm{elm}}^\mu (x)ℒ_{\mathrm{\Delta }S=1}(0)\right\}|K_i(k)⟩}=`$
$`{\displaystyle \frac{W_i(z)}{(4\pi )^2}}\left[z(k+p)^\mu -(1-r_\pi ^2)q^\mu \right],`$ (8)
where $`q=kp`$, $`z=q^2/M_K^2`$ and $`r_\pi =M_\pi /M_K`$. The two form factors are non singular at $`z=0`$ and, due to gauge invariance, vanish to lowest order in Chiral Perturbation Theory (CHPT) . Beyond lowest order one can identify two separate contributions to the $`W_i(z)`$: a non-local term, $`W_i^{\pi \pi }(z)`$, due to the $`K3\pi \pi \gamma ^{}`$ scattering, and a local term, $`W_i^{\mathrm{pol}}(z)`$, that encodes the contributions of unknown low-energy constants (to be determined by data) . At $`𝒪(p^4)`$ the local term is simply a constant, whereas at $`𝒪(p^6)`$ also a term linear in $`z`$ arises. We note, however, that already at $`𝒪(p^4)`$ chiral symmetry alone does not help to relate $`W_S`$ and $`W_+`$, or $`K_S`$ and $`K^+`$ decays .
Recent results on $`K^+\pi ^+e^+e^{}`$ and $`K^+\pi ^+\mu ^+\mu ^{}`$ by BNL-E865 indicate very clearly that, due to a large linear slope, the $`𝒪(p^4)`$ expression of $`W_+(z)`$ is not sufficient to describe the experimental data. This should not be considered a failure of CHPT, but rather an indication that large $`𝒪(p^6)`$ contributions are present in this channel. (This should not be surprising, since in this mode sizable next-to-leading order contributions could arise due to vector-meson exchange.) Indeed the $`𝒪(p^6)`$ expression of $`W_+(z)`$ seems to fit the data well. Interestingly, this is not only due to a new free parameter appearing at $`𝒪(p^6)`$, but also to the presence of the non-local term. The evidence for the latter provides a genuinely significant test of the CHPT approach.
In the $`K_L\pi ^0\mathrm{}^+\mathrm{}^{}`$ decay the long-distance part of the single-photon exchange amplitude is forbidden by $`CP`$ invariance but it contributes to the processes via $`K_L`$-$`K_S`$ mixing, leading to
$$B(K_L\pi ^0e^+e^{})_{\mathrm{CPV}\mathrm{ind}}=3\times 10^{-3}B(K_S\pi ^0e^+e^{}).$$
(9)
On the other hand, the direct-$`CP`$-violating part of the decay amplitude is very similar to the one of $`K_L\pi ^0\nu \overline{\nu }`$ but for the fact that it receives an additional short-distance contribution due to the photon penguin. Within the SM, this theoretically clean part of the amplitude leads to
$$B(K_L\pi ^0e^+e^{})_{\mathrm{CPV}\mathrm{dir}}^{\mathrm{SM}}=0.67\times 10^{-10}\left[\frac{\overline{m}_t(m_t)}{170\mathrm{GeV}}\right]^2\left[\frac{\mathrm{}\lambda _t}{\lambda ^5}\right]^2,$$
(10)
and, similarly to the case of $`B(K_L\pi ^0\nu \overline{\nu })`$, it could be substantially enhanced by SUSY contributions . The two $`CP`$-violating components of the $`K_L\pi ^0e^+e^{}`$ amplitude will in general interfere. Given the present uncertainty on $`B(K_S\pi ^0e^+e^{})`$, at the moment we can only set the rough upper limit
$$B(K_L\pi ^0e^+e^{})_{\mathrm{CPV}\mathrm{tot}}^{\mathrm{SM}}\lesssim \mathrm{few}\times 10^{-11}$$
(11)
on the sum of all the $`CP`$-violating contributions to this mode . We stress, however, that the phases of the two $`CP`$-violating amplitudes are well known. Thus, if $`B(K_S\pi ^0e^+e^{})`$ is measured, it will be possible to determine the interference between the direct and indirect $`CP`$-violating components of $`B(K_L\pi ^0e^+e^{})_{\mathrm{CPV}}`$ up to a sign ambiguity. Finally, it is worth noting that evidence for $`B(K_L\pi ^0e^+e^{})_{\mathrm{CPV}}`$ above the $`10^{-10}`$ level, possible within specific supersymmetric scenarios , would be a clear signal of physics beyond the SM.
An additional contribution to $`K_L\pi ^0\mathrm{}^+\mathrm{}^{}`$ decays is generated by the $`CP`$-conserving process $`K_L\pi ^0\gamma \gamma \pi ^0\mathrm{}^+\mathrm{}^{}`$ . This, however, does not interfere with the $`CP`$-violating amplitude and, as we shall discuss in the next section, it is quite small ($`\lesssim 4\times 10^{-12}`$) in the case of $`K_L\pi ^0e^+e^{}`$.
### 3.2 $`K_Ll^+l^{}`$
The two-photon intermediate state plays an important role in $`K_L\mathrm{}^+\mathrm{}^{}`$ transitions. This is by far the dominant contribution in $`K_Le^+e^{}`$, where the dispersive integral of the $`K_L\gamma \gamma l^+l^{}`$ loop is dominated by the term proportional to $`\mathrm{log}(m_K^2/m_e^2)`$. The presence of this large logarithm also implies that $`\mathrm{\Gamma }(K_Le^+e^{})`$ can be estimated with relatively good accuracy in terms of $`\mathrm{\Gamma }(K_L\gamma \gamma )`$, yielding the prediction $`B(K_Le^+e^{})\simeq 9\times 10^{-12}`$, which recently seems to have been confirmed by the four events observed at BNL-E871 .
More interesting from the short-distance point of view is the case of $`K_L\mu ^+\mu ^{}`$. Here the two-photon long-distance amplitude is not enhanced by large logs and it is almost comparable in size with the short-distance one, sensitive to $`\mathrm{}V_{td}`$ . Unfortunately the dispersive part of the two-photon contribution is much more difficult to be estimated in this case, due to the stronger sensitivity to the $`K_L\gamma ^{}\gamma ^{}`$ form factor. Despite the precise experimental determination of $`B(K_L\mu ^+\mu ^{})`$, the present constraints on $`\mathrm{}V_{td}`$ from this observable are not very stringent . Nonetheless, the measurement of $`B(K_L\mu ^+\mu ^{})`$ is still useful to put significant bounds on possible NP contributions. Moreover, we stress that the uncertainty of the $`K_L\gamma ^{}\gamma ^{}\mu ^+\mu ^{}`$ amplitude could be partially decreased in the future by precise experimental information on the form factors of $`K_L\gamma \mathrm{}^+\mathrm{}^{}`$ and $`K_Le^+e^{}\mu ^+\mu ^{}`$ decays, especially if these would be consistent with the general parameterization of the $`K_L\gamma ^{}\gamma ^{}`$ vertex proposed in .
## 4 Two-photon processes
$`K\pi \gamma \gamma `$ and $`K\gamma \gamma `$ decays are completely dominated by long-distance dynamics and are therefore not particularly useful for NP searches. However, these modes are interesting, on the one hand, for performing precision tests of CHPT and, on the other hand, for estimating long-distance corrections to the $`\mathrm{}^+\mathrm{}^{}`$ channels (see e.g. and references therein).
Among the CHPT tests, an important role is played by $`K_S\gamma \gamma `$. The first non-vanishing contribution to this process arises at $`𝒪(p^4)`$ and, being generated only by a finite loop amplitude, is completely determined . Since in this channel vector meson exchange contributions are not allowed, and unitarity corrections are automatically included in the $`𝒪(p^2)`$ coupling , we expect that the $`𝒪(p^4)`$ result provides a good approximation to the full amplitude. This is confirmed by present data , but a more precise determination of the branching ratio is needed in order to perform a more stringent test.
Similarly to the $`K_S\gamma \gamma `$ case, the leading non-vanishing contribution to $`K_L\pi ^0\gamma \gamma `$ also arises only at $`𝒪(p^4)`$ and is completely determined . However, in this case large $`𝒪(p^6)`$ corrections can be expected due to both unitarity corrections and vector meson exchange contributions. Indeed the $`𝒪(p^4)`$ prediction for $`B(K_L\pi ^0\gamma \gamma )`$ turns out to be substantially smaller (by more than a factor of 2) than the experimental findings . After the inclusion of unitarity corrections and vector meson exchange contributions, both the spectrum and the branching ratio of this decay can be expressed in terms of a single unknown coupling: $`a_V`$ . The recent KTeV measurement has shown that the determination of $`a_V`$ from both the spectrum and the branching ratio of $`K_L\pi ^0\gamma \gamma `$ leads to the same value, $`a_V=-0.72\pm 0.08`$, providing an important consistency check of this approach.
As anticipated, the $`K_L\pi ^0\gamma \gamma `$ amplitude is also interesting since it produces a $`CP`$-conserving contribution to $`K_L\pi ^0\mathrm{}^+\mathrm{}^{}`$ . For $`\mathrm{}=e`$ the leading $`O(p^4)`$ contribution is helicity suppressed and only the $`O(p^6)`$ amplitude with the two photons in $`J=2`$ leads to a non-vanishing $`B(K_L\pi ^0e^+e^{})_{\mathrm{CPC}}`$ . Given the recent experimental result , this should not exceed $`4\times 10^{-12}`$ . Moreover, we stress that the Dalitz plot distributions of the $`CP`$-conserving and $`CP`$-violating contributions to $`K_L\pi ^0e^+e^{}`$ are substantially different: in the first case the $`e^+e^{}`$ pair is in a state with $`J=2`$, whereas in the latter it has $`J=1`$. Thus in principle it is possible to extract the interesting $`B(K_L\pi ^0e^+e^{})_{\mathrm{CPV}}`$ from a Dalitz plot analysis of the decay. On the other hand, the $`CP`$-conserving contribution is enhanced and more difficult to subtract in the case of $`K_L\pi ^0\mu ^+\mu ^{}`$, where the helicity suppression of the leading $`O(p^4)`$ contribution (photons in $`J=0`$) is much less effective (see Heiliger and Sehgal in ).
## 5 Conclusions
Rare $`K`$ decays provide a unique opportunity to perform high precision tests of $`CP`$ violation and flavour mixing, both within and beyond the SM.
A special role is undoubtedly played by $`K\pi \nu \overline{\nu }`$ decays. In some NP scenarios sizable enhancements to the branching ratios of these modes are possible and, if detected, these would provide the first evidence for physics beyond the SM. Nevertheless, even in absence of such enhancements, precise measurements of $`K\pi \nu \overline{\nu }`$ widths will lead to unique information about the flavour structure of any extension of the SM.
Among decays into a $`\mathrm{}^+\mathrm{}^{}`$ pair, the most interesting one from the short-distance point of view is probably $`K_L\pi ^0e^+e^{}`$. However, in order to extract precise information from this mode an experimental determination (or a stringent upper bound) on $`B(K_S\pi ^0e^+e^{})`$ is also necessary.
# Dilatancy and friction in sheared granular media
## I Introduction
Problems related to interfacial friction are very important from a practical and conceptual point of view , and in spite of its wide domain of application sliding friction is still not completely understood. In order to construct efficient machines in engineering science, or to understand geophysical events like earthquakes, it is necessary to understand several aspects of friction dynamics. Besides usual solid-solid contacts, the sliding interface can be lubricated with molecular fluids or filled by a granular gouge, and the problem becomes strongly dependent on the internal dynamics of the material itself. In experiments on lubricated surfaces with thin layers of molecular fluids the friction force depends on the thermodynamic phase of the lubricant, which in turn depends on the shear stress . The friction force is thus directly related to the microscopic dynamics of the system and a description of sliding friction cannot be achieved without a good microscopic understanding of the problem. Understanding the frictional properties of granular matter turns out to be an even harder task, since basic problems like stress propagation in a static packing remain largely unsolved due to the disordered nature of the stress repartition inside the medium. Moreover, when a granular medium is sheared, it reorganizes, modifying the geometrical disorder. The microscopic arrangement of the grains and their compaction have an important effect on the friction since, in order to deform the medium, one has to overcome several geometrical constraints.
The understanding of sheared granular media has recently advanced thanks to experiments and numerical studies . The response to an external shear stress can be characterized by the dilatancy, which measures the modification of the compaction of the granular medium during the flow . We note that in both lubricated and granular interfaces the friction force has a dynamical origin. Since a sheared material modifies its own internal state fluidizing or changing structure, a natural approach to the problem is to describe phenomenologically this change of state and to relate it to the macroscopic friction force. As we discussed previously, a complete theoretical description of sheared granular media is still not available, so that the analysis should strongly rely on experimental data.
Recent experiments , focusing on the stick-slip instability induced by friction in sheared granular layers, helped to elucidate the role of compaction and the microscopic origin of slip events. In particular, accurate measurements of the friction force and of the horizontal and vertical positions of the slider have made it possible to highlight the connections between dilatancy and friction. The apparatus used was composed of a slider dragged at constant velocity by a spring whose elongation measured the applied shear stress. The surface of the slider was roughened in order to avoid slip at the surface of the medium, so that friction would crucially depend on the internal structure of the medium. At low velocity, a stick-slip instability was observed and related to the modification of the granular compaction.
Friction of granular layers has been mainly studied in the framework of geophysical research using rate and state constitutive equations where the friction force is a function of an auxiliary variable describing the state of the interface. In this approach, one assumes that the microscopic events causing the movement of the slider are self-averaging and neglects the fluctuations. The quantities used in the constitutive equations are thus mean field-like. This assumption should be valid for sliding friction experiments on granular materials, where the size of the grain is much smaller than the length of the slider, so that the variables used in the model (velocity, displacement or friction force), are well defined macroscopical quantities.
The constitutive variable, related to the microscopic dynamics of the system, describes the dynamical history of the interface. In the case of solid-solid interfaces this variable was associated with the age of the contacts and described two opposite effects: the age of the contact increases the static friction force and the displacement of the slider renews the interface continuously so that the friction force decreases with velocity. Lubricated systems have been approached similarly using the rate of fluidization as a constitutive variable which captures two different effects. On one hand the confinement of the thin fluid layer induces a glassy transition resulting in a large static friction force. On the other hand an applied shear stress increases the temperature of the medium, favoring fluidization, thus reducing the friction force, which crucially depends on the ratio between the strength of the two effects.
In the case of granular media, a parameter suitable to characterize the frictional behavior is the compaction of the layers or the height of the slider which can be measured experimentally. Also in this case we can identify the competition between two opposite effects: the velocity of the slider keeps the layer dilated, lowering the friction force, and the weight of the slider induces recompaction. In this paper we present a model which includes these two effects in the framework of rate and state constitutive equations to describe typical effects like the stick-slip instability or the force-velocity hysteretic loop.
In Sec. II we concentrate on the description of the model, in Sec. III we describe the main results obtained by numerical integration of the model, in Sec. IV we present a stability analysis and the phase diagram. Finally, Sec. V presents a discussion and a summary of our results.
## II The model
Here we write rate and state constitutive equations in order to describe the frictional properties of granular media. The dynamics of the sliding plate is described by two constitutive equations. The first one is simply the equation of motion for the slider block driven by a spring of stiffness $`k`$ and submitted to a frictional force, which depends on velocity and dilatancy. The second equation is the evolution law for an auxiliary variable characterizing the dilatancy, which we will identify with the vertical position of the slider. This model could in principle be applied to geophysical situations, although in that case instead of a single elastic constant $`k`$, strain is mediated via the material bulk elasticity.
The frictional properties of a granular medium depend explicitly on its density: a dense granular medium submitted to a tangential stress tends to dilate, i.e to modify the granular packing and thus the friction force. It is not simple to measure granular density especially for non homogeneous systems, but global changes can be characterized by the vertical position of the sliding plate, which is thus an excellent candidate to describe the state of the system. Therefore, in agreement with Ref. , we write the equation of motion for the slider block as
$$m\ddot{x}=k(Vt-x)-F(z,\dot{x}),$$
(1)
where $`m`$ is the mass of the sliding plate, $`x`$ its position, $`k`$ the spring constant, $`V`$ the drag velocity, and $`F(z,\dot{x})`$ the friction force depending on the velocity $`\dot{x}`$ and on the height of the plate $`z`$.
If the slider is at rest, we need to apply a minimal constant force $`F_0`$ in order for it to move. When the force exceeds $`F_0`$, the slider moves and dilation will occur, reducing the friction. When the layer is fully dilated the friction force reduces to $`F_d<F_0`$. We assume that the friction force is velocity-dependent when the layer is partially dilated ($`z<z_m`$), and becomes independent of velocity in the stationary state, when the granular medium is fully dilated ($`z=z_m`$).
In summary (in the case $`\dot{x}>0`$), we write the friction force as
$$F(z,\dot{x})=F_d-\beta \frac{z-z_m}{R}-\nu \dot{x}\frac{z-z_m}{R}.$$
(2)
The first two terms in Eq. (2) give the friction force at rest ($`\dot{x}=0`$) as a function of $`z`$. In the fully expanded phase ($`z=z_m`$), the friction term is $`F=F_d<F_0`$, while in the compacted phase ($`z=0`$) it is $`F_0=F_d+\beta z_m/R`$. The velocity dependence is linear, mediated by the factor $`z-z_m`$, which vanishes when the bed is fully dilated. These equations should be compared with those presented in Ref., where the second term in Eq. (2) is not present.
In Eq. (2) $`F(z,\dot{x})`$ depends explicitly on $`z`$, which describes the vertical displacement of the slider. In order to complete the description of the dynamics, we must specify the evolution equation for $`z`$. We write the law controlling the dilation of the granular medium during shear as
$$\dot{z}=-\frac{z}{\eta }-\dot{x}\frac{z-z_m}{R}.$$
(3)
In Eq. (3) the second term dilates the support and can be seen as the response of the granular medium to the external tangential stress: when submitted to a shear rate $`\dot{x}`$, the medium dilates and $`z`$ increases. The factor ($`z-z_m`$) reduces to zero when the bed is fully dilated, so $`z_m`$ can be identified with the maximal height.
The first term allows for recompaction under the slider weight: in the case $`\dot{x}=0`$ the plate falls exponentially fast. At high velocity this term will not perturb significantly the system and the dynamics will be stationary. We are interested in the small velocity limit: Eqs. (2,3) will display an instability below a critical drag velocity $`V_c`$, as we will show in Sec. IV.
It is useful to rewrite the system of equations in terms of dimensionless variables
$`\stackrel{~}{t}=t{\displaystyle \frac{k}{\nu }},\stackrel{~}{\eta }=\eta {\displaystyle \frac{k}{\nu }},\stackrel{~}{x}={\displaystyle \frac{x}{R}},\stackrel{~}{z}={\displaystyle \frac{z}{R}},\stackrel{~}{z_m}={\displaystyle \frac{z_m}{R}},`$
$`\stackrel{~}{m}=m{\displaystyle \frac{k}{\nu ^2}},\stackrel{~}{V}=V{\displaystyle \frac{\nu }{Rk}},\stackrel{~}{v}=v{\displaystyle \frac{\nu }{Rk}},\stackrel{~}{F_d}={\displaystyle \frac{F_d}{Rk}},\stackrel{~}{\beta }={\displaystyle \frac{\beta }{Rk}}.`$
Defining $`\stackrel{~}{l}=\stackrel{~}{V}\stackrel{~}{t}-\stackrel{~}{x}-\stackrel{~}{F_d}`$, the system becomes
$`\dot{\stackrel{~}{l}}=\stackrel{~}{V}-\stackrel{~}{v},`$ (4)
$`\stackrel{~}{m}\dot{\stackrel{~}{v}}=\stackrel{~}{l}+(\stackrel{~}{z}-\stackrel{~}{z_m})(\stackrel{~}{v}+\stackrel{~}{\beta }),`$ (5)
$`\dot{\stackrel{~}{z}}=-{\displaystyle \frac{\stackrel{~}{z}}{\stackrel{~}{\eta }}}-(\stackrel{~}{z}-\stackrel{~}{z_m})\stackrel{~}{v}.`$ (6)
Assuming that these equations are valid for $`\dot{x}>0`$, we can analyze them for different spring constants and driving velocities.
## III Numerical simulations
We numerically solve the model (Eqs. (4-6)) using the fourth-order Runge-Kutta method, assuming that the slider plate sticks when its velocity is zero. We concentrate our analysis on two sets of parameters. The first set corresponds to experiments carried out with a dry granular medium. We compute the typical force-velocity diagram in order to fix the parameters. Then, using the same parameters, we test the validity of our model by calculating other quantities such as the slider velocity during a slip event, the spring elongation or the vertical displacement.
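A minimal sketch of such an integration, in the dimensionless variables of Eqs. (4)-(6), is given below. The parameter values and the crude treatment of the stick condition (the velocity is simply clamped at zero) are illustrative assumptions, not the fitted values used for the figures of this paper.

```python
import numpy as np

# Placeholder parameters in the dimensionless units of Eqs. (4)-(6); the spring
# constant is absorbed into the units, so only m, beta, eta, z_m and V remain.
m, beta, eta, zm, V = 1.0e-3, 1.0, 4.0, 0.1, 0.02

def rhs(y):
    l, v, z = y
    return np.array([V - v,                              # Eq. (4)
                     (l + (z - zm) * (v + beta)) / m,    # Eq. (5)
                     -z / eta - (z - zm) * v])           # Eq. (6)

def rk4_step(y, dt):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, nsteps = 1.0e-3, 500_000
y = np.array([0.0, 0.0, 0.0])        # start at rest and fully compacted
traj = np.empty((nsteps, 3))
for i in range(nsteps):
    y = rk4_step(y, dt)
    if y[1] < 0.0:                   # crude stick condition: no backward sliding
        y[1] = 0.0
    traj[i] = y
# traj[:, 0] is the spring elongation, traj[:, 1] the slider velocity and
# traj[:, 2] the dilatancy; whether the run settles into stick-slip or steady
# sliding depends on where (k, V) falls with respect to the boundary of Sec. IV.
```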
A second set of parameters is used to model wet granular media. We recover the instability at low velocity and low spring force and study the evolution of dilatancy and spring elongation before reaching the steady-state.
### A Dry granular media
Dry granular media exhibit stick-slip instabilities for relatively high velocities and it is difficult to achieve complete vertical displacement of the slider. For this reason the steady sliding regime has not been studied in detail in experiments. In order to quantitatively test our model we adjust the parameters to fit the experimental results. We present in Fig. 1 the force-velocity curve during slip comparing the experimental data from Ref. with the result of the integration of the model. The parameters used are given in the caption. The model is able to accurately describe the first part of the hysteretic loop (when the velocity increases), but slight deviations appear for small velocities for which also the experimental uncertainties are larger.
We numerically integrate the model using the previously obtained parameters, varying the spring constant and the driving velocity. For slow velocity and a small spring constant the system exhibits typical stick-slip dynamics. Fig. 2 shows the evolution of the variables of the model in this case: the first plot (Fig. 2(a)) shows the variation of the spring length which decreases abruptly at a regular frequency, when the horizontal plate position increases (Fig. 2(b)). Fig. 2(c) represents the velocity of the plate which is followed by an increase of the vertical position of the plate (Fig. 2(d)). We show in Fig. 3 a more detailed study of the slider velocity during a slip event. Near the transition between the stick-slip and the steady sliding regime the slider velocity appears to be almost independent of the driving velocity, in agreement with experiments. The stick-slip instability of the model is governed by Eqs. (4-6) and we present in Sec. IV the dynamical phase diagram computed by a linear stability analysis. When the slider is driven slowly the energy injected into the granular medium cannot keep the layers dilated and the motion stops after a short change in the horizontal position (slip event).
If we increase $`V`$ or $`k`$, the energy induced by the shear is sufficient to keep the granular layer dilated and the system evolves to a steady sliding state (cf. Fig. 4), which is stable with respect to small perturbations. This stationary state corresponds to a stable fixed point of Eqs. (4-6) (see Sec. IV). If the drag velocity is very large the steady sliding state becomes unstable due to inertial effects ($`m\ne 0`$) and the slider oscillates harmonically with frequency $`\omega =\sqrt{k/m}`$. This effect was experimentally observed in Ref. . We have plotted the result in Fig. 5 for two different perturbations, in order to show that the amplitude of the cycle depends on the strength of the perturbation.
A typical measurement performed in the framework of geophysical research , is the variation of the friction force with respect to a rapid change of the driving velocity. We have simulated this effect, and we show the result in Fig. 6. An increase of the driving velocity is followed by an increase of the friction force which then relaxes to a smaller value.
The phase diagram corresponding to the three different dynamical behaviors can be calculated analytically. We present the result in Sec. IV, where we study the linear stability of the model.
### B Wet granular media
The analysis performed in Sec. III A can be repeated for wet granular materials. The dynamics in this case is more stable and the stick-slip regime is more difficult to obtain experimentally, since the instability occurs at very slow velocity. In the wet case, the presence of water changes the dynamics of the grains. Under shear, grains reorganize subject to the fluid viscosity, but here we neglect the small hydrodynamic effects and consider only the grain dynamics with suitable parameters. Using the new class of parameters, we numerically solve Eqs. (4-6) and identify two regimes: steady sliding at high $`V`$ and stick-slip instability otherwise (see Sec. IV for more details). In Fig. 7 we show a typical plot of the different quantities in the stick-slip regime. The period of the oscillations is longer than in the dry case, and the fluctuations of the elongation smaller. One of the main differences from the dry case is the value of $`\eta `$ which governs the relaxation process and which is greater in the wet case as an effect of immersion. In Fig. 8 we show the steady state found at high velocity. It is interesting to remark that this behavior can be perfectly recovered with a simplified model, presented in Ref. , which however does not give rise to stick-slip instabilities. We will show in the next section that our model is equivalent to the model of Ref. for a given range of parameters. Fig. 9 represents the integration in the case $`\beta >\frac{\nu R}{\eta }`$ (the importance of the value of $`\beta `$ will be highlighted in Sec. IV).
Ref. also reported an experiment in which the slider was stopped abruptly, but the applied stress was not released. Under these conditions, the medium does not recompactify towards the initial state but remains dilated in an intermediate state. This feature cannot be captured by our model, since the evolution of $`z`$ does not explicitly depend on the applied stress but only on the horizontal velocity. In order to describe this effect, we modify Eq. (3) in order to explicitly include a stress dependence in the evolution of the dilatancy
$$\dot{z}=-\frac{z-AF_{ext}}{\eta }-\dot{x}\frac{z-z_m}{R},$$
(7)
where $`F_{ext}=k(Vt-x)`$ is the applied force and $`A`$ is a constant. The behavior of this model is similar to that of the simpler model introduced in Section II, but the zero-velocity fixed point explicitly depends on the applied stress (i.e. $`z^{}=AF_{ext}`$). Fig. 10 shows the solution of the model compared with the experiment of Ref. .
## IV Linear stability
The simple form of Eqs. (4-6) allows us to study analytically the linear stability of the system. We first concentrate on the inertial case and describe the main results about the dynamics of our problem (fixed point, critical curve). Next we discuss the origin of the instability and the connections with other models. Finally we investigate the nature of the bifurcation.
### A Inertial case
All the numerical results presented above have been obtained including inertial effects. The system of Eqs.(4-6) has a simple fixed point
$$l_c=z_m\frac{V\nu +\beta }{Rk+\eta Vk},v_c=V,z_c=z_m\frac{\eta V}{R+\eta V}.$$
(8)
We see that $`z_c`$ tends to $`z_m`$ when $`V`$ tends to infinity, in agreement with experimental results. The critical line can also be computed explicitly in the framework of linear stability analysis. We skip the details of the calculations and just give the result
$$k^{}=\left(\frac{\beta }{R}-\frac{\nu }{\eta }\right)\frac{z_m\nu \eta R+m(R+\eta V)^2}{\nu \eta R(R+\eta V)}.$$
(9)
Fig. 11 and Fig. 12 show the phase diagram in the $`k,V`$ plane for the parameters used previously (in Sec. III A and Sec. III B). For both dry and wet granular layers we recover the stick-slip regime at sufficiently small $`k`$ and $`V`$. In the dry case the critical velocity is higher than in the wet case and we can also identify the inertial regime on the right hand side of the phase diagram (see Fig. 11).
### B Non inertial case
If we are interested only in low velocity displacements, the dynamical bifurcation line can be easily computed neglecting the mass of the slider
$$k^{}=(\frac{\beta }{R}-\frac{\nu }{\eta })(\frac{z_m}{R+\eta V}).$$
(10)
Also in this case the dynamics is unstable for $`k`$ below the critical line, but there is no inertial regime. We have no experimental results to compare with this relation, which links all the relevant parameters of the model.
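For orientation, Eq. (10) is simple enough to tabulate directly; the short helper below (meant for placeholder, assumed parameter values rather than the fitted ones of this paper) returns the critical stiffness at a given driving velocity and, by inversion, the critical velocity at fixed stiffness, which is only meaningful when β/R > ν/η.

```python
def k_star(V, beta, nu, eta, R, zm):
    """Critical spring constant of Eq. (10); stick-slip is expected for k < k*."""
    return (beta / R - nu / eta) * zm / (R + eta * V)

def V_c(k, beta, nu, eta, R, zm):
    """Critical driving velocity obtained by inverting Eq. (10) at fixed k.

    Returns a negative number when the steady state is stable for all V > 0,
    i.e. when k already exceeds k*(V=0).
    """
    return ((beta / R - nu / eta) * zm / k - R) / eta
```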
Due to the simplicity of the non-inertial case, we can write our system in the traditional form of a Hopf bifurcation , and calculate the coefficient which determines the nature of the transition. Without the inertial term this coefficient simply reduces to zero, and therefore we have no information about the nature (super- or subcritical) of the transition without pushing the calculation to higher orders or including inertia. However, the calculation is particularly complex, so we only analyze the problem numerically (see Sec. IV D).
### C Dynamical friction force
The stick-slip instability is due to the dependence of the friction coefficient on the velocity. Here we compute the friction force corresponding to the fixed point and show that the sign of $`\beta /R-\nu /\eta `$ plays an important role in determining the presence of an instability. In the steady state the friction force is given by
$$F_c=F_d+z_m\frac{1}{R+\eta V}(\beta +\nu V).$$
(11)
For sufficiently high $`V`$, $`F_c`$ does not depend on $`V`$, in agreement with experiments, but for relatively small velocities $`F_c(V)`$ depends on $`V`$. The first derivative of the force is
$$\frac{dF_c(V)}{dV}=-\frac{z_mR\eta }{(R+\eta V)^2}\left(\frac{\beta }{R}-\frac{\nu }{\eta }\right).$$
(12)
We can thus identify three cases:
$``$ if ($`\beta /R-\nu /\eta `$) is positive then there is a positive $`k^{}`$ verifying Eq. (10) and below this $`k^{}`$ the system is unstable (the derivative of $`F_c(V)`$ with respect to $`V`$ is negative, i.e. $`F_c(V)`$ decreases with $`V`$).
$``$ if ($`\beta /R-\nu /\eta `$) is negative the system is always stable ($`k^{}`$ cannot be negative).
$``$ if ($`\beta /R-\nu /\eta =0`$) then $`F_c(V)`$ does not depend on $`V`$. In this case the system is stable and we can write the friction force as
$$F(z,\dot{x})=F_s+\nu \dot{z},$$
(13)
with $`F_s=F_d+\nu z_m/\eta `$. The form given in Eq. (13) for the friction force, together with Eq. (3), implies a friction coefficient independent of $`V`$ and a stable steady state for all values of the parameters. In the limit $`\eta \gg 1`$, and assuming Eq. (13), $`\beta `$ tends to 0 and the dilatancy rate is given by
$$\dot{z}=-\dot{x}\frac{z-z_m}{R},$$
(14)
which reproduces the model of Ref. .
### D Nature of the bifurcation line
The calculation in the non-inertial case does not allow us to know the exact nature of the transition. Thus we investigate this problem numerically: the system is perturbed near its fixed point in the vertical position with different displacements (from $`0.4\mu m`$ down to $`0.002\mu m`$). Two final states can be obtained depending on the position in the phase diagram: the system can evolve to the steady state or be driven to the stick-slip cycle. In an intermediate zone, depending on the strength of the perturbation, the system can reach either the fixed point or the stick-slip regime.
We identify three regimes: the first corresponds to the stick-slip regime where, independently of the amplitude of the perturbation, the system falls into a periodic cycle. In the second regime, associated with high driving velocity, the system evolves to the stable fixed point. In the third, intermediate regime, the final state depends on the initial perturbation: if the perturbation is sufficiently large the system falls into a periodic regime, while if the perturbation is weak it evolves towards the fixed point. The transition between the two regimes is discontinuous (i.e. subcritical). Fig. 13 shows the amplitude of the oscillations as a function of the driving velocity. It would be interesting to check experimentally the hysteretic nature of the bifurcation line.
The presence of a hysteretic transition line could be related to an underlying first-order phase transition in the layer density induced by the applied stress. Recently , analyzing the results of photoelastic disks in a two-dimensional shear cell, it has been argued that the density of the granular packing would be the order parameter of a second-order phase transition induced by shear. It would be interesting to relate the different experimental phase transitions through a suitable microscopic model.
## V Discussion and open problems
We have introduced a model to describe the friction force of a sheared granular medium, treating explicitly the dilatancy during the slip, in the framework of rate and state constitutive equations. This approach allows us to include in the description the effect of the movement of the grains and the dependence of the friction coefficient on the dynamics of the layer. The variables used are mean-field like, since they represent macroscopic quantities like the position or the velocity but they are sufficient to describe phenomenologically the system. We have integrated the model for two sets of parameters, in order to make quantitative predictions for two different experimental configurations corresponding to dry and wet granular media.
The results are in good agreement with experiments. In particular, we recover the hysteretic dependence of the friction force on velocity and obtain a good fit to the experimental data recorded in dry granular media. The effect of the weight of the slider plate is included in the model and allows us to recover a stick-slip instability at low velocity. The physical origin of the instability is then directly related to the recompaction of the material under normal stress. The dynamical phase diagram is calculated analytically both in the inertial and non inertial cases and inertia is found to change only the high velocity part of this diagram. The equations used to model the dependence of the friction law on the external parameters include explicitly the effect of recompaction in the evolution of the vertical slider position.
The use of constitutive equations to model the friction force on complex interfaces is the simplest way to obtain quantitative results on the dynamics of the system. This approach provides good results in various fields, from geophysics to nanotribology. In order to include the dynamics (or thermodynamics in the case of lubrication) of the material in the description, we need detailed information about the material used. Our knowledge of sheared granular media is very poor due to the particulate and disordered nature of such materials and it is difficult to characterize the internal stress and strain rate. A precise description of the friction force for granular systems should include some information about the stress repartition inside the sheared material. This is a difficult problem which even for the simple case of a static pile cannot be solved completely. In the dynamical regime, the velocity depends on the precise nature of the contacts and on the friction force induced by them. Statistical models are needed to obtain a more complete macroscopic description based on the microscopic grain dynamics. In this respect, the analogy with phase transitions could be extremely fruitful.
Experiments on granular flow over a rough inclined plane display an interesting behavior , which is governed by frictional properties. The dynamics stops abruptly when the drag force decreases and the system freezes with the grains remaining in a static configuration. These phenomena can be related to the dependence of the friction force on the velocity of the grains: an increase of the friction force when the velocity of the layer decreases can produce an instability as in the system discussed here. It will be interesting to see if the methods discussed in this paper can be applied to this and other situations.
We thank J. S. Rice, and S. Roux for useful discussions and encouragements. We are grateful to J-C. Geminard for providing us with the data of his experiments and for interesting remarks. S. Z. is supported by EC TMR Research Network under contract ERBFMRXCT960062.
# Possible spectral lines from the gaseous 𝛽Pictoris disk
## 1 Introduction
The $`\beta `$Pictoris disk is an evolved, replenished circumstellar disk around a main sequence star, and appears to be a planetary (or cometary) disk (Vidal-Madjar et al. 1998). In addition to the dust disk, gas has been probed through absorption lines along the line of sight to the star (Lagrange et al. 1998). The observations of this gas component historically gave the most unexpected and important results with the detection of what appear to be the first extra-solar comets ever observed (Ferlet et al. 1987, Vidal-Madjar et al. 1998). However, non-radial motions are still unknown because observations are limited to the gaseous absorption lines on the line of sight to the star. Here we present the results of exploratory HST observations aimed at observing the emission lines of ions in the disk with the spectrograph slit off the star.
## 2 Observations and data analysis
### 2.1 Observations
The observations were made on February 9, 1996 with the HST/GHRS using the SSA slit (0.22″$`\times `$0.22″) and echelle spectroscopy at an expected resolution of 80 000 to maximize the chance of detecting lines of a priori unknown width. After a peak-up on the star $`\beta `$Pictoris and the acquisition of reference spectra, the telescope was shifted to the south-west part of the $`\beta `$Pictoris disk, at a position angle of 31.5 degrees, to distances of 0.5″ and 1.5″ from the star. The log of the observations is summarized in Table 1. The candidate lines are those from ions for which strong absorptions have been seen in the stable component due to the gas disk and in the variable components due to the infalling comets. On this basis, we expected these lines to show the strongest fluorescence.
### 2.2 The method
The spectra observed on the disk present the same general shape as the spectra on the star but with a lower intensity level. Indeed, any spectrum of the disk obtained off the star ($`F_{obs}(\lambda )`$) is an addition of a “noise” component and a spectrum carrying information from the disk at the pointed position. The “noise” is the spectrum of the starlight scattered by the telescope ($`aF_{}(\lambda )`$, where $`F_{}(\lambda )`$ is the star spectrum) plus a background level ($`b`$) due to the fact that the background determined by the HST pipeline can be slightly miscalculated. The part of the spectrum due to the disk ($`F_{disk}`$) is the starlight scattered by the dust and the possible emission lines due to gaseous fluorescence. We thus consider that the additional noisy component ($`F_{noise}`$) is a linear combination of the stellar spectra ($`F_{}(\lambda )`$) obtained at the same wavelength a few orbits before:
$$F_{obs}(\lambda )=F_{noise}(\lambda )+F_{disk}(\lambda )aF_{}(\lambda )+b+F_{disk}(\lambda )$$
(1)
To put forward the potential presence of a component different from the dominant starlight scattered by the telescope, we divide the observed spectra by the stellar spectra. The presence of a disk contribution at a wavelength $`\lambda `$ will be detected if the ratio $`F_{obs}(\lambda )/(aF_{}(\lambda )+b)`$ is statistically significantly different from 1.
The main problem is thus the determination of $`a`$ and $`b`$, and of their error bars within a given confidence level. Assuming that the disk spectra (dust-scattered light plus gas emission lines) make a negligible contribution to the observed spectra ($`F_{noise}(\lambda )\gg F_{disk}(\lambda )`$), we can determine $`a`$ and $`b`$ by a $`\chi ^2`$ minimization of
$$\chi ^2=\underset{\lambda _i}{\sum }w_{\lambda _i}(aF_{}(\lambda _i)+b-F_{obs}(\lambda _i))^2,$$
(2)
where $`w_{\lambda _i}=1/\sigma _{\lambda _i}^2`$ is the weight of each measurement at $`\lambda _i`$. This procedure gives not only the best determination of $`a`$ and $`b`$ but also intervals of confidence for these constants.
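Since the model $`aF_{}(\lambda )+b`$ is linear in $`a`$ and $`b`$, the $`\chi ^2`$ minimization of Eq. (2) reduces to a weighted linear least-squares fit. The sketch below is a minimal illustration, assuming the star and disk spectra are already rebinned onto a common wavelength grid; it returns the best-fit values together with 1σ errors taken from the covariance matrix.

```python
import numpy as np

def fit_scattered_light(F_star, F_obs, sigma):
    """Weighted least-squares fit of F_obs ~ a*F_star + b (Eq. (2))."""
    w = 1.0 / sigma**2
    A = np.column_stack([F_star, np.ones_like(F_star)])  # design matrix
    AtW = A.T * w
    cov = np.linalg.inv(AtW @ A)          # covariance of (a, b)
    a, b = cov @ (AtW @ F_obs)
    a_err, b_err = np.sqrt(np.diag(cov))
    return (a, a_err), (b, b_err)
```

The disk spectrum then follows as the difference between the observed spectrum and the fitted scattered-light model.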
### 2.3 Results in the Fe ii lines
Spectra in the Fe ii wavelength range were obtained less than 10 minutes apart with the same instrument setting. The star and the disk spectra can easily be superimposed, as shown in Fig. 1. An excess of flux clearly appears in the blue and the red parts of the two strongest Fe ii lines at rest wavelengths 2374.461 Å and 2382.765 Å. If $`a`$ and $`b`$ are determined as described in Sect. 2.2, we get $`a^{-1}=197\pm 3`$ and $`b=(3\pm 2)\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup> Å<sup>-1</sup>. With these parameters, a plot of the ratio of the disk to the star spectra reveals an excess emission at the 2$`\sigma `$ level (Fig. 2). The flux from the disk can be evaluated from the difference between the two spectra ($`F_{obs}(\lambda )-(aF_{}(\lambda )+b)`$), and is $`F_{disk}\approx 10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup> Å<sup>-1</sup> between 50 and 150 km s<sup>-1</sup> in the red part of the lines and around −50 km s<sup>-1</sup> in the blue part (Fig. 3). This emission is about 1 Å wide and stronger in the red part of the lines.
### 2.4 Discussion
The detection of apparent excess emission in the two Fe ii lines can be explained in different ways. We propose that this can be a real detection of the emission through the scattering of the starlight by the Fe ii ions in very high velocity motions. This will be discussed in detail in the next section. However other possibilities must be evaluated.
First, this feature obviously cannot be simply due to a possible miscalculation (underestimate) of $`b`$, the background correction. Indeed, by underestimating $`b`$, we could find false emission-like features at wavelengths where the flux is low. But one would have to increase the estimate of $`b`$ above its 4$`\sigma `$ upper limit ($`8\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup> Å<sup>-1</sup>) to explain the detected emission feature by this effect alone. In addition, the emission is detected in both Fe ii lines, in particular in the Fe ii line at 2374.461 Å where the level is far above the zero level, and for which an error on the estimate of $`b`$ has almost no effect on the result. In fact, $`b`$ is well constrained by the very bottom of the strongest Fe ii lines, where the level is clearly less than $`1\times 10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup> Å<sup>-1</sup> (Fig. 1).
However, apparent emission produced by the combination of statistical noise and a poor estimate of the background level is possible in principle. Although not excluded, it is very unlikely that such a coincidence would produce apparent emission on both sides of the two strongest Fe ii lines.
The most important alternative to emission by Fe ii is a time variation of the $`\beta `$Pictoris spectrum between the observations of the template (the star) and the disk spectra. If the absorption component in the Fe ii lines had significantly decreased during the acquisition of the data, then the result is an apparent excess of emission in the second spectrum obtained with the slit off the star. Although the time between both spectra has been minimized, this possibility cannot be excluded without new observations, for example on the other side of the disk where the emission should be stronger in the blue.
## 3 The Fe ii emission lines
### 3.1 The dynamics of the Fe ii ions
If this detection is really due to emission by Fe ii ions, the line widths must be explained through the dynamics of these Fe ii ions in the disk. The Fe ii ions must be ejected from the $`\beta `$Pictoris system by the radiation pressure, which is stronger than gravity by a factor $`\beta _{FeII}\approx 5`$ (Lagrange et al., 1998). After ejection, they rapidly reach a constant asymptotic velocity $`v_{\mathrm{\infty }}`$. If they are ejected from a body on a circular orbit, $`v_{\mathrm{\infty }}\approx \sqrt{(2\beta -1)(GM/a_0)}`$, where $`a_0`$ is the radius of the orbit of the parent body. If they are ejected from a comet on a parabolic orbit, the final velocity is $`v_{\mathrm{\infty }}\approx \sqrt{2\beta GM/a_0}`$. In this simple scheme, the observed final velocity of about 100 km s<sup>-1</sup> (Fig. 3) corresponds to a production at about 1.5 AU from the star, or, equivalently, to the absence of gas drag beyond that distance.
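These expressions can be checked numerically. The sketch below (not from the paper) assumes a stellar mass of about 1.75 solar masses for $`\beta `$Pictoris, a value not quoted in this section, and inverts the two formulas to find the production distance corresponding to $`v_{\mathrm{\infty }}=100`$ km s<sup>-1</sup>:

```python
# Order-of-magnitude check of the ejection picture.
G, M_sun, AU = 6.674e-11, 1.989e30, 1.496e11   # SI units
M = 1.75 * M_sun                                # assumed stellar mass of beta Pictoris
beta = 5.0                                      # radiation pressure / gravity for Fe II

def a0_from_vinf(v_inf, parabolic=True):
    """Parent-body distance (AU) that yields asymptotic speed v_inf (m/s)."""
    fac = 2.0 * beta if parabolic else (2.0 * beta - 1.0)
    return fac * G * M / v_inf**2 / AU

print(a0_from_vinf(100e3))                      # ~1.5 AU for a parabolic parent orbit
print(a0_from_vinf(100e3, parabolic=False))     # ~1.4 AU for a circular parent orbit
```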
The emission lines are stronger in the red than in the blue. This is similar to the asymmetry already observed in the cometary absorption lines which are mainly redshifted (Beust et al. 1996). This last asymmetry is well-explained by the evaporation of comets with a small range of longitude of periastron (Beust et al. 1998). An alternative explanation for the observed asymmetry in the emission lines could be an extended shape for the dragging torus of gas needed to support the radiation pressure on the Ca ii and Fe ii ions observed at zero radial velocity in the stable gaseous disk (Lagrange et al. 1998, Beust et al. 1998). For both explanations, it is clear that the observed south-west branch of the disk must be the “red side” of the disk (Fig. 4). This provides an observational test : the north-east branch must present a larger emission in the blue lines.
### 3.2 The Fe ii density
The total brightness of Fe ii emission lines can be evaluated to be:
$$F_{emission}=\frac{\mathrm{\Omega }d^2sF_\nu ^{\beta \mathrm{Pic}}}{4\pi }\int \frac{n(r)}{r^2}𝑑x$$
(3)
in $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, where $`\mathrm{\Omega }`$ is the solid angle covered by the spectrograph slit. The SSA slit (0.22″x0.22″) gives $`\mathrm{\Omega }=10^{-12}`$ sr. $`d`$ is the distance to $`\beta `$Pictoris ($`d=19.3`$ pc $`\approx 6\times 10^{19}`$ cm). $`s`$ is the frequency integral of the cross section. $`F_\nu ^{\beta \mathrm{Pic}}`$ is the brightness per unit of frequency of $`\beta `$Pictoris seen from the Earth at the relevant wavelength. $`n(r)`$ is the density of the observed ion at a distance $`r`$ from the central star. $`dx`$ is the differential length along the line of sight. We can define a weighted integral equivalent to the column density by
$$\stackrel{~}{N}_{r_0}\equiv \int \frac{n(r)}{r^2}r_0^2𝑑x=\frac{4\pi r_0^2F_{emission}}{\mathrm{\Omega }d^2sF_\nu ^{\beta \mathrm{Pic}}},$$
(4)
where $`r_0`$ is the impact parameter of the line of sight ($`r_0`$(0.5″) $`=10`$ AU). For the Fe ii 2382 Å line, $`s=8\times 10^{-3}`$ cm<sup>2</sup> s<sup>-1</sup> and the observed emission is $`F_{emission}=F_\lambda ^{disk}\mathrm{\Delta }\lambda \approx 10^{-14}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}.`$ Finally, we have
$$\stackrel{~}{N}_{10\mathrm{A}\mathrm{U}}\approx 5.2\times 10^{26}\left(\frac{F_{emission}}{\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}}\right)\mathrm{cm}^{-2}$$
(5)
$$\stackrel{~}{N}_{10\mathrm{A}\mathrm{U}}\approx 5\times 10^{12}\mathrm{cm}^{-2}$$
(6)
This value is consistent with the Fe ii column density ($`N_{\text{Fe }\text{ii}}=\int n(r)𝑑r=3\times 10^{14}`$ cm<sup>-2</sup>) and with the hypothesis that the Fe ii ions are gathered in the dragging torus around 1 AU and follow an $`r^{-2}`$ distribution beyond this torus (Lagrange et al. 1998). To evaluate the Fe ii volume density, $`n_0`$, at $`r_0`$, we can define the dimensionless quantity $`K`$ by $`K\equiv \int n(r)r_0/(n_0r^2)𝑑x.`$ Then,
$$n_0=\frac{4\pi r_0F_{emission}}{\mathrm{\Omega }d^2sF_\nu ^{\beta \mathrm{Pic}}K}=\frac{\stackrel{~}{N}}{r_0K}.$$
(7)
If we make the assumption that $`n(r)=n_0(r_0/r)^\alpha `$, we have $`K_{\alpha =0}=\pi `$, $`K_{\alpha =1}=2`$, $`K_{\alpha =2}=\pi /2`$, …, $`K_{\alpha =8}=35\pi /128`$, and so on. $`\alpha =2`$ would correspond to gas expelled by radiation pressure. For any reasonable value of $`\alpha `$, within a factor of 2, $`K\approx 2`$. As a final result, we get
$$n_0\approx 2\times 10^{-2}\mathrm{cm}^{-3},\text{ at }10\text{ AU}.$$
(8)
The calculation has been done with the hypothesis of an optically thin line. We also assume that the filling factor in the vertical direction is 1, thus the obtained density $`n_0`$ is in fact a lower limit. The other Fe ii line at 2374 Å has an oscillator strength 10 times smaller but the stellar flux is larger at this wavelength ($`4\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup> Å<sup>-1</sup>). It gives about the same value of density within a factor of 2.
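The geometric factor and the resulting density can be checked with a few lines of code. Substituting $`x=r_0\mathrm{tan}\theta `$ in the definition of $`K`$ turns it into the integral of $`\mathrm{cos}^\alpha \theta `$ over the line of sight. The sketch below (not from the paper) reproduces the quoted values of $`K_\alpha `$ and the density of Eq. (8):

```python
# Numerical check of K_alpha and of n_0 = N~/(r_0*K).
import numpy as np
from scipy.integrate import quad

def K(alpha):
    # K = integral of cos^alpha(theta) d(theta) for theta in (-pi/2, pi/2)
    return quad(lambda th: np.cos(th)**alpha, -np.pi/2, np.pi/2)[0]

print([round(K(a), 3) for a in (0, 1, 2, 8)])   # pi, 2, pi/2, 35*pi/128 ~ 0.859

N_tilde = 5e12                                   # cm^-2, from Eq. (6)
r0 = 10 * 1.496e13                               # 10 AU in cm
print(N_tilde / (r0 * K(2)))                     # ~2e-2 cm^-3, as in Eq. (8)
```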
The corresponding Fe ii production rate can be roughly estimated from the flux of material ($`v=100`$ km s<sup>-1</sup>) through the area observed at 10 AU, of height $`H`$(0.22″) $`=4.4`$ AU: $`Q(\text{Fe }\text{ii})=2\pi r_0Hvn_0\approx 10^9`$ kg s<sup>-1</sup>. With solar abundances, this corresponds to the total disruption of about 10 asteroids per year with a radius of 30 km.
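The order of magnitude of this rate can be verified directly; the iron atomic mass used below is an input not stated in the text, and the geometry is the simple annulus described above (a sketch, not the authors' calculation):

```python
# Mass flux of Fe II through a ring of radius r0 and height H moving at speed v.
import numpy as np
AU = 1.496e13                                   # cm
m_Fe = 56 * 1.66e-24                            # g (assumed atomic mass of iron)
r0, H, v, n0 = 10 * AU, 4.4 * AU, 100e5, 2e-2   # cm, cm, cm/s, cm^-3
Q = 2 * np.pi * r0 * H * v * n0 * m_Fe          # g/s of Fe II through the annulus
print(f"{Q/1e3:.1e} kg/s")                      # ~1e9 kg/s, as quoted above
```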
## 4 Need for confirmation
To confirm the tentative detection presented in this paper, new observations are clearly needed. The first obvious method will be to observe the other (north-east) side of the disk, which should present the same feature but blueshifted instead of redshifted. We can also observe other lines which should present the same characteristics. The Fe ii lines at 2600 Å and the Mg ii lines at 2800 Å are well-suited candidates. The Cr ii line at 2050 Å, with an intermediate $`\beta _{\text{Cr }\text{ii}}\approx 3`$, is also an interesting target (Lagrange et al. 1998). STIS, with its long-slit capability, is technically better suited to this observation.
## 5 Conclusion
HST observations planned to detect the emission lines of ions in the $`\beta `$Pictoris disk with the spectrograph slit placed off the star gave a marginal detection of possible emission lines from Fe ii ions at 0.5″ from $`\beta `$Pictoris. This would suggest an Fe ii density of $`2\times 10^{-2}`$ cm<sup>-3</sup> at 10 AU.
If real, this indicates a large production rate of gas and dust, equivalent to the total disruption of 10 bodies of 30 kilometers in radius per year. This corresponds to a production rate of about $`2\times 10^{-7}`$ M<sub>Earth</sub> per year. New observations are obviously needed to confirm this detection.
# Many-body Theory vs Simulations for the pseudogap in the Hubbard model
## I Introduction
The two-dimensional Hubbard model is one of the key paradigms of many-body Physics and is extensively studied in the context of the cuprate superconductors. While there is now a large consensus about the fact that at half-filling $`\left(n=1\right)`$ the ground state has long-range antiferromagnetic (or spin-density wave) order, the route to this low-temperature phase is still a matter of controversy when the system is in the weak to intermediate coupling regime. In this regime, we know that the Mermin-Wagner theorem precludes a spin-density-wave phase transition at finite temperature but the issue of whether there is, or not, a precursor pseudogap at finite temperature in the single-particle spectral weight $`A(k_F,\omega )`$ is still unresolved. Different many-body approaches give qualitatively different answers to this pseudogap question. In particular, the widely used self-consistent Fluctuation Exchange Approximation (FLEX) does not find a pseudogap in the $`d=2`$ repulsive Hubbard model for any filling. A study of lattices of up to $`L=128`$ found that as the temperature is reduced the quasiparticle peak in $`A(k_F,\omega )`$ smears considerably while remaining maximum at $`\omega =0`$, signaling a deviation from the Fermi liquid behavior but no pseudogap. The same qualitative answer is found for attractive models. By contrast, the many-body approach that has given to date the best agreement with simulations of both static and imaginary-time quantities concludes to the existence of a precursor single-particle pseudogap in the weak to intermediate coupling regime, for both the attractive and repulsive $`d=2`$ Hubbard model, whenever the ground state has long-range order. While we will restrict ourselves to the $`d=2`$ repulsive model at half-filling, our results will be relevant to the more general question of the pseudogap since small changes in filling or changes from repulsive to attractive case do not generally necessitate fundamental changes in methodology. And the question of many-body methodology is one of our main concerns here. Further comments on the regime we do not address here, namely the strong-coupling regime, appear in the concluding paragraphs.
One may think that numerical results have already resolved the pseudogap issue defined above, but this is not so. Early Quantum Monte Carlo (QMC) data analytically continued by the Maximum Entropy method concluded that precursors of antiferromagnetism in $`A(𝐤,\omega )`$ were absent at any non-zero temperature in the weak to intermediate coupling regime ($`U<8t`$, $`U`$ is the Coulomb repulsion term and $`t`$ the hopping parameter). A subsequent study in which a singular value decomposition technique was used instead of Maximum Entropy, concluded to the opening of a pseudogap in $`A(k_F,\omega )`$ at low temperatures. Each of the two techniques has limitations. The singular value decomposition can achieve a better resolution at low frequencies, but we find that the quality of the spectra is influenced by the profile function introduced to limit the range of frequencies. Another difficulty is that it leads to negative values of $`A(𝐤,\omega )`$. As far as Maximum Entropy is concerned, recent advances , that we will use here, have made this method more reliable than the Classic version applied in Ref..
In this paper, we address the issue of the pseudogap in the $`d=2,`$ $`n=1`$ Hubbard model at weak to intermediate coupling, but it will be clear that the general conclusions are more widely applicable. We present QMC results and show that the finite-size behavior obtained for $`A(k_F,\omega )`$ is correctly reproduced by the method of Ref.. We also introduce a slight modification of the latter approach that makes the agreement even more quantitative. This many-body approach allows us to extrapolate to infinite size and show that the pseudogap persists even in lattices whose sizes are greater than the antiferromagnetic correlation length $`\xi `$, contrary to the statements made earlier. These sizes cannot be reached by QMC when the temperature is too low. We confirm that at low enough temperatures, the peak at $`\omega =0`$ at the Fermi wave vector is replaced by a minimum, corresponding to the opening of a pseudogap and by two side peaks that are precursors of the Bogoliubov quasiparticles. In contrast, we find that the $`A(k_F,\omega )`$ calculated by FLEX on small lattices are qualitatively different from those of QMC and do not have the correct size dependence. Since all many-body techniques involve some type of approximation, their reliability should be gauged by their capacity to reproduce, at least qualitatively, the Monte Carlo results in regimes where the latter are free from ambiguities. We thus conclude that Eliashberg-type approaches such as FLEX are unreliable in the absence of a Migdal theorem and that there is indeed a pseudogap in the weak to intermediate coupling regime at half-filling. It is likely, but not yet unambiguously proven, that consistency between the Green functions and vertices used in the many-body calculation is crucial to obtain the pseudogap.
## II Many-body approach
Many-body techniques of the paramagnon type do lead to a pseudogap but they usually have low-temperature problems because they do not satisfy the Mermin-Wagner theorem. No such difficulty arises in the approach of Ref.. This method proceeds in two stages. In the zeroth order step, the self-energy is obtained by a Hartree-Fock-type factorization of the four-point function with the additional constraint that the factorization is exact when all space-time coordinates coincide. Functional differentiation, as in the Baym-Kadanoff approach , then leads to a momentum- and frequency-independent irreducible particle-hole vertex for the spin channel that satisfies $`U_{sp}=U\langle n_{\uparrow }n_{\downarrow }\rangle /\left(\langle n_{\uparrow }\rangle \langle n_{\downarrow }\rangle \right)`$. The irreducible vertex for the charge channel is too complicated to be computed exactly, so it is assumed to be constant and its value is found by requiring that the Pauli principle in the form $`\langle n_\sigma ^2\rangle =\langle n_\sigma \rangle `$ be satisfied. More specifically, the spin and charge susceptibilities now take the form $`\chi _{sp}^{-1}\left(q\right)=\chi _0(q)^{-1}-\frac{U_{sp}}{2}`$ and $`\chi _{ch}^{-1}\left(q\right)=\chi _0(q)^{-1}+\frac{U_{ch}}{2}`$ with $`\chi _0`$ computed with the Green function $`G_\sigma ^0`$ that contains the self-energy whose functional differentiation gave the vertices. This self-energy is constant, corresponding to the Hartree-Fock-type factorization. The susceptibilities thus satisfy conservation laws, the Mermin-Wagner theorem, as well as the Pauli principle $`\langle n_\sigma ^2\rangle =\langle n_\sigma \rangle `$ implicit in the following two sum rules
$`{\displaystyle \frac{T}{N}}{\displaystyle \underset{q}{\sum }}\chi _{sp}\left(q\right)=\left\langle \left(n_{\uparrow }-n_{\downarrow }\right)^2\right\rangle =n-2\langle n_{\uparrow }n_{\downarrow }\rangle `$ (1)
$`{\displaystyle \frac{T}{N}}{\displaystyle \underset{q}{\sum }}\chi _{ch}\left(q\right)=\left\langle \left(n_{\uparrow }+n_{\downarrow }\right)^2\right\rangle -n^2=n+2\langle n_{\uparrow }n_{\downarrow }\rangle -n^2`$ (2)
where $`n`$ is the density. We use the notation $`q=(𝐪,iq_n)`$ and $`k=(𝐤,ik_n)`$ with $`iq_n`$ and $`ik_n`$ respectively bosonic and fermionic Matsubara frequencies. We work in units where $`k_B=1`$, $`\mathrm{\hbar }=1`$, and the lattice spacing and hopping $`t`$ are unity. The above equations, in addition to $`U_{sp}=U\langle n_{\uparrow }n_{\downarrow }\rangle /\left(\langle n_{\uparrow }\rangle \langle n_{\downarrow }\rangle \right)`$, suffice to determine the constant vertices $`U_{sp}`$ and $`U_{ch}`$. This Two-Particle Self-Consistent approach will be used throughout this paper, unless we refer to FLEX calculations.
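In practice the self-consistency reduces to two one-dimensional root-finding problems. The following self-contained sketch (not the code used for this work) illustrates how Eqs. (1)–(2) together with the ansatz $`U_{sp}=U\langle n_{\uparrow }n_{\downarrow }\rangle /(\langle n_{\uparrow }\rangle \langle n_{\downarrow }\rangle )`$ determine $`U_{sp}`$ and $`U_{ch}`$ at half-filling; the lattice size, temperature and Matsubara cutoff are illustrative choices, not the parameters of the paper.

```python
# Sketch of the two-particle self-consistency on a small half-filled square lattice.
import numpy as np
from scipy.optimize import brentq

t, U, T, L, n_w = 1.0, 4.0, 0.25, 8, 128
k = 2.0 * np.pi * np.arange(L) / L
eps = -2.0 * t * (np.cos(k)[:, None] + np.cos(k)[None, :])   # dispersion, mu = 0 at half filling
f = 1.0 / (np.exp(eps / T) + 1.0)                             # Fermi function

def chi0(qi, qj, wn):
    """Spin-summed Lindhard function chi_0(q, iq_n) from a direct k sum."""
    eq = np.roll(np.roll(eps, -qi, axis=0), -qj, axis=1)
    fq = np.roll(np.roll(f, -qi, axis=0), -qj, axis=1)
    de = eps - eq
    if wn == 0.0:
        safe = np.abs(de) > 1e-10
        term = np.where(safe, (fq - f) / np.where(safe, de, 1.0),
                        np.exp(eps / T) * f**2 / T)            # -df/de limit for degenerate terms
        return 2.0 * term.sum() / L**2
    return 2.0 * ((fq - f) / (1j * wn + de)).real.sum() / L**2

wn = 2.0 * np.pi * T * np.arange(n_w)                          # bosonic frequencies >= 0
chi0_tab = np.array([[chi0(i, j, w) for w in wn]
                     for i in range(L) for j in range(L)])
wts = np.ones(n_w); wts[1:] = 2.0                              # account for +/- frequencies

def local_sum(chi):
    return T * (chi * wts).sum() / L**2

def spin_rule(docc):
    U_sp = U * docc / 0.25                                     # <n_up> = <n_dn> = 1/2
    return local_sum(chi0_tab / (1.0 - 0.5 * U_sp * chi0_tab)) - (1.0 - 2.0 * docc)

docc_max = 0.999 * 0.5 / (U * chi0_tab.max())                  # stay below the chi_sp divergence
docc = brentq(spin_rule, 1e-4, min(docc_max, 0.2499))          # double occupancy from Eq. (1)
U_sp = U * docc / 0.25
U_ch = brentq(lambda u: local_sum(chi0_tab / (1.0 + 0.5 * u * chi0_tab)) - 2.0 * docc,
              0.0, 200.0)                                      # charge vertex from Eq. (2)
print(f"<n_up n_dn> = {docc:.4f}, U_sp = {U_sp:.3f}, U_ch = {U_ch:.3f}")
```

Because $`\chi _{sp}`$ would diverge when $`U_{sp}\chi _0/2\to 1`$, the double occupancy is bracketed below that point; the spin sum rule always has a solution before the divergence, which is how this construction avoids a finite-temperature phase transition.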
Once the two-particle quantities have been found as above, the next step of the approach of Ref. consists in improving the approximation for the single-particle self-energy by starting from an exact expression where the high-frequency Hartree-Fock behavior is explicitly factored out. One then substitutes into this exact expression the irreducible low-frequency vertices $`U_{sp}`$ and $`U_{ch}`$, as well as $`G_\sigma ^0(k+q)`$ and $`\chi _{sp}(q),\chi _{ch}(q)`$ computed above. In the original approach the final formula reads
$$\mathrm{\Sigma }_\sigma ^{(\ell )}(k)=Un_{-\sigma }+\frac{U}{4}\frac{T}{N}\underset{q}{\sum }\left[U_{sp}\chi _{sp}(q)+U_{ch}\chi _{ch}(q)\right]G_\sigma ^0(k+q).$$
(3)
Irreducible vertices, Green functions and susceptibilities appearing on the right-hand side of this expression are all at the same level of approximation. They are the same as those used in the calculations of Eq.(1), hence they are consistent in the sense of conserving approximations. The resulting self-energy $`\mathrm{\Sigma }_\sigma ^{(\ell )}(k)`$ on the left-hand side, though, is at the next level of approximation, so it differs from the self-energy entering the right-hand side.
There is, however, an ambiguity in obtaining the self-energy formula Eq.(3). Within the assumption that only $`U_{sp}`$ and $`U_{ch}`$ enter as irreducible particle-hole vertices, the self-energy expression in the transverse spin fluctuation channel is different. To resolve this paradox, consider the exact formula for the self-energy represented symbolically by the diagram of Fig.1. In this figure, the square is the fully reducible vertex $`\mathrm{\Gamma }(q,k-k^{\prime },k+k^{\prime }-q).`$ In all the above formulas, the dependence of $`\mathrm{\Gamma }`$ on $`k+k^{\prime }-q`$ is neglected since the particle-particle channel is not singular. The longitudinal version of the self-energy Eq.(3) takes good care of the singularity of $`\mathrm{\Gamma }`$ when its first argument $`q`$ is near $`(\pi ,\pi ).`$ The transverse version does the same for the dependence on the second argument $`k-k^{\prime }`$, which corresponds to the other particle-hole channel. One then expects that averaging the two possibilities gives a better approximation for $`\mathrm{\Gamma }`$ since it preserves crossing symmetry in the two particle-hole channels. Furthermore, one can verify that the longitudinal spin fluctuations in Eq.(3) contribute an amount $`U\langle n_{\uparrow }n_{\downarrow }\rangle /2`$ to the consistency condition $`\frac{1}{2}\mathrm{Tr}\left(\mathrm{\Sigma }^{(\ell )}G^0\right)=U\langle n_{\uparrow }n_{\downarrow }\rangle `$ and that each of the two transverse spin components also contributes $`U\langle n_{\uparrow }n_{\downarrow }\rangle /2`$ to $`\frac{1}{2}\mathrm{Tr}\left(\mathrm{\Sigma }^{(t)}G^0\right)=U\langle n_{\uparrow }n_{\downarrow }\rangle .`$ Hence, averaging Eq.(3) and the expression in the transverse channel also preserves rotational invariance. In addition, one verifies numerically that the exact sum rule $`\int 𝑑\omega ^{\prime }\mathrm{Im}\left[\mathrm{\Sigma }_\sigma (𝐤,\omega ^{\prime })\right]/\pi =U^2n_{-\sigma }\left(1-n_{-\sigma }\right)`$ determining the high-frequency behavior is satisfied to a higher degree of accuracy. As a consistency check, one may also verify that $`\frac{1}{2}\mathrm{Tr}\left(\mathrm{\Sigma }^{(t)}G^{(t)}\right)`$ differs by only a few percent from $`\frac{1}{2}\mathrm{Tr}\left(\mathrm{\Sigma }^{(t)}G^0\right).`$ We will thus use a self-energy formula that we call “symmetric”
$$\mathrm{\Sigma }_\sigma ^{\left(s\right)}(k)=Un_{-\sigma }+\frac{U}{8}\frac{T}{N}\underset{q}{\sum }\left[3U_{sp}\chi _{sp}(q)+U_{ch}\chi _{ch}(q)\right]G_\sigma ^0(k+q).$$
(4)
$`\mathrm{\Sigma }_\sigma ^{\left(s\right)}(k)`$ is different from so-called Berk-Schrieffer type expressions that do not satisfy the consistency condition between one- and two-particle properties, $`\frac{1}{2}\mathrm{Tr}\left(\mathrm{\Sigma }G\right)=U\langle n_{\uparrow }n_{\downarrow }\rangle .`$
In comparing the above self-energy formulas with FLEX, it is important to note that the same renormalized vertices and Green function appear in both the conserving susceptibilities and in the self-energy formula Eq.(4). In the latter, one of the external vertices is the bare $`U`$ while the other is dressed ($`U_{sp}`$ or $`U_{ch}`$ depending on the type of fluctuation exchanged). This means that the fact that Migdal’s theorem does not apply here is taken into account. This technique is to be contrasted with the FLEX approximation where all the vertices are bare ones, as if there was a Migdal theorem, while the dressed Green functions appear in the calculation. The irreducible vertex that is consistent with the dressed Green function is frequency and momentum dependent, contrary to the bare vertex appearing in the FLEX self-energy expression. In this Eliashberg-type self-consistent approach then, the Green functions are treated at a high level of approximation while all the vertices are bare, zeroth order ones. In other words, the basic elements of the perturbation theory are treated at extremely different levels of approximation.
## III Monte Carlo vs many-body calculations
Our Monte Carlo results were obtained with the determinantal method using typically $`10^5`$ Monte Carlo updates per space-time point. The inverse temperature is $`\beta =5`$, the interaction strength is $`U=4`$ and periodic boundary conditions on a square lattice are used. Other details about the simulations may be found in the captions. Our detailed analysis is for the single-particle spectral weight $`A(𝐤,\omega )`$ at the wave-vector $`k=(0,\pi )`$ but other wave vectors will also be shown in the last figure of the paper. The Monte Carlo results are influenced by the statistical uncertainty, by the systematic error introduced through imaginary-time discretization, $`\mathrm{\Delta }\tau `$, and by the finite size, $`L`$, of the system. The two calculations with $`\mathrm{\Delta }\tau =1/10`$ in Fig.2a show that increasing the number of QMC sweeps (smaller $`\sigma `$, defined in Fig.2) leads to a more pronounced pseudogap. The same figure also shows calculations with the same $`\sigma `$ but different values of $`\mathrm{\Delta }\tau `$ (systematic error is of order $`\left(\mathrm{\Delta }\tau \right)^2`$). For $`\mathrm{\Delta }\tau \le 1/10`$, the decrease in pseudogap depth with decreasing $`\mathrm{\Delta }\tau `$ becomes less than the accuracy achievable by the maximum entropy inversion. If the pseudogap persists when $`L\rightarrow \mathrm{\infty }`$ at fixed $`\sigma `$ and fixed $`\mathrm{\Delta }\tau =1/10`$ it should be even more pronounced with a larger number of QMC sweeps (smaller $`\sigma `$). The size analysis needs to be done however in more detail since increasing the system size $`L`$ at fixed $`\sigma `$ and $`\mathrm{\Delta }\tau `$ leads to a smaller pseudogap, as shown on the top left panel of Fig.3a.
It is customary to analytically continue imaginary time QMC using the Maximum Entropy algorithm. To provide a faithful comparison with the many-body approaches, we use the imaginary-time formalism for these methods and analytically continue them for the same number of imaginary-time points, using precisely the same Maximum entropy approach as for QMC. While the round off errors in the many-body approaches are very small, it is preferable to artificially set them equal to those in the corresponding QMC simulations to have the same degree of smoothing. Many-body results from the symmetric self-energy formula $`\mathrm{\Sigma }^{\left(s\right)},`$ Eq.(4), for an infinite system are shown in Fig.2b. The thin solid line is a direct real-frequency calculation in the infinite-size limit. Maximum Entropy inversions of the $`L\mathrm{}`$ value of the many-body $`G\left(\tau \right)`$ shown on the same figure illustrate that with increasing accuracy the real-frequency result is more closely approximated. This confirms that Maximum Entropy simply smooths the results when artificially large errors are introduced in the analytical results. For this parameter range, the effects are appreciable but do not change qualitatively the results. Even the widths of the peaks are not too badly reproduced by Maximum Entropy. The error bars are obtained from the Maximum-Entropy Bayesian probability for different regularization parameters $`\alpha .`$ They are clearly a lower bound.
In Fig.3, we show the spectra obtained for three techniques for system sizes $`L=4,`$ $`6,`$ $`8`$ and $`10`$. The left-hand panel is the QMC data, the middle panel is obtained from $`\mathrm{\Sigma }^{\left(s\right)}`$ Eq.(4) while the last panel is for FLEX. The latter results for much larger lattices are not much different from those for the $`8\times 8`$ system. Since $`G(𝐤,\tau )=\int \frac{d\omega }{2\pi }\frac{e^{-\omega \tau }}{e^{-\beta \omega }+1}A(𝐤,\omega ),`$ the nearly flat ($`\tau `$-independent) portion in $`G(𝐤,\tau )`$ of the lower right-hand panel leads, in FLEX, to a maximum in $`A(k,\omega )`$ at $`\omega =0,`$ contrary to the Monte Carlo results. By contrast, as can be seen by comparing the middle and left panels, the agreement between Eq.(4) and QMC is very good, except for the height of the peaks. The finite-size dependence of the pseudogap for both QMC and Eq.(4) is similar: as the size increases, the depth of the pseudogap decreases. Some of the finite-size effects are present in the vertices $`U_{sp}`$ and $`U_{ch}`$.
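The link invoked here between a flat $`G(𝐤,\tau )`$ and a maximum of $`A(𝐤,\omega )`$ at $`\omega =0`$ can be made concrete with the kernel of the relation quoted above. The short sketch below (illustrative only, with arbitrary model spectra rather than data from this work) compares a single quasiparticle peak at $`\omega =0`$ with a symmetric two-peak pseudogap model:

```python
# Evaluate G(tau) = int dw/(2*pi) e^{-w*tau}/(e^{-beta*w}+1) A(w) for two model spectra.
import numpy as np

beta = 5.0
w = np.linspace(-8.0, 8.0, 4001)                 # frequency grid (units of t)
tau = np.linspace(0.0, beta, 11)

def gauss(w, w0, s):
    return np.exp(-0.5 * ((w - w0) / s)**2) / (s * np.sqrt(2.0 * np.pi))

models = {
    "single peak at w=0": gauss(w, 0.0, 0.5),
    "two-peak pseudogap": 0.5 * (gauss(w, -1.0, 0.5) + gauss(w, 1.0, 0.5)),
}
for name, A in models.items():                    # A normalized to 1; only an overall scale
    G = [np.trapz(np.exp(-w * t) / (np.exp(-beta * w) + 1.0) * A, w) for t in tau]
    print(name, np.round(G, 3))
```

For the single peak, $`G(\tau )`$ stays nearly constant across $`(0,\beta )`$, while the two-peak model produces a pronounced dip at $`\tau =\beta /2`$; this is the qualitative distinction between the FLEX and QMC panels discussed above.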
Fig.4a compares three results for the $`L=6`$ system: QMC (thick solid line), and the many-body approach of Ref. using either the symmetric $`\mathrm{\Sigma }^{\left(s\right)}`$ (Eq.(4), dotted line) or the longitudinal $`\mathrm{\Sigma }^{(\ell )}`$ (Eq.(3), thin solid line) self-energy formulas. In imaginary time, the agreement between QMC and $`\mathrm{\Sigma }^{\left(s\right)}`$ is striking. The position of the peaks in QMC also agrees better with the symmetric version $`\mathrm{\Sigma }^{\left(s\right)}`$, Eq.(4).
For the lattice sizes where the Monte Carlo data are qualitatively similar to those of Ref., and hence uncontroversial, Fig.3 has shown that there is a many-body approach that gives good agreement with the simulations. Although this many-body approach is not rigorous, especially deep in the pseudogap regime where it is mostly an extrapolation method, these tests suggest that it can give an understanding of finite-size effects in QMC data. There are two intrinsic lengths that are relevant, namely $`\xi `$ the antiferromagnetic correlation length, and $`\xi _{th}`$ the single-particle thermal de Broglie wavelength defined by $`v_F/T.`$ In simulations, $`\xi `$ may be estimated from the momentum-space width of the spin structure factor and $`\xi _{th}`$ from the Fermi velocity estimated from the maxima of $`A(𝐤,\omega )`$ at different wave vectors. For $`\beta =5,`$ and $`L=10`$ we have $`\xi 3.`$ At the $`(\pi ,0)`$ point, $`\xi _{th}`$ essentially vanishes since we are at the van Hove singularity, hence the condition $`L>\xi _{th}`$ is satisfied. If we had $`\xi _{th}>L,`$ one would be effectively probing the finite-size zero-temperature quantum regime. When the condition $`L>\xi _{th}`$ is satisfied, as is the case here, one has access to the finite temperature effects we are looking for. Once agreement on the pseudogap in QMC and the analytical approach has been established up to the regime $`\xi _{th}<L<\xi ,`$ the analytical approach can be used to reach larger lattice sizes $`\left(\text{such that }\xi _{th}<\xi <L\right)`$ with relatively modest computer effort. In Fig.4b
we show the spectra obtained by Eq.(4) for $`L=6`$ to $`64`$ and then for $`L=\mathrm{\infty }`$ (obtained from numerical integration). We see that the size dependence of the pseudogap becomes negligible around $`L=32`$ and that the pseudogap is quite sizable even though it is smaller than that in the largest size available in QMC calculations $`\left(L=10\right).`$ The size dependence of the pseudogap is qualitatively similar when the longitudinal form of the self-energy is used. We thus conclude that the pseudogap exists in the thermodynamic limit, contrary to the conclusion of Ref.. The increase in QMC noise with increasing system size in the latter work may partly explain the different conclusion.
The last figure, Fig.5, shows $`A(𝐤,\omega )`$ obtained by Maximum Entropy inversion of Monte Carlo data (left panel) of the many-body approach Eq.(4) (middle panel) and of FLEX (right panel). Using the symmetry of the lattice and particle-hole symmetry, $`A(𝐤,\omega )=A(𝐤+(\pi ,\pi ),-\omega )`$, one can deduce from this figure the results for all wave vectors of this $`8\times 8`$ lattice. The detailed agreement between Monte Carlo and the many-body approach is surprisingly good for all wave vectors, even far from the Fermi surface.
## IV Discussion
There are two interrelated conclusions to our work. First, detailed analysis of QMC results along with comparisons with many-body calculations show that there is a pseudogap in the $`n=1,d=2`$ Hubbard model, contrary to results obtained from previous Monte Carlo simulations and from self-consistent Eliashberg-type methods such as FLEX. Second, we have reinforced the case that the many-body methodology described here is an accurate and simple approach for studying the Hubbard model, even as we enter the pseudogap regime. While any self-energy formula that takes the form, $`\mathrm{\Sigma }_q\chi \left(q\right)G^0\left(k+q\right)`$ will in general extrapolate correctly to a finite zero-temperature gap, and hence show a pseudogap as long as $`\chi \left(q\right)`$ contains a renormalized classical regime, all other approaches we know of suffer from the following defects: they usually predict unphysical phase transitions, they do not satisfy as many exact constraints and in addition they do not give the kind of quantitative agreement with simulations that we have exhibited in Figs.3 to 5. Reasons why the mathematical structure of FLEX-type approaches fails to yield a pseudogap have been discussed before. The same arguments apply to the pseudogap problem away from half-filling and for the attractive Hubbard model as well. Since in the Hubbard model there is no Migdal theorem to justify the neglect of vertex corrections, it is likely, but unproven, that to obtain a pseudogap in FLEX-type approaches, one would need to include vertex-correction diagrams that are at the same level of approximation as the renormalized Green functions.
The physical origin of the pseudogap in the 2D Hubbard model has been discussed at great length previously: The precursors of antiferromagnetism in $`A(k_F,\omega )`$ are preformed Bogoliubov quasiparticles that appear as a consequence of the influence of renormalized classical fluctuations in two dimensions. They occur only in low dimension when the characteristic spin relaxation rate is smaller than temperature and when $`\xi /\xi _{th}>1`$. With perfect nesting (or in the attractive Hubbard model) they occur for arbitrarily small $`U.`$ The ground-state gap value (and corresponding single-particle pseudogap energy scale at finite $`T`$) depends on coupling in a BCS-like fashion.
The previous results show that strong-coupling local particle-hole pairs are not necessary to obtain a pseudogap. Such local particle-hole pairs are a different phenomenon. They lead to a single-particle Hubbard gap well above the antiferromagnetically ordered state, in any dimension but only when $`U`$ is large enough, in striking contrast with the precursors discussed in the present paper. The Hubbard gap also can exist without long-range order.
From a methodological point of view, the strong-coupling Hubbard gap is well understood, in particular within the dynamical mean-field theory or in strong-coupling perturbation expansion. However, the precursors of Bogoliubov quasiparticles discussed in the present paper are unobservable in infinite dimension, where dynamical mean-field theory is exact, because they are a low dimensional effect. It remains to be shown if $`1/d`$ expansions or other extensions of infinite-dimensional methods will succeed in reproducing our results.
Experimentally, one can distinguish a strong-coupling pseudogap from a precursor pseudogap (superconducting or antiferromagnetic) as follows. Ideally, if one has access experimentally to the critical quantity (spin or pair fluctuations) the difference between the two phenomena is clear since precursors occur only in the renormalized classical regime of these fluctuations. If one has access only to $`A(𝐤,\omega ),`$ there are also characteristic signatures. The precursors are characterized by a “dispersion relation” that is qualitatively similar to that in the ordered state. (However the intensity of the peaks in $`A(𝐤,\omega )`$ does not have the full symmetry of the ordered state). By contrast, a strong-coupling pseudogap does not show any signs of the symmetry of the ordered state at high enough temperature. Also, the temperature dependence of both phenomena is very different since precursors of Bogoliubov quasiparticles disappear at sufficiently high temperature in a manner that is strongly influenced by the Fermi velocity because of the condition $`\xi /\left(v_F/T\right)>1`$. Hence, even with isotropic interactions, the precursor pseudogaps appear at higher temperatures on points of the Fermi surface that have smaller Fermi velocity, even in cases when the zero temperature value of the gap is isotropic. This has been verified by QMC calculations for the attractive Hubbard model. By contrast, at sufficiently strong coupling, the Hubbard gap does not disappear even at relatively large temperatures, despite the fact that $`A(𝐤,\omega )`$ may rearrange over frequency ranges much larger than temperature.
The methods we have presented here apply with only slight modifications to the attractive Hubbard model case where superconducting fluctuations may induce a pseudogap in the weak to intermediate coupling regime relevant for the cuprates at that doping. Recent time-domain transmission spectroscopy experiments suggest that the renormalized classical regime for the superconducting transition in high-temperature superconductors has been observed. Concomitant peaks observed in photoemission experiments persist above the transition temperature in the normal state. They may be precursors of superconducting Bogoliubov quasiparticles. At exactly half-filling on the other hand, the paramagnetic state exhibits a strong-coupling (local particle-hole pairs) Hubbard gap.
S.M. benefited from a useful correspondence with S. R. White. We thank J. Deisz for extended correspondence on this subject. Contributions to the code from H. Touchette are gratefully acknowledged. Monte Carlo simulations were performed in part on an IBM-SP2 at the Centre d’Applications du Calcul Parallèle de l’Université de Sherbrooke. This work was supported by a grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Fonds pour la formation de Chercheurs et l’Aide à la Recherche (FCAR) of the Québec government.
# Microcanonical studies concerning the recent experimental evaluations of the nuclear caloric curve
## I Introduction
Nuclear multifragmentation is presently intensely studied both theoretically and experimentally. Due to the similitude existent between the nucleon-nucleon interaction and the van der Waals forces, signs of a liquid-gas phase transition in nuclear matter are searched. While the theoretical calculations concerning this problem started at the beginning of 1980 , the first experimental evaluation of the nuclear caloric curve was reported in 1995 by the ALADIN group . A wide plateau situated at around 5 MeV temperature lasting from 3 to 10 MeV/nucleon excitation energy was identified. The fact was obviously associated with the possible existence of a liquid-gas phase transition in nuclear matter and generated new motivations for further theoretical and experimental work. Similar experiments of EOS and INDRA followed shortly. Using different reactions they obtained slightly different caloric curves, the plateau - like region being absent in the majority of cases. Factors contributing to these discrepancies are both the precision of the experimental measurements and the finite-size effects of the caloric curve manifested through the dependency of the equilibrated sources \[$`E^{}(A)`$\] sequence on the reaction type.
Concerning the first point of view, recent reevaluations of the ALADIN group concerning the kinetic energies of the emitted neutrons brought corrections of about 10 $`\%`$ (in the case of the reaction <sup>197</sup>Au+<sup>197</sup>Au, 600 MeV/nucleon). More importantly however it was proven that the energies of the spectator parts are growing with approximately 30 $`\%`$ in the bombarding energy interval 600 to 1000 MeV/nucleon. On the other side, the universality of the quantity $`M_{IMF}(Z_{bound})`$ subject to the bombarding energy variation (which was theoretically proven to be a signature of statistical equilibrium) suggests that for the above-mentioned reactions the equilibrated sources sequence \[$`E^{}(A)`$\] should be the same. Consequently, we deal with an important nonequilibrium part included in the measured source excitation energies which may belong to both pre-equilibrium or pre-break-up stages . The SMM calculations suggest a significant quantity of nonequilibrium energy even in the case of the 600 MeV/nucleon bombarding energy reaction .
Thus, accurate theoretical descriptions of the break-up stage and of the sequential secondary particle emission are imperative in order to distinguish between the equilibrium and nonequilibrium parts of the measured excitation energies. These approaches should strictly obey the constraints of the physical system which, in the case of nuclear multifragmentation, are purely microcanonical. As we previously underlined, in spite of their success in reproducing some experimental data, the two widely used statistical multifragmentation models (SMM and MMMC ) do not strictly satisfy the microcanonical rules.
The present paper describes some refinements and improvements brought to the sharp microcanonical multifragmentation model proposed in Ref. and also the use of the model, in its new version, for the interpretation of the recent experimental data of the ALADIN group .
The improvements brought to the model are presented in Section II. Section III presents the new evaluations of temperature curves and the first evaluations (performed with this model) of heat capacities at constant volume ($`C_V`$) represented as a function of system excitation energy and temperature and also the comparison between the model predictions and the recent experimental HeLi isotopic temperature curve \[$`T_{HeLi}(Z_{bound})`$\] . Conclusions are drawn in Section IV.
## II Improvements brought to the microcanonical multifragmentation model
The improvements brought to the microcanonical multifragmentation model concern both the break-up stage and the secondary particle emission stage.
(i) Primary break-up refinements
Compared with the version of Ref., the present model has the following new features:
(a) The experimental discrete energy levels replace the level density for fragments with $`A\le 6`$ (in the previous version of the model a Thomas-Fermi-type level density formula was used for all particle excited states). Accordingly, in the statistical weight of a configuration and in the correction factor formulas the level density functions are replaced by the degeneracies of the discrete levels, $`(2S_i+1)`$ (here $`S_i`$ denotes the spin of the $`i`$th excited level). As a criterion for level selection (i.e. the level lifetime must be greater than the typical duration of a fragmentation event) we used $`\mathrm{\Gamma }\le 1`$ MeV, where $`\mathrm{\Gamma }`$ is the width of the energy level.
(b) In the case of the fragments with $`A>6`$ the level density formula is modified so as to take into account the strong decrease of the fragments' excited-state lifetimes (relative to the standard duration of a fragmentation event) with increasing excitation energy. To this aim the Thomas-Fermi-type formula is completed with the factor $`\mathrm{exp}(-ϵ/\tau )`$ (see Ref.):
$$\rho (ϵ)=\frac{1}{ϵ\sqrt{48}}\mathrm{exp}(2\sqrt{aϵ})\mathrm{exp}(-ϵ/\tau ),$$
(1)
where $`a=A/\alpha `$, $`\alpha =4.7(1.625+ϵ/B(A,Z))`$ and $`\tau =9`$.
(ii) Inclusion of the secondary decay stage
For the $`A>6`$ nuclei it was observed that the fragments excitation energies are sufficiently small such as the sequential evaporation scheme is perfectly applicable. According to Weisskopf theory (extended as to account for particles larger than $`\alpha `$), the probability of emitting a particle $`j`$ from an excited nucleus is proportional to the quantity:
$$W_j=\underset{i=0}{\overset{n}{\sum }}\int _0^{E^{\ast }-B_j-ϵ_i^j}\frac{g_i^j\mu _j\sigma _j(E)}{\pi ^2\mathrm{\hbar }^3}\frac{\rho _j(E^{\ast }-B_j-ϵ_i^j-E)}{\rho (E^{\ast })}E\text{d}E,$$
(2)
where $`ϵ_i`$ are the stable excited states of the fragment $`j`$ subject to particle emission (their upper limit is generally around 7 - 8 MeV), $`E`$ is the kinetic energy of the formed pair in the center of mass (c.m.) frame, $`g_i^j=2S_i+1`$ is the degeneracy of the level $`i`$, $`\mu _j`$ and $`B_j`$ are respectively the reduced mass of the pair and the separation energy of the particle $`j`$ and finally $`\sigma _j`$ is the inverse reaction cross-section. Due to the specificity of the multifragmentation calculations we considered the range of the emitted fragments $`j`$ up to the $`A=16`$ limit. For the inverse reaction cross-section we have used the optical model based parametrization from Ref. . The sequential evaporation process is simulated by means of standard Monte Carlo (see for example ).
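A rough sketch of how the weight of Eq. (2) can be evaluated and compared between decay channels is given below. It is not the authors' implementation: the inverse cross-section $`\sigma _j(E)`$ is replaced by a simple geometric $`\pi R^2`$ instead of the optical-model parametrization of Ref., and a schematic level-density parameter $`a=A/8`$ MeV<sup>-1</sup> is used instead of the excitation-dependent one of Eq. (1).

```python
# Relative Weisskopf emission weights (Eq. (2)), with simplified ingredients.
import numpy as np

def rho(eps, A, tau=9.0):
    """Schematic Fermi-gas level density with the exp(-eps/tau) cut-off; a = A/8 MeV^-1."""
    eps = np.asarray(eps, dtype=float)
    a = A / 8.0
    safe = np.clip(eps, 1e-6, None)
    return np.where(eps > 0.0,
                    np.exp(2.0 * np.sqrt(a * safe) - safe / tau) / (safe * np.sqrt(48.0)),
                    0.0)

def weisskopf_weight(E_star, A_par, A_j, B_j, levels, r0=1.2):
    """Relative weight for emitting particle j from a parent with excitation E_star (MeV).

    levels = [(epsilon_i, S_i), ...] are the stable excited states of the emitted particle;
    rho(..., A_dau) plays the role of the residual-nucleus level density rho_j of Eq. (2).
    Common factors cancel in channel-to-channel comparisons.
    """
    A_dau = A_par - A_j
    mu = A_j * A_dau / A_par * 931.494                        # reduced mass, MeV
    sigma = np.pi * (r0 * (A_dau**(1/3) + A_j**(1/3)))**2     # geometric cross-section, fm^2
    hbarc3 = 197.327**3                                       # (MeV fm)^3
    total = 0.0
    for eps_i, S_i in levels:
        E_max = E_star - B_j - eps_i
        if E_max <= 1.0:
            continue
        # stop 1 MeV below threshold, where the 1/eps form of the density is unreliable
        E = np.linspace(0.0, E_max - 1.0, 300)
        total += (2 * S_i + 1) * mu * sigma / (np.pi**2 * hbarc3) \
                 * np.trapz(E * rho(E_max - E, A_dau), E) / float(rho(E_star, A_par))
    return total

# Illustrative comparison of n, p and alpha emission from an excited A=60 nucleus
# (the separation energies below are round placeholder numbers, not tabulated values).
for name, A_j, B_j, S in [("n", 1, 10.0, 0.5), ("p", 1, 9.0, 0.5), ("alpha", 4, 5.0, 0.0)]:
    print(name, weisskopf_weight(50.0, 60, A_j, B_j, levels=[(0.0, S)]))
```

In the actual simulation these weights would be normalized to probabilities and one channel drawn at random at each step of the evaporation chain.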
For nuclei with $`4\le A\le 6`$ (the only excited states of $`A=4`$ nuclei taken into consideration are a few states above 20 MeV belonging to the $`\alpha `$ particle), depending on their amount of excitation we consider secondary break-up for $`ϵ>B(A,Z)/3`$ and Weisskopf evaporation otherwise (here $`ϵ`$ is the excitation energy of the fragment $`(A,Z)`$ and $`B(A,Z)`$ is its binding energy). The microcanonical weight formulas have the usual form except that the level density functions are here replaced by the degeneracies of the discrete levels. Due to the reduced dimensions of the $`A<6`$ systems, the break-up channels are countable (and a classical Monte Carlo simulation is appropriate) when a mean-field approach is used for the Coulomb interaction energy. In this respect, the Wigner-Seitz approach is employed for the Coulomb interaction:
$$U_C=\frac{3}{5}\frac{Z_0^2e^2}{R}-\underset{i}{\sum }\frac{3}{5}\frac{Z_i^2e^2}{R_{A_iZ_i}^C},$$
(3)
where $`A`$ and $`Z`$ denote the mass and the charge of the source nucleus, the resulting fragments have the index $`i`$, $`R_{A_iZ_i}^C/R_{A_iZ_i}=(1+\kappa )^{1/3}\left[(Z_i/A_i)/(Z/A)\right]^{1/3}`$ and $`V=(1+\kappa )V_0`$. Here $`V`$ denotes the break-up volume and $`V_0`$ the volume of the nucleus at normal density. It should be added that $`R`$ is the radius of the source nucleus at break-up and $`R_{A_iZ_i}`$ is the radius of fragment $`i`$ at normal density.
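In code, the Wigner-Seitz estimate of Eq. (3) is a short sum over fragments. The sketch below assumes $`r_0=1.2`$ fm and $`\kappa =2`$ purely for illustration; these are not necessarily the values adopted in the paper.

```python
# Wigner-Seitz Coulomb energy of a break-up configuration (Eq. (3)), in MeV.
E2 = 1.44  # e^2 in MeV fm

def coulomb_wigner_seitz(source, fragments, kappa=2.0, r0=1.2):
    """source = (A0, Z0); fragments = list of (A_i, Z_i)."""
    A0, Z0 = source
    R = r0 * A0**(1/3) * (1.0 + kappa)**(1/3)          # source radius at break-up density
    U = 0.6 * Z0**2 * E2 / R
    for A_i, Z_i in fragments:
        if Z_i == 0:                                    # neutrons do not contribute
            continue
        R_c = r0 * A_i**(1/3) * (1.0 + kappa)**(1/3) * ((Z_i / A_i) / (Z0 / A0))**(1/3)
        U -= 0.6 * Z_i**2 * E2 / R_c
    return U

# Example: one possible break-up of the (70, 32) source used below.
print(coulomb_wigner_seitz((70, 32), [(40, 18), (20, 9), (6, 3), (4, 2)]))
```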
For each event of the primary break-up simulation, the entire chain of evaporation and secondary break-up events is Monte Carlo simulated.
## III Results
Using the improved version of the microcanonical multifragmentation model, the caloric curves corresponding to two freeze-out radii (R=2.25 A<sup>1/3</sup> and R=2.50 A<sup>1/3</sup> fm) are reevaluated for the case of the source nucleus (70, 32) (the microcanonical caloric curves evaluated with the initial version of the model are given in Ref. ). These are presented in Fig. 1 (a). One can observe that the main features of the caloric curve from Refs. are reobtained. Thus, one can recognize the liquid-like region at the beginning of the caloric curve, then a large plateau-like region and finally the linearly increasing gas-like region. One may also notice that the behavior of the caloric curve under variation of the freeze-out radius is maintained: decreasing the freeze-out radius lifts the whole caloric curve.
As is well known, the curves of the constant volume heat capacity ($`C_V`$) as a function of system excitation energy ($`E^{\ast }`$) and as a function of temperature ($`T`$) may provide important information concerning the transition region and the transition order. For this reason the curves $`C_V(E^{\ast })`$ and $`C_V(T)`$ have been evaluated (see Fig. 1 (a) and Fig. 1 (b)). We recall that the constant volume heat capacity ($`C_V`$) is calculable in the present model using the formula:
$$C_V^{-1}=1-T^2\left\langle \left[\left(\frac{3}{2}N_C-\frac{5}{2}\right)\frac{1}{K}\right]^2\right\rangle +T^2\left\langle \left(\frac{3}{2}N_C-\frac{5}{2}\right)\frac{1}{K^2}\right\rangle .$$
(4)
It can be observed that the $`C_V(E^{\ast })`$ curve has a sharp maximum around 4.5 MeV/nucleon excitation energy for both considered freeze-out radii. This suggests that a phase transition exists in that region. The transition temperatures can be clearly distinguished by analyzing $`C_V(T)`$. One can observe two sharp-peaked maxima pointing to the transition temperatures corresponding to the two considered freeze-out radii.
In order to make a direct comparison between the calculated HeLi isotopic temperature and the recent experimental results, one has to deduce the sequence of excitation energy as a function of the system dimension \[$`E^{\ast }(A)`$\]. This is done as in Refs., using as matching criterion the simultaneous reproduction of the $`M_{IMF}(Z_{bound})`$ and $`a_{12}(Z_{bound})`$ curves. This pair of curves can fairly well identify the dimension and the excitation of the equilibrated nuclear source . Here $`M_{IMF}`$ stands for the multiplicity of intermediate mass fragments and is defined as the number of fragments with $`3\le Z\le 30`$ in a fragmentation event, while $`a_{12}`$ denotes the charge asymmetry of the two largest fragments and, for one fragmentation event, is defined as $`a_{12}=(Z_{max}-Z_2)/(Z_{max}+Z_2)`$ with $`Z_{max}\ge Z_2\ge 2`$, where $`Z_{max}`$ is the maximum charge of a fragment and $`Z_2`$ is the second largest charge of a fragment in the respective event. $`Z_{bound}`$ represents the bound charge in one fragmentation event and is defined as the sum of the charges of all fragments with $`Z\ge 2`$.
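For reference, these event-wise observables can be computed directly from the list of fragment charges of a simulated (or measured) event; a minimal sketch (the example charges are arbitrary):

```python
# M_IMF, a_12 and Z_bound for a single break-up event.
def event_observables(charges):
    z = sorted((c for c in charges if c >= 2), reverse=True)
    z_bound = sum(z)                                        # sum of charges with Z >= 2
    m_imf = sum(1 for c in charges if 3 <= c <= 30)         # intermediate-mass fragments
    a12 = (z[0] - z[1]) / (z[0] + z[1]) if len(z) >= 2 else None
    return m_imf, a12, z_bound

print(event_observables([28, 10, 8, 5, 2, 2, 1, 1]))        # -> (4, 0.473..., 55)
```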
The simultaneous fit of the calculated curves $`M_{IMF}(Z_{bound})`$ and $`a_{12}(Z_{bound})`$ on the corresponding experimental data (<sup>197</sup>Au+<sup>197</sup>Au at 1000 MeV/nucleon) is given in Fig. 2. The agreement is very good. The equilibrated source sequence \[$`E^{}(A)`$\] we used for this purpose is given in Fig. 3 together with the experimental evaluations of the excitation energies as a function of source dimension for the reaction <sup>197</sup>Au+<sup>197</sup>Au at 600, 800 and 1000 MeV/nucleon. The theoretically obtained sequence is relatively close to the experimental line corresponding to 600 MeV/nucleon bombarding energy. The deviations between the calculated equilibrated source sequence and the three experimental lines suggest that the experimental evaluations contain a quantity of non-equilibrium energy which grows with increasing the bombarding energy. As suggested in Ref. , its origin may be situated in both the pre-equilibrium and pre-break-up stage. These deviations are exclusively due to the neutron kinetic energies which, reevaluated from the 1995 data , are much larger.
It should also be pointed out that, in contrast to the SMM predictions , the quantity of non-equilibrium energy predicted by the present model is smaller, and thus the model-predicted equilibrated source sequence is closer to the experimental line of the 600 MeV/nucleon bombarding energy reaction.
After evaluating the sequence of equilibrated sources, a direct comparison of the calculated HeLi isotopic temperature curve with the ones recently evaluated by the ALADIN group is performed. For this purpose the uncorrected Albergo temperature is used, $`T_{HeLi}=13.33/\mathrm{ln}\left[2.18\left(Y_{{}_{}{}^{6}Li}/Y_{{}_{}{}^{7}Li}\right)/\left(Y_{{}_{}{}^{3}He}/Y_{{}_{}{}^{4}He}\right)\right]`$, the experimental values being divided by $`f_T=1.2`$ (the factor used in the ALADIN evaluation of the HeLi caloric curve, chosen so as to average the QSM, GEMINI and MMMC model predictions). The result is represented in Fig. 4 as a function of $`Z_{bound}`$. It can be observed that the agreement between the calculated $`T_{HeLi}(Z_{bound})`$ and the experimental data corresponding to the <sup>197</sup>Au+<sup>197</sup>Au reaction at 600 and 1000 MeV/nucleon bombarding energy is excellent over the entire range of $`Z_{bound}`$. In comparison, the SMM model predicts, in the region $`Z_{bound}\lesssim 25`$, a curve steeper than the experimental data.
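The Albergo estimator itself is a simple double ratio of isotope yields; a one-function sketch (with made-up yields, not data from the paper) reads:

```python
# Uncorrected HeLi isotopic temperature (MeV) from the double yield ratio.
import numpy as np

def t_heli(y_li6, y_li7, y_he3, y_he4):
    return 13.33 / np.log(2.18 * (y_li6 / y_li7) / (y_he3 / y_he4))

print(t_heli(1.0, 2.2, 0.30, 6.0))   # ~4.5 MeV for this illustrative set of yields
```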
## IV Conclusions
Summarizing, the microcanonical multifragmentation model from Ref. is improved by refining the primary break-up part and by including the secondary particle emission. The caloric curve re-deduced with the new version of the model preserves its general shape, manifesting a broad plateau-like region. The transition regions are clearly indicated by the sharp maxima of the $`C_V(E^{\ast })`$ and $`C_V(T)`$ curves. The model demonstrates the ability to simultaneously fit the defining characteristics of the nuclear multifragmentation phenomenon, $`M_{IMF}(Z_{bound})`$ and $`a_{12}(Z_{bound})`$. By evaluating the equilibrated source sequence $`E^{\ast }(A)`$ \[using the criterion of reproducing both $`M_{IMF}`$ and $`a_{12}`$ versus $`Z_{bound}`$\], a nonequilibrium part of the experimentally evaluated excitation energy, growing with increasing bombarding energy, is identified. The direct comparison of the calculated HeLi caloric curve shows excellent agreement with the experimental HeLi curves recently evaluated by the ALADIN group.
# Free Decay of Turbulence and Breakdown of Self-Similarity
no-problem/9908/hep-ex9908014.html
|
ar5iv
|
text
|
# First Observation of the Decay $`B\to J/\psi \varphi K`$
## Abstract
We present the first observation of the decay $`B\to J/\psi \varphi K`$. Using $`9.6\times 10^6`$ $`B\overline{B}`$ meson pairs collected with the CLEO detector, we have observed 10 fully reconstructed $`B\to J/\psi \varphi K`$ candidates, whereas the estimated background is $`0.5\pm 0.2`$ events. We obtain a branching fraction of $`\mathcal{B}(B\to J/\psi \varphi K)=(8.8_{-3.0}^{+3.5}[\mathrm{stat}]\pm 1.3[\mathrm{syst}])\times 10^{-5}`$. This is the first observed $`B`$ meson decay requiring the creation of an additional $`s\overline{s}`$ quark pair.
An observation of a $`B`$ meson decay requiring the creation of an additional $`s\overline{s}`$ quark pair in the final state would enhance our understanding of strong interactions in the final states of $`B`$ decays. Previous studies of such processes involved searches for the “lower vertex” $`\overline{B}\to D_s^+X`$ transitions; however, no signal was observed. The decay $`B\to J/\psi \varphi K`$ can occur only if an additional $`s\overline{s}`$ quark pair is created in the decay chain besides the quarks produced in the weak $`b\to c\overline{c}s`$ transition. The $`B\to J/\psi \varphi K`$ transition most likely proceeds as a three-body decay (Fig. 1). Another possibility is that the $`B\to J/\psi \varphi K`$ decay proceeds as a quasi-two-body decay in which the $`J/\psi `$ and $`\varphi `$ mesons are daughters of a hybrid charmonium state .
We searched for $`B^+\to J/\psi \varphi K^+`$ and $`B^0\to J/\psi \varphi K_S^0`$ decays, reconstructing $`J/\psi \to \ell ^+\ell ^{-}`$, $`\varphi \to K^+K^{-}`$, and $`K_S^0\to \pi ^+\pi ^{-}`$. Both $`e^+e^{-}`$ and $`\mu ^+\mu ^{-}`$ modes were used for the $`J/\psi `$ reconstruction. The data were collected at the Cornell Electron Storage Ring (CESR) with two configurations of the CLEO detector, called CLEO II and CLEO II.V. The components of the CLEO detector most relevant to this analysis are the charged particle tracking system, the CsI electromagnetic calorimeter, the time-of-flight system, and the muon chambers. In CLEO II, the momenta of charged particles are measured in a tracking system consisting of a 6-layer straw tube chamber, 10-layer precision drift chamber, and 51-layer main drift chamber, all operating inside a 1.5 T solenoidal magnet. The main drift chamber also provides a measurement of the specific ionization loss, $`dE/dx`$, used for particle identification. For CLEO II.V, the innermost wire chamber was replaced with a three-layer silicon vertex detector . The muon identification system consists of proportional counters placed at various depths in the steel absorber.
The results of this search are based upon an integrated luminosity of 9.1 $`\mathrm{fb}^{-1}`$ of $`e^+e^{-}`$ data taken at the $`\mathrm{{\rm Y}}(4S)`$ energy and 4.4 $`\mathrm{fb}^{-1}`$ recorded 60 MeV below the $`\mathrm{{\rm Y}}(4S)`$ energy. The upgraded TEVATRON
When making requirements on such kinematic variables as invariant mass or energy, we took advantage of well-understood track and photon-shower covariance matrices to calculate the expected resolution for each combination. Therefore we extensively used normalized variables, which allowed uniform candidate selection criteria to be used for the data collected with the CLEO II and CLEO II.V detector configurations.
The normalized invariant mass distributions for the $`J/\psi \to \ell ^+\ell ^{-}`$ signal in data are shown in Fig. 2. We required the normalized invariant mass to be from $`-10`$ to $`+3`$ (from $`-4`$ to $`+3`$) for the $`J/\psi \to e^+e^{-}`$ ($`J/\psi \to \mu ^+\mu ^{-}`$) candidates. The resolution in the $`\ell ^+\ell ^{-}`$ invariant mass is about 10 MeV$`/c^2`$. To improve the energy and momentum resolution of a $`J/\psi `$ candidate, we performed a fit constraining the mass of each $`J/\psi `$ candidate to the world average value .
Electron candidates were identified based on the ratio of the track momentum to the associated shower energy in the CsI calorimeter and specific ionization loss in the drift chamber. The internal bremsstrahlung in the $`J/\psi \to e^+e^{-}`$ decay as well as the bremsstrahlung in the detector material produce a long radiative tail in the $`e^+e^{-}`$ invariant mass distribution and impede efficient $`J/\psi \to e^+e^{-}`$ detection. We recovered some of the bremsstrahlung photons by selecting the photon shower with the smallest opening angle with respect to the direction of the $`e^\pm `$ track evaluated at the interaction point, and then requiring this opening angle to be smaller than $`5^{\circ }`$. The addition of the bremsstrahlung photons resulted in a relative increase of approximately 25% in the $`J/\psi \to e^+e^{-}`$ reconstruction efficiency without adding more background.
For the $`J/\psi \to \mu ^+\mu ^{-}`$ reconstruction, one of the muon candidates was required to penetrate the steel absorber to a depth greater than 3 nuclear interaction lengths. We relaxed the absorber penetration requirement for the second muon candidate if it was not expected to reach a muon chamber either because its energy was too low or because it pointed to a region of the detector not covered by the muon chambers. For these muon candidates we required the ionization signature in the CsI calorimeter to be consistent with that of a muon. Muons typically leave a narrow trail of ionization and deposit approximately 200 MeV of energy in the crystal calorimeter. Hadrons, on the other hand, quite often undergo a nuclear interaction in the CsI crystals that have a depth of 80% of a nuclear interaction length. Compared to imposing the absorber penetration requirement on both muon candidates, this procedure increased the $`J/\psi \to \mu ^+\mu ^{-}`$ reconstruction efficiency by 20% with an 80% increase in background.
We required that the charged kaon candidates have $`dE/dx`$ and, if available, time-of-flight measurements that lie within 3 standard deviations of the expected values.
If for the $`BJ/\psi \varphi K`$ decays we assume a uniform Dalitz distribution and isotropic decays of $`J/\psi `$ and $`\varphi `$ mesons, then the expected efficiency of the combined $`dE/dx`$ and time-of-flight selection is approximately 90% per kaon candidate. The $`dE/dx`$ measurements alone provide the $`K/\pi `$ separation of more than 4 standard deviations for 92% of the $`\varphi `$ daughter kaons and for 64% of the “bachelor” kaons from $`B`$ decay. We selected $`\varphi K^+K^{}`$ candidates by requiring the $`K^+K^{}`$ invariant mass to be within 10 MeV/$`c^2`$ of the $`\varphi `$ mass . We did not use the normalized $`K^+K^{}`$ invariant mass because the mass resolution (1.2 MeV/$`c^2`$) is smaller than the $`\varphi `$ width (4.4 MeV) .
The $`K_S^0`$ candidates were selected from pairs of tracks forming well-measured displaced vertices. The resolution in $`\pi ^+\pi ^{}`$ invariant mass is about 4 MeV$`/c^2`$. We required the absolute value of the normalized $`\pi ^+\pi ^{}`$ invariant mass to be less than 4, then we performed a fit constraining the mass of each $`K_S^0`$ candidate to the world average value .
The $`BJ/\psi \varphi K`$ candidates were selected by means of two observables. The first observable is the difference between the energy of the $`B`$ candidate and the beam energy $`\mathrm{\Delta }EE(J/\psi )+E(\varphi )+E(K)E_{\mathrm{beam}}`$. The resolution in $`\mathrm{\Delta }E`$ for the $`BJ/\psi \varphi K`$ candidates is approximately 6 MeV. The second observable is the beam-constrained $`B`$ mass $`M(B)\sqrt{E_{\mathrm{beam}}^2p^2(B)}`$, where $`p(B)`$ is the absolute value of the $`B`$ candidate momentum. The resolution in $`M(B)`$ for the $`BJ/\psi \varphi K`$ candidates is about 2.7 MeV/$`c^2`$; it is dominated by the beam energy spread. The distributions of the $`\mathrm{\Delta }E`$ vs $`M(B)`$ for $`B^+J/\psi \varphi K^+`$ and $`B^0J/\psi \varphi K_S^0`$ are shown in Fig. 3. We used the normalized $`\mathrm{\Delta }E`$ and $`M(B)`$ variables to select the $`BJ/\psi \varphi K`$ candidates and defined the signal region as $`|\mathrm{\Delta }E/\sigma (\mathrm{\Delta }E)|<3`$ and $`|(M(B)M_B)/\sigma (M(B))|<3`$. We observed 8(2) events in the signal region for the $`B^+J/\psi \varphi K^+`$ ($`B^0J/\psi \varphi K_S^0`$) mode. Considering that $`K^0`$ can decay as $`K_S^0`$ or as $`K_L^0`$, and also taking into account $`(K_S^0\pi ^+\pi ^{})`$ and the difference in reconstruction efficiencies, we expect to observe on average 4.3 $`B^+J/\psi \varphi K^+`$ candidates for every $`B^0J/\psi \varphi K_S^0`$ candidate.
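The two observables and the signal-region requirement translate directly into code. The sketch below is a hedged illustration: it uses (E, px, py, pz) tuples and an approximate world-average $`B`$ mass, and the resolutions $`\sigma (\mathrm{\Delta }E)`$ and $`\sigma (M(B))`$ are assumed to come from the candidate-by-candidate error propagation mentioned earlier.

```python
import math

def delta_e(p4_jpsi, p4_phi, p4_k, e_beam):
    """Energy difference Delta E = E(J/psi) + E(phi) + E(K) - E_beam."""
    return p4_jpsi[0] + p4_phi[0] + p4_k[0] - e_beam

def beam_constrained_mass(p4_jpsi, p4_phi, p4_k, e_beam):
    """M(B) = sqrt(E_beam^2 - |p(B)|^2), with p(B) the B-candidate momentum."""
    px = p4_jpsi[1] + p4_phi[1] + p4_k[1]
    py = p4_jpsi[2] + p4_phi[2] + p4_k[2]
    pz = p4_jpsi[3] + p4_phi[3] + p4_k[3]
    return math.sqrt(e_beam ** 2 - (px * px + py * py + pz * pz))

def in_signal_region(de, mb, sigma_de, sigma_mb, m_b_nominal=5.2792):
    """3-sigma box in the normalized Delta E and M(B) variables.
    m_b_nominal is an approximate world-average B mass in GeV/c^2."""
    return abs(de / sigma_de) < 3.0 and abs((mb - m_b_nominal) / sigma_mb) < 3.0
```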
The Dalitz plot and the cosine of the helicity angle distributions for the 10 $`BJ/\psi \varphi K`$ signal candidates are shown in Figs. 4 and 5. The helicity angle for $`J/\psi \mathrm{}^+\mathrm{}^{}`$ decay is defined as the angle between a lepton momentum in the $`J/\psi `$ rest frame and the $`J/\psi `$ momentum in the $`B`$ rest frame. An analogous definition was used for the $`\varphi K^+K^{}`$ decay. No conclusion can be drawn yet either about the $`J/\psi `$ and the $`\varphi `$ polarizations or about the resonant substructure of the $`BJ/\psi \varphi K`$ decay. If the $`J/\psi `$ and $`\varphi `$ mesons are the products of the hybrid charmonium $`\psi _g`$ decay, then the $`J/\psi \varphi `$ invariant mass is expected to be below the $`DD^{}`$ threshold (4.3 GeV/$`c^2`$) because $`\psi _gDD^{}`$ decay is likely to dominate above the threshold . The $`J/\psi \varphi `$ invariant mass is above 4.3 GeV/$`c^2`$ for all 10 $`BJ/\psi \varphi K`$ candidates, thus disfavoring the hybrid charmonium dominance scenario.
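For illustration, the helicity angle can be computed with sequential Lorentz boosts (lab → $`B`$ rest frame → $`J/\psi `$ rest frame). This is a hedged sketch rather than the analysis code; the boost convention and the (E, px, py, pz) 4-vector layout are assumptions.

```python
import math

def boost(p4, beta):
    """Lorentz-boost the 4-momentum p4 = (E, px, py, pz) into the frame moving
    with velocity beta = (bx, by, bz) (in units of c) relative to the current frame."""
    b2 = sum(b * b for b in beta)
    if b2 == 0.0:
        return p4
    gamma = 1.0 / math.sqrt(1.0 - b2)
    bp = sum(b * p for b, p in zip(beta, p4[1:]))      # beta . p
    g2 = (gamma - 1.0) / b2
    e_new = gamma * (p4[0] - bp)
    p_new = [p4[i + 1] + g2 * bp * beta[i] - gamma * beta[i] * p4[0] for i in range(3)]
    return (e_new, *p_new)

def rest_frame_velocity(p4):
    """Velocity of the particle's rest frame as seen from the current frame."""
    return tuple(p / p4[0] for p in p4[1:])

def cos_helicity(lepton_lab, jpsi_lab, b_lab):
    """Cosine of the angle between the lepton momentum in the J/psi rest frame
    and the J/psi momentum in the B rest frame (sequential lab -> B -> J/psi boosts)."""
    beta_b = rest_frame_velocity(b_lab)
    jpsi_in_b = boost(jpsi_lab, beta_b)
    lepton_in_b = boost(lepton_lab, beta_b)
    lepton_in_jpsi = boost(lepton_in_b, rest_frame_velocity(jpsi_in_b))
    u, v = lepton_in_jpsi[1:], jpsi_in_b[1:]
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v)))
```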
The background can be divided into two categories. The first category is the combinatorial background from $`\mathrm{{\rm Y}}(4S)B\overline{B}`$ and continuum non-$`B\overline{B}`$ events. The second category is the background from non-resonant $`BJ/\psi K^+K^{}K`$ decays.
The combinatorial background from $`\mathrm{{\rm Y}}(4S)B\overline{B}`$ events was estimated using a sample of simulated events approximately 32 times the data sample; events containing a $`BJ/\psi K^+K^{}K`$ decay were excluded. We estimated the background from $`\mathrm{{\rm Y}}(4S)B\overline{B}`$ decays to be $`0.25_{0.08}^{+0.10}`$ events. In addition, we specifically considered $`BJ/\psi K^{}\pi ^+`$ with $`K^{}K\pi ^{}`$ and $`BJ/\psi \rho ^0K`$ decays because the beam-constrained $`B`$ mass distribution for these modes is the same as for the $`BJ/\psi \varphi K`$ decays. Using data and simulated events, we verified that those backgrounds are rendered negligible by the kaon identification, $`\varphi `$ mass, and $`\mathrm{\Delta }E`$ requirements. The combinatorial background from the continuum non-$`B\overline{B}`$ events was estimated using simulated events and the data collected below $`B\overline{B}`$ threshold. We found the continuum background to be negligible.
To estimate the background contribution from the non-resonant $`BJ/\psi K^+K^{}K`$ decays, we reconstructed $`B^+J/\psi K^+K^{}K^+`$ and $`B^0J/\psi K^+K^{}K_S^0`$ candidates in data requiring $`|M(K^+K^{})M_\varphi |>20`$ MeV/$`c^2`$ to exclude $`BJ/\psi \varphi K`$ events. We observed 7 $`BJ/\psi K^+K^{}K`$ candidates with the estimated $`B\overline{B}`$ combinatorial background of 2.8 events. We estimated the mean background from $`BJ/\psi K^+K^{}K`$ decays for the $`BJ/\psi \varphi K`$ signal to be $`0.27_{0.17}^{+0.21}`$ events; we assumed that $`BJ/\psi K^+K^{}K`$ decays according to phase space.
In summary, the estimated total background for the combined $`BJ/\psi \varphi K`$ signal is $`0.52_{0.19}^{+0.23}`$ events.
We evaluated the reconstruction efficiency using a sample of simulated $`BJ/\psi \varphi K`$ decays. We assumed a uniform Dalitz distribution and isotropic decays of $`J/\psi `$ and $`\varphi `$ mesons; these assumptions are consistent with data (Figs. 4 and 5). The reconstruction efficiency, which does not include branching fractions of daughter particle decays, is $`(15.5\pm 0.2)\%`$ for the $`B^+J/\psi \varphi K^+`$ mode and $`(10.3\pm 0.2)\%`$ for the $`B^0J/\psi \varphi K_S^0`$ mode. The reconstruction efficiency is close to zero at the edges of phase space where either $`\varphi `$ or $`K`$ meson is produced nearly at rest in the laboratory frame. Thus, the overall detection efficiency would be much smaller than the above values if the $`BJ/\psi \varphi K`$ decay is dominated by either a $`J/\psi K`$ resonance with a mass around 4.3 GeV/$`c^2`$ or a $`J/\psi \varphi `$ resonance with a mass around 4.8 GeV/$`c^2`$. No such resonances are expected. To assign the systematic uncertainty due to the decay model dependence of the reconstruction efficiency, we generated two additional samples of simulated $`BJ/\psi \varphi K`$ events. One sample was generated with a uniform Dalitz distribution for $`BJ/\psi \varphi K`$ and $`100\%`$ transverse polarization for $`J/\psi `$ and $`\varphi `$. The other sample was generated assuming the $`\varphi `$ and $`K`$ mesons to be daughters of a hypothetical spin-0 resonance with mass 1.7 GeV/$`c^2`$ and width 100 MeV. We estimated the relative systematic uncertainty due to the decay model dependence of the reconstruction efficiency extraction to be 7%.
For the branching fraction calculation we assumed equal production of $`B^+B^{}`$ and $`B^0\overline{B}^0`$ pairs at the $`\mathrm{{\rm Y}}(4S)`$ resonance and $`(B^+J/\psi \varphi K^+)=(B^0J/\psi \varphi K^0)=(BJ/\psi \varphi K)`$. We did not assign any systematic uncertainty due to these two assumptions. We used the world average values of $`(J/\psi \mathrm{}^+\mathrm{}^{})`$, $`(\varphi K^+K^{})`$, and $`(K_S^0\pi ^+\pi ^{})`$ . We used the tables in Ref. to assign the 68.27% C.L. intervals for the Poisson signal mean given the total number of events observed and the known mean background. The resulting branching fraction is $`(BJ/\psi \varphi K)=(8.8_{3.0}^{+3.5}[\mathrm{stat}]\pm 1.3[\mathrm{syst}])\times 10^5`$.
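The central value of the branching fraction follows from the observed yield, the estimated background, the two reconstruction efficiencies, and the daughter branching fractions. The sketch below is a hedged illustration: it uses approximate world-average daughter branching fractions and leaves the number of $`B\overline{B}`$ pairs as a placeholder input, since that number is not quoted in the text; with the quoted efficiencies it also reproduces the charged-to-neutral sensitivity ratio of about 4.3 mentioned above.

```python
# Hedged sketch of the branching-fraction extraction described above.
# Daughter branching fractions are approximate world-average values; n_bb
# (the number of BBbar pairs in the sample) is a placeholder, not a number
# quoted in the text.

def br_jpsi_phi_k(n_signal, n_background, n_bb,
                  eff_charged=0.155, eff_neutral=0.103,
                  br_jpsi_ll=0.118,      # sum of e+e- and mu+mu- modes
                  br_phi_kk=0.491,
                  br_ks_pipi=0.686):
    # On average one charged (neutral) B meson per BBbar pair, assuming equal
    # production of B+B- and B0B0bar at the Upsilon(4S); K0 appears as K_S
    # half of the time.
    denom_charged = n_bb * eff_charged * br_jpsi_ll * br_phi_kk
    denom_neutral = n_bb * eff_neutral * br_jpsi_ll * br_phi_kk * 0.5 * br_ks_pipi
    return (n_signal - n_background) / (denom_charged + denom_neutral)

# Relative charged-to-neutral sensitivity implied by these numbers:
print(0.155 / (0.103 * 0.5 * 0.686))   # ~4.4, consistent with the ~4.3 quoted above
```

With the quoted yield of 10 events and 0.5 events of background, a $`B\overline{B}`$ sample of order $`10^7`$ pairs reproduces a central value of order $`9\times 10^5`$, in line with the result quoted above; this is only a consistency check, not a statement about the actual sample size.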
The systematic error includes the uncertainty in the reconstruction efficiency due to decay modeling plus the uncertainties in track finding, track fitting, lepton and charged-kaon identification, $`K_S^0`$ finding, background subtraction, uncertainty in the number of $`B\overline{B}`$ pairs used for this measurement, statistics of the simulated event samples, and the uncertainties on the daughter branching fractions $`(J/\psi \mathrm{}^+\mathrm{}^{})`$ and $`(\varphi K^+K^{})`$ . We estimated the total relative systematic uncertainty of the $`(BJ/\psi \varphi K)`$ measurement to be 15%.
In conclusion, we have fully reconstructed 10 $`BJ/\psi \varphi K`$ candidates with a total estimated background of 0.5 events. Assuming equal production of $`B^+B^{}`$ and $`B^0\overline{B}^0`$ pairs at the $`\mathrm{{\rm Y}}(4S)`$ resonance and $`(B^+J/\psi \varphi K^+)=(B^0J/\psi \varphi K^0)=(BJ/\psi \varphi K)`$, we have measured $`(BJ/\psi \varphi K)=(8.8_{3.0}^{+3.5}[\mathrm{stat}]\pm 1.3(\mathrm{syst}))\times 10^5`$. This is the first observed $`B`$ meson decay requiring the creation of an additional $`s\overline{s}`$ quark pair.
We gratefully acknowledge the effort of the CESR staff in providing us with excellent luminosity and running conditions. This work was supported by the National Science Foundation, the U.S. Department of Energy, the Research Corporation, the UTPA-Faculty Research Council Program, the Natural Sciences and Engineering Research Council of Canada, the A.P. Sloan Foundation, the Swiss National Science Foundation, and the Alexander von Humboldt Stiftung.
# Gibbs Entropy and Irreversible Thermodynamics
## 1 Introduction
In recent years, important connections have been made between the theory of chaotic dynamical systems and the statistical mechanics of systems in nonequilibrium stationary states. This is based on the widely accepted belief that the dynamics of the microscopic constituents of matter is chaotic, as also formally expressed by the following :
Chaotic Hypothesis (Gallavotti-Cohen, 1995): A reversible $`N`$-particle system in a stationary state can be regarded as a transitive Anosov system, for the calculation of its macroscopic properties.
Although the dynamical systems methods have led to many insights of physical interest, their application to elucidate the behavior of macroscopic systems, as done in statistical mechanics or (Irreversible) Thermodynamics, has led to difficulties which, in our opinion, have not yet been fully resolved. There seems to be, then, a qualitative difference between pure dynamics and thermodynamics (see, e.g. for some facet of this difference not considered here).
In this paper, we will try to clarify some aspects of the recently developed attempts to incorporate Irreversible Thermodynamics (IT) into the framework of dynamical systems theory. In this connection we will concentrate on the interesting recent works by Gaspard (G) ; by Breymann, Tél and Vollmer (BTV) ; and especially by Gilbert and Dorfman (GD) , who extensively investigated the connection between a coarse-grained “entropy” and IT in nonequilibrium states. Early works relevant to our discussion had already appeared in 1996, cf. .
The concept of coarse-grained entropy in the study of nonequilibrium systems has been discussed in the past. As a matter of fact, Gibbs himself introduced a coarse grained entropy to circumvent the difficulty that the Gibbs entropy $`S_G`$ (cf. eq.(5) below) does not change during the time evolution of a Hamiltonian system . Similarly, the final goal of introducing the coarse graining in could be stated as that of circumventing certain difficulties which affect the Gibbs entropy of nonequilibrium systems, thus building a complete description of all quantities occurring in IT (cf. Eq.(1) below) in purely dynamical terms. The guiding idea in this endeavor is the identification of the irreversible entropy production rate with a special form of loss of information rate, to be defined below (cf. subsection 3.1). We begin our analysis with a description of the results obtained so far with the coarse grained approach, and then consider the difficulties which we find with it. In this way, we indicate what might have to be considered further in order to obtain a consistent theory of IT.
We note that a coarse-grained description –both in space and time– is also at the basis of IT itself . Indeed, the basic equation for the entropy change in IT is \[10(a)\]
$$\frac{\mathrm{\Delta }_{tot}S}{\tau }=\frac{1}{\tau }\left[\mathrm{\Delta }_eS+\mathrm{\Delta }_iS\right],$$
(1)
where we have divided by a small but finite time $`\tau `$ to obtain the rate of entropy change. Here, $`\mathrm{\Delta }_eS`$ is the entropy exchanged by the system with its surroundings, while $`\mathrm{\Delta }_iS`$ is the entropy produced inside the system in a time $`\tau `$. This relation can be re-written in the more usual local differential form as:
$$\frac{\partial \rho s}{\partial t}=-\text{div }𝐉_{s,tot}+\sigma ;\qquad \sigma \ge 0$$
(2)
where $`\rho `$ is the density of the system, $`s`$ is the entropy per unit mass, $`𝐉_{s,tot}`$ is the total entropy flow rate per unit area corresponding to the term $`\mathrm{\Delta }_eS/\tau `$, and $`\sigma `$ is the entropy production rate per unit volume. In particular, for a diffusive system, the term $`\sigma `$ can be related to the gradients in space of the densities of the various diffusing substances. Therefore, space derivatives of various quantities appear in the expressions for the entropy flow and entropy production rates. Equation (2) can also be written as
$$\rho \frac{ds}{dt}=-\text{div }𝐉_s+\sigma ,\qquad 𝐉_s=𝐉_{s,tot}-𝐉_{s,c}$$
(3)
where $`𝐉_{s,c}=\rho s𝐯`$ is the convective flow, and $`𝐯`$ is the fluid velocity.
## 2 Gibbs entropy and (nonequilibrium) dynamical systems
We begin with a dynamical system $`(X,\varphi ^t)`$ representing the dynamics of an $`N`$-particle system in $`3`$ dimensions. Then, $`X\subset IR^{6N}`$ is the phase space of the system, and $`\varphi ^t`$ is an invertible transformation of $`X`$ into itself for all times $`t\in IR`$. Given a probability measure $`\mu _0`$ on $`X`$ at time $`0`$, the dynamics of the system induces an evolution which in terms of the measurable sets $`A\subset X`$ can be expressed by
$$\mu _t(A)=\mu _0(\varphi ^{-t}A),\qquad t\in IR.$$
(4)
This expression defines the time evolution of the probability distribution in phase space, such that the “mass” in the set $`A`$ at time $`t`$, $`\mu _t(A)`$, is the same as it was in $`\varphi ^{-t}A`$ at time zero, $`\mu _0(\varphi ^{-t}A)`$. The measure $`\mu _t`$ can be seen as characterizing the state of the particle system at time $`t`$, in the sense that the expectation values of the “observables” $`𝒪`$ of the system (e.g. smooth functions of phase, $`𝒪:X\to IR`$) are given as averages of such functions with respect to $`\mu _t`$.
Gibbs Entropy: If $`\mu _t`$ has a density $`\rho _t`$ on $`X`$, i.e. $`\mu _t(dx)=\rho _t(x)dx`$, the Gibbs entropy of the system at time $`t`$ is defined by the quantity
$$S_G(t)=-k_B\int _X\rho _t(x)[\mathrm{log}\rho _t(x)-1]𝑑x$$
(5)
where $`k_B`$ is Boltzmann’s constant.<sup>1</sup><sup>1</sup>1The constant “$`1`$” in the integrand of Eq.(5) is introduced only for consistency with the definitions of . We refer to $`S_G`$ as to a fine grained quantity to emphasize its difference from the coarse grained quantities defined below (e.g. Eq.(13)), in the sense that its definition involves an integral over $`X`$ instead of a sum over a partition by finite-volume sets of $`X`$.
Unfortunately, the stationary states of the current models of nonequilibrium physical systems, seen as dynamical systems, are represented by singular measures $`\mu `$ for which eq.(5) does not make sense . It is then argued (cf. , Section 8.6) that coarse grained entropies should be used to characterize these nonequilibrium stationary states “… especially if we want to keep the operational interpretation of entropy as a measure of disorder.” In the following subsections, we describe two classes of models, showing how singular measures arise.
### 2.1 Thermostatted systems
Consider an $`N`$-particle system whose equations of motion contain the action of an external force field $`𝐅^e`$ and compensating (“thermostatting”) terms, which eliminate the increase of the (dissipative) energy of the system due to the work performed on the particles by $`𝐅^e`$, so that the system will finally reach a nonequilibrium stationary state . The equations of motion of one such system are:
$$\{\begin{array}{ccc}\dot{𝐪}_i\hfill & =& 𝐩_i/m\hfill \\ \dot{𝐩}_i\hfill & =& 𝐅_i^i+𝐅_i^e-\alpha (x)𝐩_i\hfill \end{array}\qquad i=1,\dots ,N;\quad x\equiv (q,p)\in X\subset IR^{6N},$$
with periodic boundary conditions, so that $`X`$ can be assumed to be compact. Here $`m`$ is the mass of the particles; $`𝐅_i^i`$ and $`𝐅_i^e`$ are the forces on particle $`i`$ due to the other particles in the system and to the external field, respectively; $`x\equiv (q,p)\equiv (𝐪_i,𝐩_i)`$, $`i=1,\dots ,N`$, stands for the collection of all the positions and momenta of the particles; and $`-\alpha (x)𝐩_i`$ represents the effect of the “thermostat” on the system. The thermostatting function $`\alpha (x)`$ is obtained from Gauss’ principle of minimum constraint and is usually chosen in such a way that either the kinetic or the total energy of the system remain constant in time. We refer to such systems as thermostatted systems. For constant total energy \[isoenergetic (IE) constraint\], one obtains:
$$\alpha (x)=\alpha _{IE}(p)=\frac{\sum _{i=1}^N\frac{𝐩_i}{m}\cdot 𝐅_i^e}{\sum _{i=1}^N\frac{𝐩_i^2}{m}},$$
(6)
which shows that $`\alpha (x)`$ is of order $`O(1)`$ and is related to the dissipation or the (generalized) entropy production rate in the system.<sup>2</sup><sup>2</sup>2We speak of generalized entropy and of kinetic temperature below because our systems are not necessarily close to equilibrium. Our definition of kinetic temperature can be modified to involve the peculiar velocities, if the center of mass of the system is not at rest . Indeed, if we define the current (flux in IT) at time $`t`$ as $`𝐉_t=\langle \sum _i𝐩_i/m\rangle _t`$ (an average with respect to the time dependent distribution $`\mu _t`$) and similarly set the average $`\langle \sum _{i=1}^N\frac{𝐩_i^2}{m}\rangle _t`$ equal to $`3Nk_BT_t`$, where $`T_t`$ is the kinetic temperature of the system at time $`t`$, then, for a constant external field $`𝐅^e`$ (force in IT) and for a large system (large $`N`$) , we can write:
$$\langle \alpha _{IE}\rangle _t=\frac{𝐉_t\cdot 𝐅^e}{3Nk_BT_t},$$
(7)
which yields the IT entropy production rate per degree of freedom at time $`t`$.
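To make Eqs.(2.1), (6) and (7) concrete, the following sketch integrates a field-driven system of noninteracting particles with a Gaussian thermostat (with $`𝐅_i^i=0`$ the isoenergetic and isokinetic constraints coincide); real models, such as the driven Lorentz gas, also contain scatterer or interparticle forces. All parameter values are illustrative.

```python
import numpy as np

# Minimal sketch of a field-driven, Gaussian-thermostatted system, Eqs.(2.1),(6),(7).
rng = np.random.default_rng(1)
N, m, dt, steps, kB = 200, 1.0, 1.0e-3, 5000, 1.0
F_ext = np.array([0.5, 0.0, 0.0])            # constant external field F^e
p = rng.normal(size=(N, 3))                   # initial momenta
q = rng.uniform(size=(N, 3))                  # positions in a periodic unit box

def alpha_ie(p):
    """Gaussian multiplier alpha(x) of Eq.(6); the masses cancel."""
    return (p @ F_ext).sum() / (p * p).sum()

K0 = (p * p).sum() / (2 * m)
for _ in range(steps):
    a = alpha_ie(p)
    q = (q + dt * p / m) % 1.0                # dq/dt = p/m
    p = p + dt * (F_ext - a * p)              # dp/dt = F^e - alpha p (Euler step)

K = (p * p).sum() / (2 * m)
T = 2 * K / (3 * N * kB)                      # kinetic temperature
J = (p / m).sum(axis=0)                       # current J_t
print("relative kinetic-energy drift:", abs(K - K0) / K0)   # small (Euler error only)
print("alpha:", alpha_ie(p), " J.F/(3 N kB T):", J @ F_ext / (3 * N * kB * T))
```

For this noninteracting toy the last two printed numbers coincide identically, which is just the instantaneous version of Eq.(7); the entropy production rate of Eq.(8) is then $`3Nk_B`$ times this value.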
Starting from a distribution $`\mu _0`$ on $`X`$ with density $`\rho _0`$, the time evolution of the dissipative system Eqs.(2.1),(6) gradually rearranges the distribution, concentrating it on sets of smaller and smaller volume in phase space. This produces what is usually called a phase space contraction, together with a sequence of more and more irregular densities $`\{\rho _t\}_{t>0}`$. In the long time limit, a singular distribution is obtained, which assigns a probability of one to sets of zero phase space volume. These sets are, in general, dense in $`X`$ if the external field is not too large, but with decreasing fractal dimension for increasing fields, till they are not dense anymore at high fields (cf. for the Lorentz gas).
The rate of variation of $`S_G`$ for all $`t>0`$ is (, p.252):
$$\dot{S}_G(t)\equiv \frac{dS_G}{dt}(t)=-3Nk_B\alpha _t+O(k_B\alpha _t),$$
(8)
since the divergence of the equations of motion, Eqs.(2.1), is given by
$$\text{div}\dot{x}=-3N\alpha (x)+O(\alpha (x)).$$
(9)
We note that $`\dot{S}_G(t)`$ converges to a negative constant value, $`\dot{S}_G(t)\approx -3Nk_B\alpha _{ss}`$ for large $`N`$ and large $`t`$, where the subscript $`ss`$ in $`\alpha _{ss}`$ indicates the steady state value. The result is that $`S_G(t)`$ diverges to $`-\infty `$ as $`t\to \infty `$, and it does so in an approximately linear fashion after a given relaxation time. Equation (8) is similar in content to Eq.(16) of Goldstein, Lebowitz and Sinai for positive times.
This dynamical description of a system in a nonequilibrium state yields the IT expression for the irreversible entropy production rate at any instant of time $`t>0`$, which is obtained from Eq.(7). Surprisingly, $`\dot{S}_G(t)`$ is observed to equal precisely the negative of this irreversible entropy production at all times $`t`$, cf. Eqs.(7,8). Thus, although so far it has not been possible to identify a quantity representing the entropy of the system, a connection between IT and an appropriately constructed function, somehow related to $`S_G`$, has been discovered. This, however, is not sufficient to imply that the entropy of the system should be linked with $`S_G`$. On the contrary, as discussed below in Section 4, the asymptotic divergence of $`S_G`$ suggests in fact that attempts to find such a link are likely to fail.
### 2.2 Multibaker maps with flux boundaries
A different class of nonequilibrium models is represented by finite multibaker chains coupled at both ends to infinite “reservoirs” , i.e. chains with flux boundary conditions. These models give rise, in the “macroscopic limit”,<sup>3</sup><sup>3</sup>3We put in quotes “reservoirs” and “macroscopic limit”, as they are crucial for a connection with IT as explained below. to stationary states characterized by singular measures in phase space, and are thought to behave similarly, on some respects, to certain ideal gas systems, such as the Lorentz gas considered by G in Chapter 8 of .
Several variations of these multibaker systems have been considered. We follow G’s definitions first. The space of the multibaker map with flux boundaries $`X`$ is made of a chain of squares $`B_n=[0,1]\times [0,1]`$, $`n\in \{\dots ,-2,-1,0,1,2,\dots \}`$, each placed at one site of an infinite one-dimensional lattice, as depicted in Fig. 1. The central section of the chain, whose squares are labelled by $`n=0,1,\dots ,L`$, represents a system coupled to two reservoirs: one at its left boundary (the squares labelled by $`-1,-2,\dots `$), and the other at its right boundary (the squares labelled by $`L+1,L+2,\dots `$). Each $`B_n`$ contains a certain number of points, thought to represent noninteracting particles, distributed according to a given distribution $`\mu (n,x,y)`$ defined on $`X`$, whose time evolution is defined in different ways for the system and the reservoirs respectively. In practice, one time step moves the point $`(n,x,y)`$ (the point $`(x,y)`$ of $`B_n`$) to the point $`\varphi (n,x,y)`$, where
$$\varphi (n,x,y)=\{\begin{array}{ccc}(n-1,2x,\frac{y}{2}),\hfill & 0\le x<1/2,& \hfill 1\le n\le L+1\\ (n+1,2x-1,\frac{y+1}{2}),\hfill & 1/2\le x\le 1,& \hfill -1\le n\le L-1\\ (n-1,x,y),\hfill & 0\le x<1/2,& \hfill n\le 0,n\ge L+2\\ (n+1,x,y),\hfill & 1/2\le x\le 1,& \hfill n\le -2,n\ge L\end{array}$$
(10)
as depicted in Fig. 1. This dynamics is area preserving. Starting with appropriate initial point distributions in the infinite chain, one obtains a system of points coupled to two reservoirs, which feed points into the system at the fixed densities, $`\rho _+`$ (the left reservoir) and $`\rho _{}`$ (the right reservoir). By density we simply mean the number of points per unit area, in each region of $`X`$. During the time evolution a certain density profile is created, possibly converging to an invariant distribution in the long time limit.
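A direct implementation of the map of Eq.(10) is straightforward; the sketch below encodes the baker action for moves that land inside the system and plain translation in the reservoirs, with the branch ranges read off the reconstruction of Eq.(10) given above (they should be checked against the original definition).

```python
def multibaker(n, x, y, L):
    """One step of the multibaker map with flux boundaries, Eq.(10).
    The system occupies cells 0..L; the baker action (squeeze and push) is
    applied to moves that land inside the system, plain translation elsewhere."""
    if x < 0.5:                               # left-moving half of the cell
        if 1 <= n <= L + 1:
            return (n - 1, 2 * x, y / 2)      # baker action
        return (n - 1, x, y)                  # translation in the reservoirs
    else:                                     # right-moving half
        if -1 <= n <= L - 1:
            return (n + 1, 2 * x - 1, (y + 1) / 2)
        return (n + 1, x, y)

# Follow one phase point for a few iterations in a chain with L = 4
state = (2, 0.73, 0.41)
for _ in range(6):
    state = multibaker(*state, L=4)
    print(state)
```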
Because infinitely many points are required for the reservoirs, the measure $`\mu `$ is not normalized. However, a probability distribution $`\chi `$ (cf. Eq.(39) below) can still be given for this system, considering the Poisson suspension measure associated with $`\mu `$ . In this case, the phase space $``$ of the system of “independent” points is part of the power set $`𝒫(X)`$ of the multibaker space $`X`$. In $``$, a “Gibbs entropy” can be defined as usual, if $`\chi `$ is not singular.
In the stationary state considered in the density profile is made of two kinds of strips only: those having density $`\rho _+`$ and those having density $`\rho _{}`$, which are separated by straight line segments. In the squares which are closer to the left reservoir, the strips with density $`\rho _+`$ dominate, while those with density $`\rho _{}`$ dominate at the other end of the system, so that $`\mu (B_n)`$ is linear in the squares’ label $`n`$, for $`0\le n\le L`$. As long as $`L`$ is finite, the corresponding Poisson measure $`\chi `$ is not singular. However, in order to obtain results which serve the purpose of nonequilibrium statistical mechanics, singular measures are needed in G’s approach (cf. p. 384). These are obtained in through a “macroscopic limit”, defined by $`L\to \infty `$ and $`(\rho _+-\rho _{})/L=`$ constant. In this limit, the invariant distribution $`\mu `$ becomes singular: the strips with the two different densities become thinner and thinner and more and more numerous, while $`\rho _+`$ grows without bounds. The corresponding Poisson measure $`\chi `$ is also singular, hence $`S_G`$ cannot be defined.
An interesting generalization of G’s model was proposed by BTV . The baker space $`X`$ now consists of a chain of identical rectangles of sides $`a`$, in the horizontal direction and $`b`$ in the vertical direction, respectively, Fig. 2. The boundary conditions can still be implemented by two infinite reservoirs as above. The dynamics are also slightly more general (Fig. 2). Each rectangle is divided in three vertical strips of horizontal widths $`la`$, $`sa`$ and $`ra`$ (from left to right), respectively, where $`l,s,r0`$, and $`l+s+r=1`$. Each rectangle is also subdivided into three horizontal strips of width $`a`$ and heights $`rb`$ (bottom strip), $`sb`$ (central strip) and $`lb`$ (top strip). The leftmost strip of rectangle $`m`$ is compressed and expanded and moved to fit the bottom horizontal strip of rectangle $`m-1`$ (nearest left neighbour); the central vertical strip of rectangle $`m`$ remains in rectangle $`m`$, but is stretched and compressed so that it fits in the central horizontal strip; the rightmost vertical strip of rectangle $`m`$ is stretched and compressed to fit the top horizontal strip of rectangle $`m+1`$ (right nearest neighbour). The same procedure is applied to each rectangle of the system, while the points of the reservoirs are merely translated to the left and the right, without volume compression or expansion, like in .<sup>4</sup><sup>4</sup>4G’s original model then corresponds to taking $`l=r=1/2`$, and $`s=0`$.
Accordingly, after one time step, the distribution has changed in the chain, and with that the density of points in each rectangle as well as in each strip has changed. Let $`\varrho _m`$ be the density in rectangle $`m`$. This density evolves in the system like
$$\varrho _m(t+\tau )=(1-r-l)\varrho _m(t)+r\varrho _{m-1}(t)+l\varrho _{m+1}(t).$$
(11)
Again, the invariant distribution $`\mu `$ in the baker space $`X`$ and the associated Poisson measure $`\chi `$, are singular and $`S_G`$ is not defined. However, the mechanism through which the singularities are created is not by taking a macroscopic limit like in , but by a combination of phase space contraction and boundary effects.
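The coarse-grained density evolution of Eq.(11), supplemented with fixed reservoir densities at the two ends, can be iterated directly; the parameter values in the sketch below are illustrative.

```python
import numpy as np

# Iterate Eq.(11) for a chain of L_cells baker rectangles coupled to reservoirs
# of fixed density rho_plus (left) and rho_minus (right).
r, l = 0.25, 0.20                 # right- and left-transfer fractions (s = 1 - r - l)
rho_plus, rho_minus = 2.0, 1.0
L_cells, steps = 50, 20000

rho = np.full(L_cells, rho_minus)
for _ in range(steps):
    padded = np.concatenate(([rho_plus], rho, [rho_minus]))
    rho = (1 - r - l) * padded[1:-1] + r * padded[:-2] + l * padded[2:]

print(rho[:5], rho[-5:])          # steady profile between rho_plus and rho_minus
```

For $`r=l`$ the stationary profile is linear in the cell label, as in G’s model; for $`rl`$ it takes the exponential shape characteristic of a biased random walk between fixed boundary densities.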
## 3 The coarse-grained approach
To avoid the fact that $`S_G`$ is not defined in the current models of nonequilibrium stationary states, as discussed in Subsections 2.1 and 2.2, several attempts have been made to replace $`S_G`$ by a coarse grained information entropy<sup>5</sup><sup>5</sup>5To clearly distinguish between the physical entropy of IT and dimensionless information related entropies, we follow Nicolis and Daems , and choose to call information entropy a dimensionless entropy-like quantity. which takes finite values in the case of both non-singular and singular distributions . This approach is invoked in order a) to give a precise meaning to the concept of nonequilibrium entropy ; b) to properly handle the singularities of the stationary states, without giving up the interpretation of entropy as a measure of disorder p.370; c) to have a microscopic definition of the entropy production rate which agrees with that \[Eqs.(1)-(3)\] of IT . This would amend the restriction encountered in the usual thermostatted systems approach, where only the irreversible entropy production appears and not the complete description of IT, as given in Eq.(1). Here, we will follow GD’s approach and notation, which generalizes to some extent the previous ones, and emends some aspects of the original definitions of .
GD first consider a generating partition, $`𝒜`$, for the phase space $`X`$. Then a discretization of the time evolution by time steps of length $`\tau `$, is introduced to produce finer and finer partitions $`𝒜_{\mathrm{},k}`$:
$$𝒜_{\mathrm{},k}=\varphi ^{l\tau }(𝒜)\varphi ^{(l1)\tau }(𝒜)\mathrm{}𝒜\mathrm{}\varphi ^{(k1)\tau }(𝒜)$$
(12)
by taking the intersections of the cells of $`𝒜`$ evolved by the dynamics of $`\varphi ^\tau `$ up to $`k1`$ time steps forward in time and up to $`\mathrm{}`$ time steps backwards in time.<sup>6</sup><sup>6</sup>6These partitions are intended to be rigid frames into which the phase space $`X`$ is subdivided once and for all. They are therefore not affected by the time evolution of the system, although the dynamics has been used to construct the partitions. Thus, once the partition $`𝒜_{\mathrm{},k}`$ has been made, it remains in place without any change, independently of the dynamical evolution of the system which takes place “through it”. The symbol $``$ indicates the intersection of all the sets of a given partition with those of another one. In particular, we have $`𝒜_{\mathrm{}+1,k}=\varphi ^\tau 𝒜_{\mathrm{},k}𝒜_{\mathrm{},k}`$. Also, GD indicate by $`\mu _t`$ the phase space distribution and by $`\nu `$ the Liouville measure.
GD information entropy: Consider all the sets of the form $`B=\bigcup _iE_i`$, with $`E_i\in 𝒜_{\mathrm{},k}`$, i.e. all the sets which are unions of the cells of $`𝒜_{\mathrm{},k}`$. On these sets the GD coarse-grained information entropy $`S_{\mathrm{},k}^{GD}(B,t)`$ is defined by
$$S_{\mathrm{},k}^{GD}(B,t)=-\underset{A𝒜_{\mathrm{}+1,k}B}{\sum }\mu _t(A)\left[\mathrm{log}\frac{\mu _t(A)}{\nu (A)}-1\right].$$
(13)
where the sum is carried out over all $`A\in 𝒜_{\mathrm{}+1,k}`$ whose union is $`B`$.
The relation between $`S_{\mathrm{},k}^{GD}`$ and $`S_G`$, in the case that $`\mu _t`$ has a density $`\rho _t`$, is then given by:
$$S_G(t)\equiv k_BS_I(t)=k_B\underset{\mathrm{},k\to \infty }{lim}S_{\mathrm{},k}^{GD}(X,t),$$
(14)
where we also defined the fine grained information entropy $`S_I`$. Hence, for regular measures, the coarse grained entropies approximate better and better their fine-grained counterparts, when the graining of phase space is made finer and finer. On the contrary, if $`\mu _t`$ is singular, $`S_G`$ and $`S_I`$ do not exist, while $`S_{\mathrm{},k}^{GD}`$ for any $`\mathrm{},kIN`$ does.
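Equation (14) is easy to illustrate numerically for a regular measure. The sketch below evaluates the fine-grained information entropy of a smooth one-dimensional density (with $`\nu `$ the Lebesgue measure and $`k_B=1`$) and the coarse-grained entropy of Eq.(13) on partitions of $`M`$ equal cells; as $`M`$ grows, the coarse-grained values approach the fine-grained one from above. The density used is only an example.

```python
import numpy as np

rho = lambda x: 1.0 + 0.8 * np.sin(2 * np.pi * x)   # normalized density on [0,1]

Npts = 204800                                       # divisible by all M below
xm = (np.arange(Npts) + 0.5) / Npts                 # midpoints for the quadrature
w = rho(xm) / Npts                                  # midpoint weights, sum(w) ~ 1

# fine-grained S_I = -int rho (log rho - 1) dx, Eq.(5) with k_B = 1
S_fine = -np.sum(w * (np.log(rho(xm)) - 1.0))

for M in (4, 16, 64, 256, 1024):
    mu = w.reshape(M, -1).sum(axis=1)               # mu(A) for M equal cells
    S_coarse = -np.sum(mu * (np.log(mu * M) - 1.0)) # Eq.(13) with nu(A) = 1/M
    print(M, round(S_coarse, 6), "  fine-grained:", round(S_fine, 6))
```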
The total rate of information entropy change in a time $`\tau `$ is then defined by:
$`{\displaystyle \frac{\mathrm{\Delta }_{tot}S_{\mathrm{},k}^{GD}(B,t)}{\tau }}`$ $`=`$ $`{\displaystyle \frac{1}{\tau }}\left[S_{\mathrm{},k}^{GD}(B,t+\tau )-S_{\mathrm{},k}^{GD}(B,t)\right]`$ (15)
$`=`$ $`-{\displaystyle \underset{A𝒜_{\mathrm{}+1,k}B}{\sum }}\left[{\displaystyle \frac{\mu _t(\varphi ^\tau A)}{\tau }}\mathrm{log}{\displaystyle \frac{\mu _t(\varphi ^\tau A)}{\nu (A)}}-{\displaystyle \frac{\mu _t(A)}{\tau }}\mathrm{log}{\displaystyle \frac{\mu _t(A)}{\nu (A)}}\right]`$
where one has used Eq.(4), $`\mu _{t+\tau }(A)=\mu _t(\varphi ^\tau A)`$, to get the second equality. This rate of change is then decomposed by GD into a sum of three terms:
$$\frac{\mathrm{\Delta }_{tot}S_{\mathrm{},k}^{GD}}{\tau }=\frac{1}{\tau }\left[\mathrm{\Delta }_eS_{\mathrm{},k}^{GD}+\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}+\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}\right],$$
(16)
where, $`\mathrm{\Delta }_eS_{\mathrm{},k}^{GD}(B)`$ is called the change in information entropy due to the flow between $`B`$ and its environment, $`\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}(B)`$, the change in information entropy due to a thermostat in contact with the system, and $`\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}(B)`$, that due to irreversible information entropy production in the system. This separation is based on an interpretation of thermostatted equations of motion for particle systems such as Eqs.(2.1), where the thermostatting term is seen as representing a real thermostat. We remark that $`S_{\mathrm{},k}^{GD}`$ is defined in terms of the phase space distribution $`\mu _t`$, hence changes of this distribution in phase space imply changes in $`S_{\mathrm{},k}^{GD}`$.
In particular, the information entropy change rate due to flow is defined by GD as originally done by G by:
$`{\displaystyle \frac{\mathrm{\Delta }_eS_{\mathrm{},k}^{GD}(B,t)}{\tau }}`$ $`=`$ $`{\displaystyle \frac{1}{\tau }}\left[S_{\mathrm{},k}^{GD}(\varphi ^\tau B,t)S_{\mathrm{},k}^{GD}(B,t)\right]`$ (17)
$`=`$ $`{\displaystyle \underset{A𝒜_{\mathrm{}+1,k}B}{}}\left[{\displaystyle \frac{\mu _t(\varphi ^\tau A)}{\tau }}\mathrm{log}{\displaystyle \frac{\mu _t(\varphi ^\tau A)}{\nu (\varphi ^\tau A)}}{\displaystyle \frac{\mu _t(A)}{\tau }}\mathrm{log}{\displaystyle \frac{\mu _t(A)}{\nu (A)}}\right],`$
while the change due to the thermostat is defined by:
$`{\displaystyle \frac{\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}(B,t)}{\tau }}`$ $`=`$ $`{\displaystyle \frac{1}{\tau }}\left[S_{\mathrm{},k}^{GD}(B,t+\tau )S_{\mathrm{}+1,k1}^{GD}(\varphi ^\tau B,t)\right]`$ (18)
$`=`$ $`{\displaystyle \frac{1}{\tau }}{\displaystyle \underset{Aϵ𝒜_{\mathrm{}+1,k}B}{}}\mu _t(\varphi ^\tau A)\mathrm{log}{\displaystyle \frac{\nu (\varphi ^\tau (A))}{\nu (A)}}.`$
In the first equality of Eq.(18), the partition $`𝒜_{\mathrm{},k}`$ is compared with its preimage under $`\varphi ^\tau `$, i.e. with $`𝒜_{\mathrm{}+1,k1}=\varphi ^\tau 𝒜_{\mathrm{},k}`$, which should correspond to a different degree of resolution of the phase space. The term $`\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}(B,t)/\tau `$ in Eq.(16) is then deduced from Eq.(16) itself, once the other terms have been defined by the Eqs.(15),(17),(18).
### 3.1 Gilbert–Dorfman results
We first discuss the connection of GD’s theory with IT. For that, the term $`\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}(B,t)/\tau `$ is crucial. Consider thereto the case in which $`B=X`$, and the system is in a stationary state characterized by the natural (invariant) measure $`\mu `$. If we denote by $`𝒥`$ the Jacobian determinant of the transformation $`\varphi ^\tau `$, we can write
$$\nu (\varphi ^\tau (A))=_{\varphi ^\tau (A)}𝑑x=\frac{\nu (A)}{𝒥(\varphi ^\tau (x__A))}$$
(19)
where, under the assumption that the dynamics $`\varphi ^\tau `$ are smooth, $`x__A`$ is determined by the mean value theorem. Now, letting the graining of phase space become infinitely fine (i.e. letting $`\mathrm{},k\mathrm{}`$), we obtain:
$$\frac{\mathrm{\Delta }_{th}S_I}{\tau }\equiv \underset{\mathrm{},k\to \infty }{lim}\frac{\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}(X)}{\tau }=\int _X\mathrm{ln}𝒥(x)\mu (dx)=\underset{j=1}{\overset{6N}{\sum }}\lambda _j$$
(20)
where the $`\lambda _j`$’s are the Lyapunov exponents determined by the dynamics $`\varphi ^\tau `$. The sum of the Lyapunov exponents is negative and $`\mu `$ is singular with respect to the Lebesgue measure if the system is strictly dissipative . Hence, $`\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}(X)/\tau `$, for sufficiently large $`\mathrm{}`$ and $`k`$, will also be negative in such a case.
Combining this with the assertion that $`\mathrm{\Delta }_eS_{\mathrm{},k}^{GD}(X)`$ and $`\mathrm{\Delta }_{tot}S_{\mathrm{},k}^{GD}(X)`$ both vanish in the stationary state, allows GD to set:
$$\frac{\mathrm{\Delta }_iS_I}{\tau }\equiv \underset{l,k\to \infty }{lim}\frac{\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}(X)}{\tau }=-\underset{l,k\to \infty }{lim}\frac{\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}(X)}{\tau }=-\underset{j=1}{\overset{6N}{\sum }}\lambda _j.$$
(21)
so that the irreversible entropy production $`\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}`$ has the proper positive sign for dissipative systems. The irreversible production is here obtained as the “loss of information” about the probability distribution in going from one level of resolution (that of $`𝒜_{\mathrm{}+1,k}`$) to another (that of $`𝒜_{\mathrm{}+2,k1}`$) level of resolution in the graining of phase space.<sup>7</sup><sup>7</sup>7We put in quotes “loss of information” to stress the fact that this does not directly correspond to the usual (Kolmogorov-Sinai) loss of information of dynamical systems theory. In fact, the second level of resolution is not necessarily coarser than the first, it is merely different. If $`𝒜`$ is a Markov partition, then $`𝒜_{\mathrm{}+2,k1}`$ is coarser than $`𝒜_{\mathrm{}+1,k}`$ in the stable directions . We remark that in the definition of $`\mathrm{\Delta }_eS_{\mathrm{},k}^{GD}`$, Eq.(17), and hence of $`\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}`$, the term $`S_{\mathrm{},k}^{GD}(\varphi ^\tau B,t)`$ appears. However, there may be no collection of cells $`A`$ of $`𝒜_{\mathrm{},k}`$ whose union is the set $`\varphi ^\tau B`$. For this reason the finer partition $`𝒜_{\mathrm{}+1,k}`$ had to be introduced in the definition of the GD information entropy Eq.(13) (cf. Figure 3).
### 3.2 Gaspard’s results
In G’s book a review of his previous work is given, in which a special kind of partitions was considered: partitions whose cells all have the same phase space volume $`\epsilon `$. This was only for simplicity and does not change the substance of the results. Therefore, we will denote G’s partition with the symbol $`𝒜`$.
Gaspard only considers systems of independent points coupled to infinite reservoirs (thought to represent driven systems of noninteracting particles) and proceeds with the construction of a Poisson suspension measure $`\chi `$, from which a coarse grained entropy can be defined. One can see that this coarse grained entropy reduces to the GD information entropy plus a rest term (cf. Eq.(41) below). Gaspard then argues that this rest term can be made small with respect to the GD information entropy by taking the size of the partition cells sufficiently small. Therefore, the rest term may be neglected, and G’s calculations are then equivalent to GD’s calculations. In particular, the term called $`\epsilon `$-entropy flow by G, Eq.(8.105) of , is nothing other than GD’s $`\mathrm{\Delta }_eS_{0,1}^{GD}`$, if one starts from a partition $`𝒜`$ made of equal cells of size $`\epsilon `$ and takes $`𝒜_{\mathrm{},k}`$ with $`\mathrm{}=0,k=1`$. The same holds for the $`\epsilon `$-entropy production, Eq.(8.106) of , which equals GD’s $`\mathrm{\Delta }_iS_{0,1}^{GD}`$. On the other hand, Gaspard only uses dynamics which are phase space volume preserving , and he does not consider the term $`\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}`$.
Applying G’s theory to the case of multibaker dynamics one obtains the following relation, Eq.(8.125) of :
$$\mathrm{\Delta }_iS_{0,1}^{GD}=D\frac{(\nabla \rho )^2}{\rho }+\text{ higher order terms}$$
(22)
where $`D`$ is the diffusion coefficient in the multibaker space $`X`$, $`\rho `$ is the density of the points moving through $`X`$ via baker-dynamics, while $`\nabla \rho `$ is the corresponding stationary state gradient of $`\rho `$ imposed by the presence of the unequal density in the boundary reservoirs. Note that the independence of the points, which allows the construction of a Poisson suspension, is crucial here to pass from a description in the phase space $``$ to the “1-point” (thought to be 1-particle) space $`X`$, making the operator $`\nabla `$ a gradient in real space.
The quantity $`\mathrm{\Delta }_iS_{0,1}^{GD}`$ then turns out to have the desired form expected from IT for diffusion, which can be related to the (baker map) diffusion coefficient $`D`$ by, Eq.(8,126) of :
$$\underset{\epsilon \to 0}{lim}\underset{(\nabla \rho /\rho )\to 0}{lim}\underset{L\to \infty }{lim}\frac{\rho }{(\nabla \rho )^2}\mathrm{\Delta }_iS_{0,1}^{GD}=D>0,$$
(23)
where $`L`$ is the size of the system between the two reservoirs. Equations (22,23) represent the first instance in which IT-like expressions for diffusive systems were derived from an area-preserving map.
### 3.3 The Breymann-Tél-Vollmer results
In the more general multibaker model considered by BTV, two kinds of coarse grained information entropies were defined: one, $`S_m^{BTV,c}`$, using the densities of points in each rectangle of area $`ab`$, and another one, $`S_m^{BTV,C}`$, using the single horizontal strips (of area $`alb`$, $`asb`$ and $`arb`$ respectively) of each rectangle, Fig.2. The collection of baker rectangles constitutes one of the two partitions of the system considered by BTV ($`𝒜_{\mathrm{},k}`$ in GD’s notation), while the collection of the three horizontal strips of all rectangles constitutes the other partition ($`𝒜_{\mathrm{}+1,k}`$). The two coarse grained quantities are
$`S_m^{BTV,c}=-ab\varrho _m\mathrm{log}{\displaystyle \frac{\varrho _m}{\varrho ^{}}}`$ (24)
$`S_m^{BTV,C}=-arb\varrho _{m,b}\mathrm{log}{\displaystyle \frac{\varrho _{m,b}}{\varrho ^{}}}-asb\varrho _{m,c}\mathrm{log}{\displaystyle \frac{\varrho _{m,c}}{\varrho ^{}}}-alb\varrho _{m,t}\mathrm{log}{\displaystyle \frac{\varrho _{m,t}}{\varrho ^{}}},`$ (25)
where $`\varrho ^{}`$ is a constant reference density, $`\varrho _{m,b}`$ is the coarse-grained density on the bottom horizontal strip of cell $`m`$, $`\varrho _{m,c}`$ is the coarse-grained density on the central strip and $`\varrho _{m,t}`$ is the coarse-grained density on the top strip of cell $`m`$. The variation in time of the coarse grained entropy $`S_m^{BTV,c}`$ is split into two terms: the flow term
$$\mathrm{\Delta }_eS_m^{BTV,c}(t)=S_m^{BTV,C}(t+\tau )-S_m^{BTV,C}(t),$$
(26)
which is assumed to be the same as the total variation $`S_m^{BTV,C}`$; and the irreversible information entropy production term
$$\mathrm{\Delta }_iS_m^{BTV,c}(t)=\left(S_m^{BTV,c}(t+\tau )-S_m^{BTV,C}(t+\tau )\right)-\left(S_m^{BTV,c}(t)-S_m^{BTV,C}(t)\right).$$
(27)
The sum of $`\mathrm{\Delta }_eS_m^{BTV,c}`$ and $`\mathrm{\Delta }_iS_m^{BTV,c}`$ is then the total variation of $`S_m^{BTV,c}`$ in one time step.
The next important ingredient of BTV’s approach is the macroscopic limit for these multibaker models. This uses an expansion up to second order derivatives in terms of the horizontal coordinate $`x`$ for the density:
$$\varrho (x\pm a)=\varrho (x)\pm a\partial _x\varrho (x)+\frac{a^2}{2}\partial _x^2\varrho (x).$$
(28)
Furthermore, the system is seen as a biased random walk on the line, so that one can attribute to it a given drift velocity $`v`$ and a given diffusion coefficient $`D`$ for each choice of $`r`$ and $`l`$. The quantities $`r,l,v,D`$ can then be used to define a scaling for the parameters $`a`$ and $`\tau `$:
$$r=\frac{\tau D}{a^2}\left(1+\frac{av}{2D}\right),\qquad l=\frac{\tau D}{a^2}\left(1-\frac{av}{2D}\right)$$
(29)
so that a meaningful fine grained limit is obtained, in which both $`a`$ and $`\tau `$ tend to zero. Then, the following expression results for the irreversible information entropy production :
$$\sigma ^{BTV}=\frac{\varrho }{D}\left(v-D\frac{\nabla \varrho }{\varrho }\right)^2$$
(30)
which, in the case with $`r=l=1/2`$, i.e. $`v=0`$, reduces to the same formula as given by Gaspard, Eq.(22) above, except for the higher order terms present in Eq.(22).
Three observations are in order here. 1) The special choice of partitions made by BTV is not strictly necessary to obtain the BTV results. Different choices which are closer to GD’s $`𝒜_{\mathrm{},k}`$ are possible, as explained in the Appendix of . 2) It is possible to keep higher order corrections in the calculations sketched above, so that terms corresponding to G’s higher order terms of Eq.(22) can also be found within the BTV approach (cf. Ref.). 3) The scaling given by Eqs.(29) is such that the relaxation time difficulty discussed below in Section 4.2 does not appear in the BTV’s approach. This scaling has been recently adopted also by Gaspard and Tasaki in .
## 4 Difficulties
The results presented in the previous sections, give rise to a number of questions some of which we will discuss here, concentrating on their implications for a consistent dynamical theory of IT. In particular we will try to identify the range of validity of the results obtained, pointing out which problems should in our opinion still be clarified or overcome.
### 4.1 The phase space difficulty
Relations such as Eqs.(22),(30) for multibaker maps look similar to those obtained in IT where real-space gradients appear. However, this is somewhat misleading and due to the simplicity of the map, whose phase space has in practice only one active dimension, (the direction of the density gradient in or the direction of “transport” in ), and to the assumption that the multibaker dynamics are valid substitutes for independent particle systems. In that case, indeed, there can be two situations: a) the system is infinite and can be described by a Poisson distribution; b) the system is finite, and the many-particle distribution factorizes. In both situations we are allowed to go from a description in the phase space $``$ to a description in the 1-point space $`X`$, where there is only one active (real-space) dimension. Then, entropy-like quantities can only flow in this direction, giving necessarily rise to real-space expression such as Eqs.(22),(30).<sup>8</sup><sup>8</sup>8Note that if the $`1`$-dimensional chain of baker maps is replaced by a $`d`$-dimensional lattice of baker maps, the active dimensions are $`d`$, and flows only occur in the $`d`$-dimensional real-space. This point will be examined further in Subsection 4.4.
Some difficulty emerges when this approach has to be applied to a wider class of models than that of multibaker maps. In particular, interacting particle systems are not compatible with this approach. To study these situations some improvement of the presently developed approach is required. Indeed, following the general definitions and derivations given in , one immediately realizes that in principle the flows and the gradients computed there are all in terms of phase-space variables, and not in terms of real-space variables: $`\mathrm{\Delta }_eS_{\mathrm{},k}^{GD}(B)`$ represents a flow through the phase space volume $`B`$. Therefore, $`\mathrm{\Delta }_eS_{\mathrm{},k}^{GD}`$ given by Eq.(13) could be seen as the substitute for the flow $`𝐉_S^G`$ (Eq.(8.85) in )
$$𝐉_S^G=-\left(\rho \mathrm{log}\rho \right)\dot{x},$$
(31)
which takes place in phase space, in the case that the state of the system is represented by a singular measure, for which the Gibbs entropy is not defined. In turn, $`𝐉_S^G`$ is reminiscent of the convective entropy flow of IT, cf. $`𝐉_{s,c}`$ defined below Eq.(3), so that $`\mathrm{\Delta }_eS_{\mathrm{},k}^{GD}(B)`$ could be thought of as representing $`𝐉_{s,c}`$.<sup>9</sup><sup>9</sup>9In the case that the space of the system and of the reservoirs are combined, as for multibaker models, this flow term would account for the total entropy flow. However, the IT entropy flow takes place in real space, not in phase space, and the phase space cannot be reduced to real space if there are interacting particles, or if there is a flow in momentum space. Therefore, the diffusion coefficient $`D`$ present e.g. in Eqs.(23),(30), in a more general context would concern diffusion in phase space rather than in real space. It is not clear, then, how the Gibbs entropy flow or its coarse grained substitute introduced in , could be related to the IT entropy flow.
### 4.2 The relaxation times difficulty
In IT, the relaxation times of given processes, i.e. their approach from an initial state to a stationary state, are directly related to the transport coefficients. Obviously, speaking of relaxation times, one should first indicate which physical quantities are observed to relax, and which tolerance is accepted in assessing the relaxation. In general, when dealing with particle systems, the relaxation time is intended to be determined by the relatively short Maxwell relaxation time, $`\tau _M`$, which is typically the time of a few collisions per particle. In fact, the main physical observables approximate within measurable errors their stationary values in such a time. In any case, given the observables in which one is interested (e.g. smooth functions of phase) and the relaxation tolerance, the times needed for these observables to approach precisely enough the relevant limiting values are determined by the dynamics alone. On the contrary, the coarse grained quantities discussed in have relaxation times which strongly depend on the size of the cells of the coarse graining partitions.
This is similar to the problem of portraying the relaxation to equilibrium of a Hamiltonian system, by means of a coarse grained version of the Gibbs entropy . However, in our case the situation appears worse. Indeed, at least in equilibrium the Gibbs entropy equals the physical entropy of the system. On the contrary, in the case of systems evolving towards nonequilibrium stationary states, characterized by singular measures, $`S_G`$ is not even defined in the stationary state. Hence, a coarse grained version of the Gibbs entropy in the study of relaxation towards nonequilibrium stationary states is at risk of being even less meaningful than in the case of relaxation towards an equilibrium state. We illustrate these facts with a simple example.
Why doesn’t the Gibbs entropy exist in the nonequilibrium stationary states of systems such as those described in Section 2? We have already seen that, starting from an initial state represented by a regular measure, hence with a given initial value of the Gibbs entropy, the time evolution is such that $`S_G(t)\to -\infty `$ as $`t\to \infty `$. But we can see in more detail why this happens by considering a simplified model of a thermostatted system of the kind discussed in Section 2.1. The idea remains valid in general. Let the initial state be an equilibrium state, described by the microcanonical ensemble in a volume of size $`1`$, and let the stationary distribution be confined to a small (not dense) fractal region of phase space. This situation corresponds to a case with high forcing and consequent high dissipation. Let us take a phase space partition $`𝒜_{\mathrm{},k}`$, made of $`M`$ cells of equal size $`ϵ=1/M`$. The corresponding initial coarse grained information entropy then satisfies
$$S_{\mathrm{},k}^{GD}(X,0)=1.$$
(32)
In the following time evolution leading to a nonequilibrium stationary state, the overall phase space contraction due to dissipation makes the probability distribution gradually concentrate on smaller and smaller regions of phase space,<sup>10</sup><sup>10</sup>10Because we assume the attractor not to be dense in $`X`$, if the graining is sufficiently fine, there are cells of the partition which contain parts of the attractor and other cells which do not. until it differs from zero only on a number $`L=L(M)<M`$ of cells of the partition. Assuming for simplicity that the probability to find the system in each of these $`L`$ cells is the same, we then get
$$S_{ss}(M)\equiv \underset{t\to \infty }{lim}S_{\mathrm{},k}^{GD}(X,t)=-L\left[\frac{1}{L}\mathrm{log}\left(\frac{1/L}{1/M}\right)-\frac{1}{L}\right]=\mathrm{log}\left(\frac{L(M)}{M}\right)+1.$$
(33)
Therefore, since the fraction $`L(M)/M`$ tends to zero when $`M`$ tends to infinity, we have $`S_{ss}(M)\to -\infty `$ for $`(1/M)=ϵ\to 0`$. In other words, the fact that the phase space probability distribution is rearranged by the time evolution, so that sets of zero volume take a probability of $`1`$ in the stationary state, makes the Gibbs entropy diverge to $`-\infty `$, indicating that there is no connection between the Gibbs entropy and the physical entropy of the system. This is still true even if the dissipation is small, and all sets of measure $`1`$ are dense in phase space.
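The argument of Eqs.(32),(33) can be checked with a toy attractor. In the sketch below the stationary measure is spread uniformly over the $`L(M)=2^k`$ occupied cells of a middle-thirds Cantor-like set out of $`M=3^k`$ equal cells (an assumption made only for illustration); the coarse-grained entropy starts at $`1`$ and decreases without bound as the graining is refined.

```python
import math

def coarse_entropy(k):
    """Coarse-grained information entropy, Eq.(13) with B = X, for a measure
    spread uniformly over L = 2**k occupied cells out of M = 3**k equal cells."""
    M = 3 ** k          # number of cells of size epsilon = 3**-k
    L = 2 ** k          # occupied cells, each carrying probability 1/L
    mu, nu = 1.0 / L, 1.0 / M
    return -L * mu * (math.log(mu / nu) - 1.0)     # = log(L/M) + 1

for k in (0, 2, 4, 8, 16):
    print(k, coarse_entropy(k))    # 1.0 at k = 0, then ever more negative
```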
Let us consider then, in more general terms, one thermostatted system evolving towards a nonequilibrium stationary state, whose initial state is represented by a regular measure $`\mu _0`$, for which the Gibbs entropy $`S_G(0)`$ is defined. In the following evolution, $`S_G(t)`$ gradually diverges to $`-\infty `$, but at any positive time $`t`$ the distribution $`\mu _t`$ remains regular, and the corresponding Gibbs entropy can be approximated better and better by finer and finer coarse grained entropies, as in Eq.(14). Because of the divergence of $`S_G`$ and because the size of the partition cells needed in the definition of $`S_{\mathrm{},k}^{GD}`$ can be taken arbitrarily small, the total information entropy change, Eq.(15), can be kept different from zero during arbitrarily long times. Indeed, by taking finer and finer partitions, $`\mathrm{\Delta }_{tot}S_{\mathrm{},k}^{GD}(X,t)/\tau `$ will approach better and better, and for longer and longer times, the rate of decrease of the fine grained information entropy $`S_I=S_G/k_B`$, given by Eq.(8), which has a definite negative value of order $`O(N)`$ ($`\approx -3N\alpha _{ss}`$). Now, for every fixed $`t\ge 0`$ (which could exceed $`\tau _M`$ by any amount), the state of the system is represented by a probability measure $`\mu _t`$ which has a density $`\rho _t`$. Hence, given any tolerance $`\delta >0`$, and any time increment $`\tau >0`$, there will be an $`ϵ_{\delta ,\tau }>0`$ such that (cf. Fig.4):
$$\left|S_{\mathrm{},k}^{GD}(X,t)-S_G(t)/k_B\right|<\delta \text{ and }\left|S_{\mathrm{},k}^{GD}(X,t+\tau )-S_G(t+\tau )/k_B\right|<\delta ,$$
(34)
if the size of the cells of the partition $`𝒜_{\mathrm{},k}`$ is smaller than $`ϵ_{\delta ,\tau }`$. It follows that
$$\frac{\mathrm{\Delta }_{tot}S_{\mathrm{},k}^{GD}(X,t)}{\tau }=\frac{S_G(t+\tau )-S_G(t)}{k_B\tau }+O\left(\frac{\delta }{\tau }\right)=\frac{1}{k_B}\frac{dS_G}{dt}+O(\tau )+O\left(\frac{\delta }{\tau }\right)=O(N),$$
(35)
instead of $`\mathrm{\Delta }_{tot}S_{\mathrm{},k}^{GD}(X,t)/\tau \approx 0`$, since we can take that $`O(\delta /\tau )`$ is $`O(1)`$ or less, because $`\tau `$ is fixed a priori.
Therefore, unphysical, partition dependent, relaxation times have been introduced through the coarse-graining procedure, which are extraneous to the dynamics of the system.
The approach of BTV seems to avoid the problem of the relaxation times, because of the way its authors defined their macroscopic limit, cf. subsection 3.3. In this approach one does not take finer and finer partitions of each baker map rectangle; one only increases the number of rectangles between the reservoirs reducing their side $`a`$ and the length of the time step $`\tau `$, in such a way that Eqs.(29) are verified. Then, the fact that the number of time steps $`n`$ has to increase in order for the entropy to reach its stationary value could be balanced by the decrease of $`\tau `$, so that $`n\tau `$ may converge to a finite value. From this point of view, then, the BTV macroscopic limit should be preferred.
### 4.3 The difficulty of unphysical definitions
The relaxation times problem points out further difficulties: the very definition of the entropy flow and irreversible entropy production could be flawed. Indeed, the total rate of variation of the real IT entropy, $`\mathrm{\Delta }_{tot}S`$, relaxes to its stationary value zero in the relatively short time $`\tau _M`$, implying that this rate of variation becomes (and remains) smaller than a small $`\delta `$ within a time of order $`O(\tau _M)`$. Therefore, for any arbitrarily chosen time $`t`$ larger than $`\tau _M`$, if the cells of the partition are smaller than $`ϵ_{\delta ,\tau }`$, Eq.(35) yields:
$$k_B\frac{\mathrm{\Delta }_{tot}S_{\mathrm{},k}^{GD}(X,t)}{\tau }-\frac{\mathrm{\Delta }_{tot}S(t)}{\tau }=k_BO(N),$$
(36)
where the second term on the l.h.s. is $`O(\delta )`$ or less, and the first is of order $`k_BO(N)`$. Assuming with GD that $`\mathrm{\Delta }_eS_{\mathrm{},k}^{GD}(X,t)=0`$, one can rewrite Eq.(36), with (16) and (1), in the form
$$\frac{k_B}{\tau }\left[\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}(X,t)+\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}(X,t)\right]-\frac{1}{\tau }\left[\mathrm{\Delta }_eS(t)+\mathrm{\Delta }_iS(t)\right]=k_BO(N).$$
(37)
In particular, consider a value $`t`$ which is not necessarily exceedingly large, but larger than $`\tau _M`$. Without taking an extremely fine partition, and recalling that then $`k_BO(N)`$ is approximately equal to $`-\mathrm{\Delta }_iS/\tau `$, as seen in subsection 2.1, we can write
$$\frac{k_B}{\tau }\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}(X,t)\approx \frac{1}{\tau }\mathrm{\Delta }_eS(t)-\frac{k_B}{\tau }\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}(X,t).$$
(38)
But this contradicts IT, according to which the quantity on the left hand side of Eq.(38), being the overall coarse grained entropy flow, should approximately equal only the first term on the right hand side. Therefore, at least either $`\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}`$ or $`\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}`$ cannot be correct. For, if $`\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}`$ is of order $`O(N)`$, as the irreversible entropy production should be, then $`\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}`$ is not the external entropy flow, while it should be, and if $`\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}`$ is not of order $`O(N)`$, then it cannot be an irreversible entropy production.
Therefore, the agreement of $`\mathrm{\Delta }_iS_{\mathrm{},k}^{GD}`$ and $`\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}`$ in the stationary state with their IT counterparts, which is achieved only after a coarse-graining dependent relaxation time, appears accidental. This suggests that the very definitions of the various terms on the right hand side of Eq.(16) cannot be physically correct. Again, the macroscopic limit of BTV may fix this problem.
### 4.4 The multibaker space difficulty
Simple dynamical systems such as the multibaker maps are very useful in understanding many aspects of chaotic dynamics. In a sense we could say that they play in this context a role similar to that of exactly soluble models in equilibrium statistical mechanics. However, this solvability is often achieved only at the price of a degenerately simple structure of the models themselves, which, in the case of multibaker chains, becomes a cause for concern when one wants to identify certain features of the multibaker dynamics with known IT properties of real systems.
In particular, in order to speak of a quantity in some way related to the Gibbs entropy, one would need a phase space, in which (at least up to canonical transformations) half of the dimensions represent the “configurations” of the system in space and the other half represents the “momenta”. Then, in that phase space a coarse grained information entropy can be defined, which multiplied by $`k_B`$ and in the limit of fine graining becomes the Gibbs entropy itself (if it exists). To do this in a multibaker chain one has to identify position and momentum variables. These are assumed by BTV to be represented by the horizontal (along the chain) direction, and by the vertical (along the thickness of the chain) direction, respectively. If this identification is correctly carried out, then the multibaker phase space, being $`2`$-dimensional, could only be a substitute for a $`1`$-particle model in one dimension. Alternatively, the $`1`$-particle distribution could be used to describe a gas of identical noninteracting particles, perhaps in the presence of obstacles, as in the Lorentz gas. However, even the picture of the gas of independent particles is at odds with the BTV interpretation of the multibaker dynamics: points at different heights along the vertical direction of the baker rectangles can move in exactly the same way under the baker dynamics of the model, while points which are at the same height can move in totally different directions. Therefore, the vertical direction has nothing to do with momentum space.
This problem could perhaps be fixed by interpreting the baker dynamics and phase space differently. As in , one could assume that the multibaker phase space mimics a Poincaré section of a particle system such as the Lorentz gas. The dynamics is followed from rectangle to rectangle, just as a moving particle in the Lorentz gas is followed from collision to collision. In that case, the problem of identifying the momentum variables is not so important anymore. However, two new problems emerge. In the first place, the coarse grained entropy of the system should be expressed in terms of all the phase space variables, and not just in terms of the variables of the Poincaré section. Therefore, the contributions due to the direction of the flow need to be worked out. However, perhaps more importantly, with this interpretation one would also lose the possibility of taking the BTV macroscopic limit, because one cannot assume that particle collisions occur at a rate which is coarse graining dependent. This is because in G’s interpretation one time step is the time elapsed between two collisions, but the time step goes to zero as the graining is made finer and finer in the BTV macroscopic limit.
### 4.5 Factorizability, entropy and entropy production
We now try to understand under which conditions the results obtained for multibaker models in the 1-point space are valid for independent many-points systems. This leads us also to an analysis of the relationship between the definitions of entropy and entropy production rate, given in -. We follow , also in order to point out some subtleties of that approach.
Consider a distribution $`\mu `$ in the 1-point baker space. The associated Poisson suspension, corresponding to a “gas” of infinitely many independent points , is characterized by the probability measure
$$\chi (C_{B,N})=\frac{\mu (B)^N}{N!}e^{-\mu (B)};\text{with }\chi (C_{B,N}\cap C_{B^{\prime },N^{\prime }})=\chi (C_{B,N})\chi (C_{B^{\prime },N^{\prime }})\text{if }B\cap B^{\prime }=\mathrm{\varnothing },$$
(39)
where $`\chi (C_{B,N})`$ is the probability of finding $`N`$ points in the 1-point volume $`B`$, and $`C_{B,N}`$ is the corresponding set in the phase space of the Poisson suspension. It is in this phase space that G can define his coarse grained $`\epsilon `$-entropy for a boundary driven system. If then $`\{B_i\}`$ is a partition of the 1-point space in cells of volume $`\epsilon `$, this entropy takes the form (cf. Eqs.(8.98),(8.99) of ):
$`S_\epsilon (\{B_i\})`$ $`=`$ $`-\sum _i\sum _{N=0}^{\mathrm{\infty }}\chi (C_{B_i,N})\mathrm{log}\chi (C_{B_i,N})`$ (40)
$`=`$ $`\sum _i\mu (B_i)\mathrm{log}{\displaystyle \frac{e}{\mu (B_i)}}+\mathcal{R}(\epsilon )`$ (41)
where the rest term is
$$\mathcal{R}(\epsilon )=\sum _ie^{-\mu (B_i)}\sum _{N=0}^{\mathrm{\infty }}\frac{\mu (B_i)^N}{N!}\mathrm{log}N!.$$
(42)
The terms inside the external sum are of order $`O(\mu (B_i)^2)`$ and higher, hence can be neglected in Eq.(41) if $`\mu (B_i)`$ is small. This is obtained for a multibaker system of length $`L`$, by taking sufficiently fine partitions, i.e. $`B_i`$ of sufficiently small volume.
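A quick numerical illustration of this point (a sketch of ours, for a single cell, with the values of $`\mu (B_i)`$ chosen arbitrarily) compares the leading term of Eq.(41) with the corresponding contribution to $`\mathcal{R}(\epsilon )`$:

```python
import math

def log_poisson(mu, n):
    # log of chi(C_{B,N}) = mu^N e^{-mu} / N!
    return -mu + n * math.log(mu) - math.lgamma(n + 1)

def cell_entropy(mu, nmax=400):
    # exact single-cell contribution  -sum_N chi log chi
    return -sum(math.exp(log_poisson(mu, n)) * log_poisson(mu, n) for n in range(nmax))

def rest_term(mu, nmax=400):
    # single-cell contribution to the rest term:  e^{-mu} sum_N mu^N log(N!) / N!
    return sum(math.exp(log_poisson(mu, n)) * math.lgamma(n + 1) for n in range(nmax))

for mu in (0.01, 0.1, 1.0, 10.0):
    leading = mu * (1.0 - math.log(mu))   # mu log(e/mu)
    print(f"mu = {mu:6.2f}   leading = {leading:9.4f}   "
          f"rest = {rest_term(mu):9.4f}   exact = {cell_entropy(mu):9.4f}")
```

For small $`\mu (B_i)`$ the rest term is indeed negligible, while for $`\mu (B_i)`$ of order one or larger it dominates the contribution of the cell.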
This step is of fundamental importance for the IT-like results of to be valid for a many-point system. In fact, these results are obtained using the first term of Eq.(41) only, neglecting $`\mathcal{R}(\epsilon )`$. However, in principle, $`\mathcal{R}(\epsilon )`$ could be large compared to the first term in Eq.(41), because the macroscopic limit ($`L\to \mathrm{\infty }`$ for multibaker maps) has to be taken before the fine graining limit (cf. discussion below Eq.(8.126) of ). This means that, however small the volume of $`B_i`$ might be, as long as it does not vanish, its measure $`\mu (B_i)`$ will be large in general, since the density will be large, making $`\mathcal{R}(\epsilon )`$ also large. In this case, neglecting $`\mathcal{R}(\epsilon )`$ implies that the IT-like results are not derived from the many-points distribution $`\chi `$, but instead from a kind of information entropy defined through the 1-point distribution $`\mu `$. Now, because $`\mu `$ is not normalized, it cannot be a factor of a many-points distribution, making those results valid only for a 1-point system. (To have a normalized 1-point distribution, one would have to implement the boundary conditions in a different way, using, e.g., a compact phase space with bulk dissipative dynamics, like in thermostatted systems, or with biased dynamics in certain regions, e.g. the walls, of the system .)
Gaspard overcomes this difficulty in an ingenious way, by splitting the fine graining limit into two steps: the limit of vanishing linear size of the partition cells along the unstable direction is taken before the macroscopic limit, while the remaining limit of vanishing linear size of the cells $`B_i`$ along the stable direction is taken after the macroscopic limit, so that the macroscopic limit “interrupts” the fine graining process. This way, the volumes of the partition cells are made vanishing before the macroscopic limit is taken, $`\mathcal{R}(\epsilon )`$ can be neglected and the results presented in Section 3. follow.
However, the fine graining limit, in particular the part taken before the macroscopic limit, comes at the cost of losing the coarse grained information entropy (which then diverges). Similar considerations hold also for the results of BTV and GD, therefore it seems that the knowledge of both the entropy and its production rate cannot be given together with the present approaches, as already noted for the thermostatted approach.
### 4.6 The thermostat difficulty
The need for the term $`\mathrm{\Delta }_{th}S_{\mathrm{},k}^{GD}(X)`$ in Eq.(15) was deduced by GD from the fact that the information entropy flow $`\mathrm{\Delta }_eS_{\mathrm{},k}^{GD}(X)`$ cannot represent an entropy flow between the system and its environment, if the system is closed or periodic. Nevertheless, this seems to be at odds with IT. Indeed, $`\alpha (x)𝐩_i`$ is merely introduced to enable the externally driven dynamical system to reach a stationary state as is done automatically by a thermostat in $`\mathrm{\Delta }_eS`$ in IT. In fact, it is for that reason that $`\alpha (x)𝐩_i`$ is usually referred to as the “thermostat” of the system. However, dynamically, this term has nothing to do with a real thermostat and, in fact, it appears in Eqs.(2.1) as a Lagrange multiplier, due to the application to the $`N`$-particle system of Gauss’ (purely dynamical) principle of minimum constraint, to make the system preserve its kinetic or total energy in the course of time. In the derivation of Eqs.(2.1), no use is made of the properties of any other dynamical system constituting a thermostat. Therefore, an interpretation of $`\alpha (x)𝐩_i`$ as representing an actual physical thermostat, which absorbs the dissipative energy created in the system by the external forces $`𝐅^e`$, and has an explicit representation in the entropy balance Eq.(15), is an interpretation which appears to have no basis in the purely dynamical nature of the equations (2.1) themselves.
What can be said, instead, is that Eqs.(2.1) serve as a convenient tool to describe a system in a nonequilibrium state by purely dynamical means, without incurring the technical difficulties posed by infinitely large reservoirs. That a real system would not settle on a nonequilibrium state without the presence of a real thermostat, seems irrelevant in the analysis of the dynamics of Eqs.(2.1).
## 5 Discussion
1. The above discussion on the coarse grained approach to a complete dynamical theory of IT pointed out difficulties which we found in the current formulations. Therefore it seems to us that a coarse grained entropy approach based on $`S_G`$ does not provide at present a satisfactory connection with IT. The same can be said about thermostatted systems. However, for the latter systems the irreversible entropy production is at least unambiguously known at any time: in the transient as well as in the stationary state. This is not the case in the coarse grained description. Indeed, we pointed out various difficulties which affect the treatments of IT provided by BTV, G and GD. The approach of BTV could avoid the problems connected with the transient states, and it is worthwhile to further study this topic, but the phase space dynamics seems to be very special. On the other hand, the approach of G and GD was intended to describe stationary states only , despite the full time dependent treatment they give .
2. The identification, in thermostatted systems, of the other contributions occurring in IT, beyond the irreversible entropy production term, does not seem possible in an obvious way. On the basis of our analysis we would argue that, so far, the dynamics of thermostatted systems allows us only to identify the irreversible entropy production rate. It is obvious that a stationary state of a real system with a given irreversible entropy production rate will be accompanied by an equal and opposite divergence of an entropy flow. Nevertheless, this does not emerge from the dynamics of thermostatted systems. The connection of the dynamical properties of thermostatted systems with the term div$`𝐉_{S,tot}`$ occurring in IT therefore remains unclear.
3. Although the idea of a possible connection between coarse-graining, information loss and entropy changes discussed here is very intriguing, as far as we can tell, it does not seem to work in its present form for macroscopic systems, as long as one connects it with $`S_G`$, which diverges to $`-\mathrm{\infty }`$. The fact that the rate of change $`\dot{S}_G`$ equals the irreversible entropy production rate of thermostatted systems does not seem to be a sufficient reason to assume that $`S_G`$ itself has any direct connection with the entropy of such a system. Moreover, the connection of the information loss used here with the usual (Kolmogorov-Sinai) information loss, if any, and its relevance for the calculation of the IT entropy is also not clear to us. Therefore, it seems to us that further study of the connection of the dynamics of particle systems in nonequilibrium states and IT is still required.
## Acknowledgements
We would like to thank F. Bonetto, C.P. Dettmann, J.R. Dorfman, P. Gaspard, T. Gilbert, T. Tél and J. Vollmer for inspiring discussions. LR gratefully acknowledges support from GNFM-CNR (Italy) and from MURST (Italy). EGDC gratefully acknowledges support from the US Department of Energy under grant DE-FG02-88-ER13847.
## Figure captions
Figure 1. One time step in the evolution of the infinite multibaker chain. The squares with labels $`0,1,\mathrm{\dots },L`$ constitute the system. The others constitute the reservoirs. One time step corresponds to one application of $`\varphi `$, which moves the points with a given shade to the points with the same shade. This time evolution is volume preserving, hence it does not affect the density of points. Starting from any initial distribution, we see how the densities of the baths enter into the system. In the stationary state, only the blackest and the white densities fill the cells of the system.
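A minimal numerical companion to this caption (a sketch of ours: it evolves only the cell-averaged densities, assuming the symmetric transfer rule $`l=r=1/2`$ mentioned in the caption of Fig. 2, with the two reservoir densities held fixed):

```python
import numpy as np

L = 10                                      # cells 0, 1, ..., L as in Fig. 1
rho_left, rho_right = 1.0, 0.0              # fixed reservoir densities
rho = np.full(L + 1, 0.5)                   # arbitrary initial bulk density

for t in range(400):                        # one iteration = one application of phi
    padded = np.concatenate(([rho_left], rho, [rho_right]))
    rho = 0.5 * (padded[:-2] + padded[2:])  # half of each neighbour enters the cell

# stationary state: the linear profile interpolating between the reservoirs
expected = np.linspace(rho_left, rho_right, L + 3)[1:-1]
print(np.abs(rho - expected).max())         # close to zero after relaxation
```

The projected densities relax to the linear profile, i.e. in the stationary state the occupation of the cells is fixed entirely by the two reservoir densities.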
Figure 2. One time step of the evolution of the BTV multibaker model. Unlike in Fig.1, there is phase space contraction here if $`l\ne r`$. The same dynamics of Fig.1 is obtained if $`s=0`$ and $`l=r=1/2`$. One time step moves the points of the different vertical strips of rectangle $`m`$ with a given shade to rectangle $`m-1`$, $`m`$ or $`m+1`$, respectively, with the same shade.
Figure 3. From left to right we have the partition $`𝒜`$, in the baker square $`m`$, the partition $`\varphi ^{-1}𝒜`$, and the partition $`\varphi ^{-1}𝒜\vee 𝒜`$, in the baker square $`m+1`$, respectively. The preimage $`\varphi ^{-1}A`$ of every set $`A`$ in cell $`m`$, which is the union of cells of $`𝒜`$, can be partitioned by cells of $`\varphi ^{-1}𝒜\vee 𝒜`$, while it cannot be partitioned by cells of $`𝒜`$ itself.
Figure 4. In the left panel is depicted the decay of the IT entropy from a transient to a stationary state, which takes a time of the order of the Maxwell relaxation time (or is determined by an appropriate transport coefficient). Here we have assumed that the initial (equilibrium) entropy is higher than the steady state entropy. The right panel shows the decay of the coarse grained entropies for various partition sizes (curves labelled by $`(1)`$, $`(2)`$ and $`(3)`$), and the divergence of the Gibbs entropy (thickest line). All the coarse grained entropies start from the same value, and eventually settle on a plateau. However, they remain close (within a distance $`\delta `$, say) to $`S_G`$ for longer and longer times if the relevant partitions are finer and finer. Curve $`(1)`$ corresponds to the coarsest partition. The region delimited by curve $`(2)`$, by $`S_G`$ and by the two vertical (solid line) segments is made of points whose distance from $`S_G`$ is less than $`\delta `$.
no-problem/9908/hep-ph9908225.html
ar5iv
text
# Normalization Constants of Large Order Behavior
## Abstract
A perturbation scheme is discussed for the computation of the normalization constant of the large order behavior arising from an ultraviolet renormalon. In this scheme the normalization constant is expressed in a convergent series that can be calculated using the ordinary perturbative expansion and the nature of the renormalon singularity.
| PACS numbers: 11.15.Bt, 11.10.Jj, 11.25.Db |
| --- |
| Keywords: renormalon, large order behavior |
The large order behavior in field theories arising from a renormalon is generally given in the form
$$a_n=Kn!n^\nu b_0^{-n}[1+O(1/n)]\qquad \text{for }n\to \mathrm{\infty }.$$
(1)
While the constants $`\nu `$ and $`b_0`$ are calculable, the normalization constant $`K`$ cannot be determined exactly. An infinite number of renormalon diagrams contribute to it, but it is not known how to sum such diagrams to all orders .
Though the normalization constant cannot be determined exactly, we shall see that it can be calculated perturbatively to an arbitrary precision. The large flavor ($`N_f`$) expansion is often invoked for an approximate evaluation of the normalization, and in the literature it is further asserted that the normalization cannot be computed without resorting to it . However, the $`N_f`$ expansion may not be considered a systematic perturbation scheme, as it is not proven that it gives a convergent series or is even compatible with nonabelian gauge theories. At large $`N_f`$, asymptotic freedom is lost, and so it is probably incompatible with asymptotically free gauge theories.
We note, however, that there is a systematic method that is both compatible with nonabelian gauge theories and gives a convergent series. In we have discussed a scheme which expresses the normalization constant of an infrared (IR) renormalon as a convergent series that depends only on the strength of the renormalon singularity and the ordinary perturbative coefficients of the amplitude under consideration. A sample calculation in QCD using the radiative calculations up to three loops shows that our method gives a rather quickly convergent series . The purpose of this letter is to extend the scheme to the case of an UV renormalon.
Let us first review the perturbation scheme briefly in the case of an IR renormalon. To be specific, we consider a Green’s function $`D(\alpha )`$, such as the Adler function, in QCD and its expansion
$$D(\alpha )=\sum _{n=0}^{\mathrm{\infty }}a_n\alpha ^{n+1}.$$
(2)
The Borel transform $`\stackrel{~}{D}`$ is then defined as follows. We first define it in the neighborhood of the origin as
$$\stackrel{~}{D}(b)=\sum _{n=0}^{\mathrm{\infty }}\frac{a_n}{n!}b^n,$$
(3)
and then by analytically continuing to the whole $`b`$-plane. $`\stackrel{~}{D}(b)`$ is known to have IR renormalon singularities at $`b=-n/\beta _0`$, $`n=2,3,4,\mathrm{\dots }`$ and UV renormalon singularities at $`b=n/\beta _0`$, $`n=1,2,3,\mathrm{\dots }`$, where $`\beta _0`$ is the first coefficient of the $`\beta `$-function. Throughout this letter we assume there are no other singularities associated with $`\stackrel{~}{D}(b)`$ except for those caused by the instantons which are irrelevant for our discussion. Now consider the first IR renormalon at $`b=-2/\beta _0`$. The nature of the first IR renormalon singularity is given in the form
$$\stackrel{~}{D}(b)\approx \frac{\widehat{D}}{\left(1+\frac{\beta _0b}{2}\right)^{1+\nu }}\qquad \text{for }b\to -\frac{2}{\beta _0},$$
(4)
where $`\nu =2\beta _1/\beta _0^2`$, and $`\beta _1`$ is the second coefficient of the $`\beta `$-function. Then the large order behavior caused by the renormalon is given in the form (1) with $`K=\widehat{D}/\nu !`$ and $`b_0=-2/\beta _0`$. Note that the normalization constant becomes the residue of the singularity in the $`b`$-plane. Thus we can equivalently work with the renormalon residue in order to study the normalization constant.
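A one-line check of this identification, using only the binomial expansion of Eq.(4) and the large-$`n`$ asymptotics $`\mathrm{\Gamma }(n+1+\nu )/\mathrm{\Gamma }(n+1)\approx n^\nu `$ (the $`O(1/n)`$ corrections are dropped), reads

$$\widehat{D}\left(1+\frac{\beta _0b}{2}\right)^{-(1+\nu )}=\widehat{D}\sum _{n=0}^{\mathrm{\infty }}\frac{\mathrm{\Gamma }(n+1+\nu )}{\mathrm{\Gamma }(1+\nu )n!}\left(-\frac{\beta _0}{2}\right)^nb^n\qquad \text{so that}\qquad a_n\approx \frac{\widehat{D}}{\nu !}n!n^\nu \left(-\frac{\beta _0}{2}\right)^n,$$

which is Eq.(1) with $`K=\widehat{D}/\nu !`$ and $`b_0=-2/\beta _0`$.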
The perturbative calculation of the residue is based on the observation that the residue of the closest singularity to the origin in the complex plane can be expressed as a convergent series involving only the Taylor expansion at the origin. Assuming that the strength of the singularity is known, all one needs to do to calculate the residue is to move the singularity under consideration, by a conformal mapping, closer to the origin than any other singularity in the Borel plane.
This scheme works as follows in the case of the first IR renormalon. The closest singularity to the origin in the $`b`$-plane is the UV renormalon at $`b=1/\beta _0`$. Using the conformal mapping
$$z=\frac{-\beta _0b}{1-\beta _0b}$$
(5)
we can make the first IR renormalon the closest singularity to the origin. In the $`z`$-plane the first IR renormalon assumes the form
$$\stackrel{~}{D}(b(z))\approx \frac{\left(\frac{2}{9}\right)^{1+\nu }\widehat{D}}{\left(\frac{2}{3}-z\right)^{1+\nu }}\qquad \text{for }z\to \frac{2}{3}.$$
(6)
Now consider a function defined by
$$R(z)=\stackrel{~}{D}(b(z))\left(\frac{2}{3}-z\right)^{1+\nu }.$$
(7)
At the IR renormalon singularity, $`R(z)`$ could still be singular, but it is bounded. The residue is then given by
$$\left(\frac{2}{9}\right)^{1+\nu }\widehat{D}=R(z)|_{z=\frac{2}{3}}.$$
(8)
Since $`R(z)`$ is analytic on the disk $`|z|<2/3`$ we can write (8) in a series form by expanding it at $`z=0`$
$$\left(\frac{2}{9}\right)^{1+\nu }\widehat{D}=\sum _{n=0}^{\mathrm{\infty }}r_nz^n|_{z=\frac{2}{3}}.$$
(9)
It is easy to see that $`r_n`$ depends only on $`a_i`$ with $`i\le n`$, and is thus calculable.
One might question the convergence of the series (9), since it is evaluated at the renormalon singularity, at which $`R(z)`$ could be singular. However, it should be noted that the finiteness of $`R(z)`$ at the singularity guarantees the convergence. A numerical evaluation of the series using the 3-loop calculation of the Adler function in QCD shows a quick convergence for small $`N_f`$ . For example, the first three elements of the series are 0.904, -0.358, 0.003 for $`N_f=2`$, and 0.946, -0.354, -0.098 for $`N_f=3`$.
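The construction of the series (9) is mechanical enough to automate. The following sketch is our own illustration: the function name, the toy input coefficients and the placeholder values of $`\beta _0`$ and $`\nu `$ are not taken from the text and have to be replaced by the actual perturbative input. It composes the truncated Borel transform with the mapping (5), multiplies by $`(2/3-z)^{1+\nu }`$ as in (7), and evaluates the partial sums of (9) at $`z=2/3`$.

```python
import sympy as sp

def residue_partial_sums(a, beta0, nu):
    """Partial sums of the series in Eq.(9), evaluated at z = 2/3.

    a     : perturbative coefficients a_0, a_1, ... of Eq.(2), as many as are known
    beta0 : first beta-function coefficient (beta0 < 0 in the convention used here)
    nu    : exponent appearing in Eq.(4)
    """
    n = len(a)
    b, z = sp.symbols('b z')
    borel = sum(sp.nsimplify(a[i]) / sp.factorial(i) * b**i for i in range(n))
    b_of_z = z / (beta0 * (z - 1))   # inverse of the map z = -beta0*b / (1 - beta0*b)
    R = (borel.subs(b, b_of_z) * (sp.Rational(2, 3) - z)**(1 + nu)).series(z, 0, n).removeO()
    sums, total = [], sp.Integer(0)
    for k in range(n):
        total += R.coeff(z, k) * sp.Rational(2, 3)**k
        sums.append(float(total))
    return sums

# toy illustration with made-up numbers (placeholders, not the QCD input quoted above)
print(residue_partial_sums(a=[1, 2, 7], beta0=-sp.Rational(9, 4), nu=sp.Rational(13, 27)))
```

The partial sums returned by such a routine are the analogue of the numbers quoted above; their stabilization with increasing order signals the convergence of the series.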
This example demonstrates that a renormalon residue can be expressed in a convergent series once the nature of the singularity is known. We now show that a similar conclusion can be made with a UV renormalon. In the following we shall assume that $`\beta _0<0`$, for definiteness, and focus exclusively on the first UV renormalon since it gives the dominant large order behavior. The structure of the UV renormalon is a little more complicated. According to Parisi, it is determined by an insertion of dim=6 operators . To be specific, let us consider a Green’s function $`A(\alpha )`$ and its Borel transform $`\stackrel{~}{A}(b)`$. Generally the Borel transform has a branch cut beginning at the first UV renormalon on the negative real axis, and a quantity defined by
$$\text{Im}A(\alpha )=\underset{ϵ\to 0}{lim}\frac{1}{2i}\int _{-\mathrm{\infty }}^0db\,e^{\frac{b}{\alpha }}\left[\stackrel{~}{A}(b+iϵ)-\stackrel{~}{A}(b-iϵ)\right]$$
(10)
is nonvanishing. The Parisi ansatz states that the dominant contribution to $`\text{Im}A(\alpha )`$, for $`\alpha \to 0_+`$, arises from an insertion of dim=6 operators, and is given in the form,
$$\text{Im}A(\alpha )=\sum _{i=1}^{M}f_i(\alpha )O_i(\alpha )+O(e^{\frac{2}{\beta _0\alpha }})$$
(11)
where the index $`i`$ runs over all dim=6 operators $`O_i`$. And $`f_i`$ satisfy the renormalization group equation
$$\left[\left(\beta (\alpha )\frac{d}{d\alpha }-1\right)\delta _{ij}-\gamma _{ij}(\alpha )\right]f_j(\alpha )=0$$
(12)
where $`\gamma _{ij}`$ denotes the anomalous dimension of the associated dim=6 operators. An explicit implementation of the Parisi ansatz may be found in . Solving the RG equation one can write formally
$$\text{Im}A(\alpha )=\sum _{i}^{M}\frac{\pi \beta _0K_i}{\mathrm{\Gamma }(\nu _i)}e^{\frac{1}{\beta _0\alpha }}\left(\frac{\alpha }{\beta _0}\right)^{1-\nu _i}\left[1+\sum _{j=1}^{\mathrm{\infty }}\frac{\mathrm{\Gamma }(\nu _i)C_{ij}}{\mathrm{\Gamma }(\nu _i-j)}\left(\frac{\alpha }{\beta _0}\right)^j\right]+O(e^{\frac{2}{\beta _0\alpha }})$$
(13)
where $`K_i`$ are undetermined constants while $`\nu _i`$ depend on $`\beta _0`$, $`\beta _1`$ and the one loop anomalous dimension, and $`C_{ij}`$ are calculable constants depending on the higher order corrections to $`\beta (\alpha )`$, $`\gamma (\alpha )`$ and $`O_i(\alpha )`$. Note that the summation within the bracket is not well defined, but this point will be irrelevant in the following discussion.
The Borel transform corresponding to (13) is then given by
$$\stackrel{~}{A}(b)\approx \sum _{i}^{M}K_i(1-\beta _0b)^{-\nu _i}\left[1+\sum _{j=1}^{\mathrm{\infty }}C_{ij}(1-\beta _0b)^j\right]$$
(14)
in the neighborhood of the singularity at $`b=1/\beta _0`$, and the corresponding large order behavior is given in the form
$$a_n=\sum _{i}^{M}\frac{K_i}{\mathrm{\Gamma }(\nu _i)}n!n^{\nu _i-1}\beta _0^n[1+O(1/n)].$$
(15)
Our aim is to express the prefactors $`K_i`$ in a calculable, convergent series. Without loss of generality, we may assume $`\nu _i>\nu _j`$ for $`i<j`$. For the moment we shall also assume that all $`\nu _i>0`$, and comment later on the case of a negative $`\nu _i`$. Then it is straightforward to write $`K_1`$, the prefactor of the most dominant term in the large order behavior, in a convergent series. Since
$$K_1=\stackrel{~}{A}(b)(1-\beta _0b)^{\nu _1}|_{b=\frac{1}{\beta _0}},$$
(16)
we can obtain a series expression for $`K_1`$ by expanding the function on the r.h.s. at $`b=0`$. The resulting series is then convergent because the function is finite at the singularity and there is no other singularity within the disk $`|b|\le 1/|\beta _0|`$.
For the prefactors other than $`K_1`$ we get a linear relation among them. Using (14) it is easy to write $`K_i`$ as
$$K_i=[h_i(b)+\sum _jm_{ij}(b)K_j]|_{b=\frac{1}{\beta _0}}$$
(17)
where
$$h_i(b)=\stackrel{~}{A}(b)(1-\beta _0b)^{\nu _i}$$
(18)
and
$`m_{ij}=\{\begin{array}{cc}-(1-\beta _0b)^{\nu _i-\nu _j}\left[1+\sum _{k=1}^{\left[\nu _j-\nu _i\right]}C_{jk}(1-\beta _0b)^k\right],& \text{for }i>j\\ 0,& \text{for }i\le j,\end{array}`$ (21)
with $`\left[\nu _j-\nu _i\right]`$ being an integer satisfying
$$0\le \nu _j-\nu _i-\left[\nu _j-\nu _i\right]<1.$$
(22)
To solve eq. (17) we introduce $`r_i(b)`$ defined by
$$r_i(b)=\sum _j[\delta _{ij}-m_{ij}(b)]K_j-h_i(b).$$
(23)
Note that by definition $`r_i(b)`$ vanishes at the renormalon singularity. From (23) we obtain
$$K_i=\sum _j[1-m^{(N)}]_{ij}^{-1}(h_j^{(N)}+r_j^{(N)})$$
(24)
where $`f^{(N)}`$ for a function $`f(b)`$ denotes the $`N`$-th order Taylor expansion evaluated at the singularity, i.e,
$$f^{(N)}=\sum _{n=0}^{N}\frac{d^nf(0)}{db^n}\frac{\beta _0^{-n}}{n!}.$$
(25)
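A small intermediate result makes the expressions below plausible: for a pure power $`f(b)=(1-\beta _0b)^{-\lambda }`$ with non-integer $`\lambda `$, whose role here is played by the differences $`\nu _j-\nu _i`$, the truncation (25) can be summed in closed form,

$$f^{(N)}=\sum _{n=0}^{N}\frac{\mathrm{\Gamma }(n+\lambda )}{\mathrm{\Gamma }(\lambda )n!}=\frac{\mathrm{\Gamma }(N+1+\lambda )}{\mathrm{\Gamma }(\lambda +1)\mathrm{\Gamma }(N+1)}\approx \frac{N^\lambda }{\mathrm{\Gamma }(\lambda +1)}\qquad \text{for large }N,$$

which is the origin of the powers of $`N`$ and of the $`\mathrm{\Gamma }`$ functions appearing in the following two equations.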
Note that $`h_i^{(N)}`$ is calculable in terms of the perturbative coefficients of $`\stackrel{~}{A}(b)`$. From the definition (21) we have
$`m_{ij}^{(N)}=\{\begin{array}{cc}-\frac{N^{\nu _j-\nu _i}}{\mathrm{\Gamma }(\nu _j-\nu _i+1)}\left[1+\sum _{k=1}^{\left[\nu _j-\nu _i\right]}\frac{C_{jk}\mathrm{\Gamma }(\nu _j-\nu _i+1)}{\mathrm{\Gamma }(\nu _j-\nu _i-k+1)}N^{-k}\right],& \text{for }i>j\\ 0,& \text{for }i\le j\end{array}`$ (28)
and from (14), (23)
$`r_i^{(N)}=`$ $`-{\displaystyle \sum _{j=1}^{i-1}}{\displaystyle \sum _{k=[\nu _j-\nu _i]+1}^{\mathrm{\infty }}}{\displaystyle \frac{K_jC_{jk}}{\mathrm{\Gamma }(\nu _j-\nu _i-k+1)}}N^{\nu _j-\nu _i-k}`$ (29)
$`-{\displaystyle \sum _{j=i+1}^{M}}{\displaystyle \frac{K_j}{\mathrm{\Gamma }(\nu _j-\nu _i+1)}}N^{\nu _j-\nu _i}\left[1+{\displaystyle \sum _{k=1}^{\mathrm{\infty }}}{\displaystyle \frac{C_{jk}\mathrm{\Gamma }(\nu _j-\nu _i+1)}{\mathrm{\Gamma }(\nu _j-\nu _i-k+1)}}N^{-k}\right]`$
$`+c_iN^{-\nu _i}(1+O(1/N))`$
where $`c_i`$ denotes a constant. The terms proportional to $`c_i`$ arise from the regular part of the Borel transform and are not generally calculable. As expected $`r_i^{(N)}`$ vanishes in the large $`N`$ limit.
Since $`m_{ij}^{(N)}`$ can be divergent in the large $`N`$ limit, $`\sum _j(1-m^{(N)})_{ij}^{-1}r_j^{(N)}`$ can be nonvanishing in the large $`N`$ limit, and thus we may write part of (24) as
$$\sum _j[1-m^{(N)}]_{ij}^{-1}r_j^{(N)}=\sum _j(\mathrm{\Delta }_1)_{ij}K_j+r_i^1$$
(30)
where we have separated those terms nonvanishing in the large $`N`$ limit from those vanishing, and put the former into $`\sum _j(\mathrm{\Delta }_1)_{ij}K_j`$ and the latter into $`r_i^1`$. Note that the nonvanishing part is linear in $`K_i`$. This is because the terms proportional to $`c_i`$ in (29) give rise to a contribution only of $`O(N^{-\nu _i})`$, thus affecting only $`r_i^1`$, which can be easily seen from the fact that $`(1-m^{(N)})_{ij}^{-1}`$ is at most as divergent as $`m_{ij}^{(N)}`$ in the large $`N`$ limit. Substituting (30) into (24) we obtain
$$K_i=\sum _j[(1-\mathrm{\Delta }_1)^{-1}(1-m^{(N)})^{-1}]_{ij}h_j^{(N)}+\sum _j(1-\mathrm{\Delta }_1)_{ij}^{-1}r_j^1.$$
(31)
We can now repeat this step by
$$\sum _j(1-\mathrm{\Delta }_{m-1})_{ij}^{-1}r_j^{m-1}=\sum _j(\mathrm{\Delta }_m)_{ij}K_j+r_i^m$$
(32)
for a finite number of times (say $`l`$ times) until $`\sum _j(1-\mathrm{\Delta }_l)_{ij}^{-1}r_j^l`$ vanishes in the large $`N`$ limit, to obtain
$$\vec{K}=\underset{N\to \mathrm{\infty }}{lim}\left[(1-\mathrm{\Delta }_l)^{-1}\mathrm{\cdots }(1-\mathrm{\Delta }_1)^{-1}(1-m^{(N)})^{-1}\vec{h}^{(N)}\right].$$
(33)
This is our main result.
Now as an example, let us consider a Borel transform whose UV renormalon singularity is given by (14) with $`M=2`$. Then from (28) and (29)
$$m_{21}^{(N)}=-\frac{N^{\nu _1-\nu _2}}{\mathrm{\Gamma }(\nu _1-\nu _2+1)}(1+O(1/N)),\qquad \text{otherwise }m_{ij}^{(N)}=0$$
(34)
and
$`r_1^{(N)}`$ $`=`$ $`-{\displaystyle \frac{K_2}{\mathrm{\Gamma }(\nu _2-\nu _1+1)}}N^{\nu _2-\nu _1}(1+O(1/N))+c_1N^{-\nu _1}(1+O(1/N))`$
$`r_2^{(N)}`$ $`=`$ $`-{\displaystyle \frac{K_1C_{11}}{\mathrm{\Gamma }(\delta +1)}}N^\delta (1+O(1/N))+c_2N^{-\nu _2}(1+O(1/N))`$ (35)
where $`\delta =\nu _1-\nu _2-[\nu _1-\nu _2]-1<0`$. Using the definition (30) we find
$$\mathrm{\Delta }_1=\left(\begin{array}{cc}0& 0\\ 0& \frac{1}{\mathrm{\Gamma }(\nu _1-\nu _2+1)\mathrm{\Gamma }(\nu _2-\nu _1+1)}\end{array}\right),$$
(36)
and obtain
$$\left(\begin{array}{c}K_1\\ K_2\end{array}\right)=\underset{N\to \mathrm{\infty }}{lim}\left[(1-\mathrm{\Delta }_1)^{-1}(1-m^{(N)})^{-1}\left(\begin{array}{c}h_1^{(N)}\\ h_2^{(N)}\end{array}\right)\right].$$
(37)
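As a toy consistency check of Eq.(37) (a sketch of ours, under simplifying assumptions: the Borel transform below is synthetic, with $`K_1`$, $`K_2`$, $`\nu _1`$, $`\nu _2`$ chosen arbitrarily, no $`C_{ij}`$ corrections and a smooth background, so that $`m_{21}^{(N)}`$ is just the truncated expansion of $`-(1-\beta _0b)^{\nu _2-\nu _1}`$), the estimates should approach the input constants as $`N`$ grows:

```python
import numpy as np
from math import gamma

def pow_coeffs(mu, N):
    """Taylor coefficients c_n of (1 - x)**mu, n = 0..N, with x = beta0*b."""
    c = np.empty(N + 1)
    c[0] = 1.0
    for n in range(N):
        c[n + 1] = c[n] * (n - mu) / (n + 1)
    return c

# synthetic Borel transform with two UV-renormalon-like terms (arbitrary constants)
nu1, nu2 = 1.7, 0.9
K1, K2 = 0.8, -1.3

def borel_coeffs(N):
    reg = 0.3 * 0.5 ** np.arange(N + 1)   # analytic background, singular only at x = 2
    return K1 * pow_coeffs(-nu1, N) + K2 * pow_coeffs(-nu2, N) + reg

def h_trunc(nu, N):
    """Truncated Taylor series of A(x)*(1-x)**nu, summed at the singularity x = 1."""
    return np.convolve(borel_coeffs(N), pow_coeffs(nu, N))[:N + 1].sum()

def m21_trunc(N):
    """Truncated Taylor series of -(1-x)**(nu2-nu1), summed at x = 1."""
    return -pow_coeffs(nu2 - nu1, N).sum()

d = 1.0 / (gamma(nu1 - nu2 + 1) * gamma(nu2 - nu1 + 1))   # nonzero entry of Delta_1
for N in (20, 80, 320, 1280):
    h1, h2 = h_trunc(nu1, N), h_trunc(nu2, N)
    K2_est = (m21_trunc(N) * h1 + h2) / (1.0 - d)
    print(f"N = {N:5d}   K1 estimate {h1:8.4f} (input {K1})   "
          f"K2 estimate {K2_est:8.4f} (input {K2})")
```

The first row of (37) reduces here to $`K_1=h_1^{(N)}`$ and the second to $`K_2=(m_{21}^{(N)}h_1^{(N)}+h_2^{(N)})/(1-(\mathrm{\Delta }_1)_{22})`$, which is what the loop evaluates.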
Now having given the series expression for the normalization constants, a comment is in order. In deriving (33) we have assumed that all $`\nu _i>0`$. As long as only the normalization constants associated with positive $`\nu _i`$ are concerned, (33) can be used without modification. However, if some of the $`\nu _i`$ are negative, we have to consider $`\stackrel{~}{A}^{(p)}(b)`$ instead of $`\stackrel{~}{A}(b)`$, defined as
$`\stackrel{~}{A}^{(p)}(b)`$ $`=`$ $`{\displaystyle \frac{d^p}{db^p}}\stackrel{~}{A}(b)`$ (38)
$`\approx `$ $`{\displaystyle \sum _{i}^{M}}K_i^{\prime }(1-\beta _0b)^{-\stackrel{~}{\nu }_i}\left[1+{\displaystyle \sum _{j=1}^{\mathrm{\infty }}}C_{ij}^{\prime }(1-\beta _0b)^j\right]\qquad \text{for }b\to \frac{1}{\beta _0}`$
such that $`p`$ satisfies $`\stackrel{~}{\nu }_i=\nu _i+p>0`$ for all $`\nu _i`$. Clearly the $`K_i`$ are related linearly to the $`K_i^{\prime }`$, and so are calculable once the $`K_i^{\prime }`$ are known. Now the $`K_i^{\prime }`$ for $`\stackrel{~}{A}^{(p)}(b)`$ can be obtained in a series form using the steps taken in (17)–(33), and are given by (33) with $`\vec{K}\to \vec{K}^{\prime }`$, $`\nu _i\to \stackrel{~}{\nu }_i`$ and $`\stackrel{~}{A}(b)\to \stackrel{~}{A}^{(p)}(b)`$.
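For the leading, most singular part of each term in (38) this linear relation can be written down explicitly (a sketch that ignores the reshuffling of the subleading $`C_{ij}`$ corrections): since

$$\frac{d^p}{db^p}(1-\beta _0b)^{-\nu _i}=\beta _0^p\frac{\mathrm{\Gamma }(\nu _i+p)}{\mathrm{\Gamma }(\nu _i)}(1-\beta _0b)^{-\nu _i-p},$$

one has $`K_i^{\prime }=\beta _0^p\mathrm{\Gamma }(\nu _i+p)K_i/\mathrm{\Gamma }(\nu _i)`$ at this level of approximation, so that the $`K_i^{\prime }`$ indeed determine the $`K_i`$.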
In conclusion, we have shown that the normalization constants of the large order behavior caused by an UV renormalon can be expressed, as in the case of an IR renormalon, in a calculable, convergent series, and thus can be computed to an arbitrary precision using the ordinary weak coupling expansion. Considering that the calculation of the normalization constants is equivalent to summing all sorts of higher order renormalon diagrams, it is surprising that they can be computed from the usual perturbation expansion.
Acknowledgements: This work was supported in part by the Korean Science and Engineering Foundation (KOSEF).